id | url | title | text
---|---|---|---|
4222171 | https://en.wikipedia.org/wiki/MacArthur%20Fellows%20Program | MacArthur Fellows Program | The MacArthur Fellows Program, also known as the MacArthur Fellowship and commonly but unofficially known as the "Genius Grant", is a prize awarded annually by the John D. and Catherine T. MacArthur Foundation typically to between 20 and 30 individuals, working in any field, who have shown "extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction" and are citizens or residents of the United States.
According to the foundation's website, "the fellowship is not a reward for past accomplishment, but rather an investment in a person's originality, insight, and potential". The current prize is $625,000 paid over five years in quarterly installments. This figure was increased from $500,000 in 2013 with the release of a review of the MacArthur Fellows Program. Since 1981, 1086 people have been named MacArthur Fellows, ranging in age from 18 to 82. The award has been called "one of the most significant awards that is truly 'no strings attached'".
The program does not accept applications. Anonymous and confidential nominations are invited by the foundation and reviewed by an anonymous and confidential selection committee of about a dozen people. The committee reviews all nominees and recommends recipients to the president and board of directors. Most new fellows first learn of their nomination and award upon receiving a congratulatory phone call. MacArthur Fellow Jim Collins described this experience in an editorial column of The New York Times.
Cecilia Conrad is the managing director leading the MacArthur Fellows Program.
Recipients
1981
A. R. Ammons, poet
Joseph Brodsky, poet
John Cairns, molecular biologist
Gregory V. Chudnovsky, mathematician
Joel E. Cohen, population biologist
Robert Coles, child psychiatrist
Richard Critchfield, essayist
Shelly Errington, cultural anthropologist
Howard Gardner, psychologist
Henry Louis Gates Jr., literary critic
John Gaventa, sociologist
Michael Ghiselin, evolutionary biologist
Stephen Jay Gould, paleontologist
Ian Graham, archaeologist
David Hawkins, philosopher
John P. Holdren, arms control and energy analyst
Ada Louise Huxtable, architectural critic and historian
John Imbrie, climatologist
Robert Kates, geographer
Raphael Carl Lee, surgeon
Elma Lewis, arts educator
Cormac McCarthy, writer
Barbara McClintock, geneticist
James Alan McPherson, short story writer and essayist
Roy P. Mottahedeh, historian
Richard C. Mulligan, molecular biologist
Douglas D. Osheroff, physicist
Elaine H. Pagels, historian of religion
David Pingree, historian of science
Paul G. Richards, seismologist
Robert Root-Bernstein, biologist and historian of science
Richard Rorty, philosopher
Lawrence Rosen, attorney and anthropologist
Carl Emil Schorske, intellectual historian
Leslie Marmon Silko, writer
Joseph Hooton Taylor Jr., astrophysicist
Derek Walcott, poet and playwright
Robert Penn Warren, poet, novelist, and literary critic
Stephen Wolfram, computer scientist and physicist
Michael Woodford, economist
George Zweig, physicist and neurobiologist
1982
Fouad Ajami, political scientist
Charles A. Bigelow, type designer
Peter Robert Lamont Brown, historian
Robert Darnton, European historian
Persi Diaconis, statistician
William Gaddis, novelist
Ved Mehta, writer
Bob Moses, educator and philosopher
Richard A. Muller, geologist and astrophysicist
Conlon Nancarrow, composer
Alfonso Ortiz, cultural anthropologist
Francesca Rochberg, Assyriologist and historian of science
Charles Sabel, political scientist and legal scholar
Ralph Shapey, composer and conductor
Michael Silverstein, linguist
Randolph Whitfield Jr., ophthalmologist
Frank Wilczek, physicist
Frederick Wiseman, documentary filmmaker
Edward Witten, physicist, originator of M-theory
1983
R. Stephen Berry, physical chemist
Seweryn Bialer, political scientist
William C. Clark, ecologist and environmental policy analyst
Philip D. Curtin, historian of Africa
William H. Durham, biological anthropologist
Bradley Efron, statistician
David L. Felten, neuroscientist
Randall W. Forsberg, political scientist and arms control strategist
Alexander L. George, political scientist
Shelomo Dov Goitein, medieval historian
Mott T. Greene, historian of science
James E. Gunn, astronomer
Ramón A. Gutiérrez, historian
John J. Hopfield, physicist and biologist
Béla Julesz, psychologist
William Kennedy, novelist
Leszek Kołakowski, historian of philosophy and religion
Sylvia A. Law, human rights lawyer
Brad Leithauser, poet and writer
Lawrence W. Levine, historian
Ralph Manheim, translator
Robert K. Merton, historian and sociologist of science
Walter F. Morris Jr., cultural preservationist
Charles S. Peskin, mathematician and physiologist
A.K. Ramanujan, poet, translator, and literary scholar
Alice M. Rivlin, economist and policy analyst
Julia Robinson, mathematician
John Sayles, filmmaker and writer
Richard M. Schoen, mathematician
Peter Sellars, theater and opera director
Karen K. Uhlenbeck, mathematician
Adrian Wilson, book designer, printer, and book historian
Irene J. Winter, art historian and archaeologist
Mark S. Wrighton, chemist
1984
George W. Archibald, ornithologist
Shelly Bernstein, pediatric hematologist
Peter J. Bickel, statistician
Ernesto J. Cortes Jr., community organizer
William Drayton, public service innovator
Sidney Drell, physicist and arms policy analyst
Mitchell J. Feigenbaum, mathematical physicist
Michael H. Freedman, mathematician
Curtis G. Hames, family physician
Robert Hass, poet, critic, and translator
Shirley Heath, linguistic anthropologist
J. Bryan Hehir, religion and foreign policy scholar
Bette Howland, writer and literary critic
Bill Irwin, clown, writer, and performance artist
Robert Irwin, light and space artist
Ruth Prawer Jhabvala, novelist and screenwriter
Fritz John, mathematician
Galway Kinnell, poet
Henry Kraus, labor and art historian
Paul Oskar Kristeller, intellectual historian and philosopher
Sara Lawrence-Lightfoot, educator
Heather Lechtman, materials scientist and archaeologist
Michael Lerner, public health leader
Andrew W. Lewis, medieval historian
Arnold J. Mandell, neuroscientist and psychiatrist
Peter Mathews, archaeologist and epigrapher
Matthew Meselson, geneticist and arms control analyst
David R. Nelson, physicist
Beaumont Newhall, historian of photography
Roger S. Payne, zoologist and conservationist
Michael Piore, economist
Edward V. Roberts, disability rights leader
Judith N. Shklar, political philosopher
Charles Simic, poet, translator, and essayist
Elliot Sperling, Tibetan studies scholar
David Stuart, linguist and epigrapher
Frank Sulloway, psychologist (birth-order research)
John E. Toews, intellectual historian
Alar Toomre, astronomer and mathematician
James Turrell, light sculptor
Amos Tversky, cognitive scientist
Bret Wallach, geographer
Jay Weiss, psychologist
Arthur Winfree, physiologist and mathematician
J. Kirk Varnedoe, art historian
Carl R. Woese, molecular biologist
Billie Young, community development leader
1985
Joan Abrahamson, community development leader
John Ashbery, poet
John F. Benton, medieval historian
Harold Bloom, literary critic
Valery Chalidze, physicist and human rights organizer
William Cronon, environmental historian
Merce Cunningham, choreographer
Jared Diamond, environmental historian and geographer
Marian Wright Edelman, Children's Defense Fund founder
Morton Halperin, political scientist
Robert M. Hayes, lawyer and human rights leader
Edwin Hutchins, cognitive scientist
Sam Maloof, professional woodworker and furniture maker
Andrew McGuire, trauma prevention specialist
Patrick Noonan, conservationist
George Oster, mathematical biologist
Thomas G. Palaima, classicist
Peter Raven, botanist
Jane S. Richardson, biochemist
Gregory Schopen, historian of religion
Franklin Stahl, geneticist
J. Richard Steffy, nautical archaeologist
Ellen Stewart, theater director
Paul Taylor, choreographer, dance company founder
Shing-Tung Yau, mathematician
1986
Paul Adams, neurobiologist
Milton Babbitt, composer and music theorist
Christopher Beckwith, philologist
Richard Benson, photographer
Lester R. Brown, agricultural economist
Caroline Bynum, medieval historian
William A. Christian, historian of religion
Nancy Farriss, historian
Benedict Gross, mathematician
Daryl Hine, poet and translator
John Robert Horner, paleobiologist
Thomas C. Joe, social policy analyst
David Keightley, historian and sinologist
Albert J. Libchaber, physicist
David C. Page, molecular geneticist
George Perle, composer and music theorist
James Randi, magician
David Rudovsky, civil rights lawyer
Robert Shapley, neurophysiologist
Leo Steinberg, art historian
Richard P. Turco, atmospheric scientist
Thomas Whiteside, journalist
Allan C. Wilson, biochemist
Jay Wright, poet and playwright
Charles Wuorinen, composer
1987
Walter Abish, writer
Robert Axelrod, political scientist
Robert F. Coleman, mathematician
Douglas Crase, poet
Daniel Friedan, physicist
David Gross, physicist
Ira Herskowitz, molecular geneticist
Irving Howe, literary and social critic
Wesley Charles Jacobs Jr., rural planner
Peter Jeffery, musicologist
Horace Freeland Judson, historian of science
Stuart Alan Kauffman, evolutionary biologist
Richard Kenney, poet
Eric Lander, geneticist and mathematician
Michael Malin, geologist and planetary scientist
Deborah W. Meier, education reform leader
Arnaldo Dante Momigliano, historian
David Mumford, mathematician
Tina Rosenberg, journalist
David Rumelhart, cognitive scientist and psychologist
Robert Morris Sapolsky, neuroendocrinologist and primatologist
Meyer Schapiro, art historian
John H. Schwarz, physicist
Jon Seger, evolutionary ecologist
Stephen Shenker, physicist
David Dean Shulman, historian of religion
Muriel S. Snowden, community organizer
Mark Strand, poet and writer
May Swenson, poet
Huỳnh Sanh Thông, translator and editor
William Julius Wilson, sociologist
Richard Wrangham, primate ethologist
1988
Charles Archambeau, geophysicist
Michael Baxandall, art historian
Ruth Behar, cultural anthropologist
Ran Blake, composer and pianist
Charles Burnett, filmmaker
Philip James DeVries, insect biologist
Andre Dubus, writer
Helen T. Edwards, physicist
Jon H. Else, documentary filmmaker
John G. Fleagle, primatologist and paleontologist
Cornell H. Fleischer, Middle Eastern historian
Getatchew Haile, philologist and linguist
Raymond Jeanloz, geophysicist
Marvin Philip Kahl, zoologist
Naomi Pierce, biologist
Thomas Pynchon, novelist
Stephen J. Pyne, environmental historian
Max Roach, drummer and jazz composer
Hipolito (Paul) Roldan, community developer
Anna Curtenius Roosevelt, archaeologist
David Alan Rosenberg, military historian
Susan Irene Rotroff, archaeologist
Bruce Schwartz, figurative sculptor and puppeteer
Robert Shaw, physicist
Jonathan Spence, historian
Noel M. Swerdlow, historian of science
Gary A. Tomlinson, musicologist
Alan Walker, paleontologist
Eddie N. Williams, policy analyst and civil rights leader
Rita P. Wright, archaeologist
Garth Youngberg, agriculturalist
1989
Anthony Amsterdam, attorney and legal scholar
Byllye Avery, women's healthcare leader
Alvin Bronstein, human rights lawyer
Leo Buss, evolutionary biologist
Jay Cantor, writer
George Davis, environmental policy analyst
Allen Grossman, poet
John Harbison, composer and conductor
Keith Hefner, journalist and educator
Ralf Hotchkiss, rehabilitation engineer
John Rice Irwin, curator and cultural preservationist
Daniel Janzen, ecologist
Bernice Johnson Reagon, music historian, composer, and vocalist
Aaron Lansky, cultural preservationist
Jennifer Moody, archaeologist and anthropologist
Errol Morris, filmmaker
Vivian Paley, educator and writer
Richard Powers, novelist
Martin Puryear, sculptor
Theodore Rosengarten, historian
Margaret W. Rossiter, historian of science
George Russell, composer and music theorist
Pam Solo, arms control analyst
Ellendea Proffer Teasley, translator and publisher
Claire Van Vliet, book artist
Baldemar Velasquez, farm labor leader
Bill Viola, video artist
Eliot Wigginton, educator
Patricia Wright, primatologist
1990
John Christian Bailar, biostatistician
Martha Clarke, theater director
Jacques d'Amboise, dance educator
Guy Davenport, writer, critic, and translator
Lisa Delpit, education reform leader
John Eaton, composer
Paul R. Ehrlich, population biologist
Charlotte Erickson, historian
Lee Friedlander, photographer
Margaret Geller, astrophysicist
Jorie Graham, poet
Patricia Hampl, writer
John Hollander, poet and literary critic
Thomas Cleveland Holt, social and cultural historian
David Kazhdan, mathematician
Calvin King, land and farm development specialist
M. A. R. Koehl, marine biologist
Nancy Kopell, mathematician
Michael Moschen, performance artist
Gary Nabhan, ethnobotanist
Sherry Ortner, anthropologist
Otis Pitts, community development leader
Yvonne Rainer, filmmaker and choreographer
Michael Schudson, sociologist
Rebecca J. Scott, historian
Marc Shell, scholar
Susan Sontag, writer and cultural critic
Richard Stallman, Free Software Foundation founder, copyleft concept inventor
Guy Tudor, conservationist
Maria Varela, community development leader
Gregory Vlastos, classicist and philosopher
Kent Whealy, preservationist
Eric Wolf, anthropologist
Sidney Wolfe, physician
Robert Woodson, community development leader
José Zalaquett, human rights lawyer
1991
Jacqueline Barton, biophysical chemist
Paul Berman, journalist
James Blinn, computer animator
Taylor Branch, social historian
Trisha Brown, choreographer
Mari Jo Buhle, American historian
Patricia Churchland, (neuro)philosopher
David Donoho, statistician
Steven Feld, anthropologist
Alice Fulton, poet
Guillermo Gómez-Peña, writer and artist
Jerzy Grotowski, theater director
David Hammons, artist
Sophia Bracy Harris, child care leader
Lewis Hyde, writer
Ali Akbar Khan, musician
Sergiu Klainerman, mathematician
Martin Kreitman, geneticist
Harlan Lane, psychologist and linguist
William Linder, community development leader
Patricia Locke, tribal rights leader
Mark Morris, choreographer and dancer
Marcel Ophüls, documentary filmmaker
Arnold Rampersad, biographer and literary critic
Gunther Schuller, composer, conductor, jazz historian
Joel Schwartz, epidemiologist
Cecil Taylor, jazz pianist and composer
Julie Taymor, theater director
David Werner, health care leader
James Westphal, engineer and scientist
Eleanor Wilner, poet
1992
Janet Benshoof, human rights lawyer
Robert Blackburn, printmaker
Unita Blackwell, civil rights leader
Lorna Bourg, rural development leader
Stanley Cavell, philosopher
Amy Clampitt, poet
Ingrid Daubechies, mathematician
Wendy Ewald, photographer
Irving Feldman, poet
Barbara Fields, historian
Robert Hall, journalist
Ann Ellis Hanson, historian
John Henry Holland, computer scientist
Wes Jackson, agronomist
Evelyn Keller, historian and philosopher of science
Steve Lacy, saxophonist and composer
Suzanne Lebsock, social historian
Sharon Long, plant biologist
Norman Manea, writer
Paule Marshall, writer
Michael Massing, journalist
Robert McCabe, educator
Susan Meiselas, photojournalist
Amalia Mesa-Bains, artist and cultural critic
Stephen Schneider, climatologist
Joanna Scott, writer
John T. Scott, artist
John Terborgh, conservation biologist
Twyla Tharp, dancer and choreographer
Philip Treisman, mathematics educator
Laurel Thatcher Ulrich, historian
Geerat J. Vermeij, evolutionary biologist
Günter Wagner, developmental biologist
1993
Nancy Cartwright, philosopher
Demetrios Christodoulou, mathematician and physicist
Maria Crawford, geologist
Stanley Crouch, jazz critic and writer
Nora England, anthropological linguist
Paul Farmer, medical anthropologist
Victoria Foe, developmental biologist
Ernest Gaines, writer
Pedro Greer, physician
Thom Gunn, poet and literary critic
Ann Hamilton, artist
Sokoni Karanja, child and family development specialist
Ann Lauterbach, poet and literary critic
Stephen Lee, chemist
Carol Levine, AIDS policy specialist
Amory Lovins, physicist and energy analyst
Jane Lubchenco, marine biologist
Ruth Lubic, nurse and midwife
Jim Powell, poet, translator, and literary critic
Margie Profet, evolutionary biologist
Thomas Scanlon, philosopher
Aaron Shirley, health care leader
William Siemering, journalist and radio producer
Ellen Silbergeld, toxicologist
Leonard van der Kuijp, philologist and historian
Frank von Hippel, arms control and energy analyst
John Edgar Wideman, writer
Heather Williams, biologist and ornithologist
Marion Williams, gospel music performer
Robert H. Williams, physicist and energy analyst
Henry T. Wright, archaeologist and anthropologist
1994
Robert Adams, photographer
Jeraldyne Blunden, choreographer
Anthony Braxton, avant-garde composer and musician
Rogers Brubaker, sociologist
Ornette Coleman, jazz performer and composer
Israel Gelfand, mathematician
Faye Ginsburg, anthropologist
Heidi Hartmann, economist
Bill T. Jones, dancer and choreographer
Peter E. Kenmore, agricultural entomologist
Joseph E. Marshall, educator
Carolyn McKecuen, economic development leader
Donella Meadows, writer
Arthur Mitchell, company director and choreographer
Hugo Morales, radio producer
Janine Pease, educator
Willie Reale, theater arts educator
Adrienne Rich, poet and writer
Sam-Ang Sam, musician and cultural preservationist
Jack Wisdom, physicist
1995
Allison Anders, filmmaker
Jed Z. Buchwald, historian
Octavia E. Butler, science fiction novelist
Sandra Cisneros, writer and poet
Sandy Close, journalist
Frederick C. Cuny, disaster relief specialist
Sharon Emerson, biologist
Richard Foreman, theater director
Alma Guillermoprieto, journalist
Virginia Hamilton, writer
Donald Hopkins, physician
Susan W. Kieffer, geologist
Elizabeth LeCompte, theater director
Patricia Nelson Limerick, historian
Michael Marletta, chemist
Pamela Matson, ecologist
Susan McClary, musicologist
Meredith Monk, vocalist, composer, director
Rosalind P. Petchesky, political scientist
Joel Rogers, political scientist
Cindy Sherman, photographer
Bryan Stevenson, human rights lawyer
Nicholas Strausfeld, neurobiologist
Richard White, historian
1996
James Roger Prior Angel, astronomer
Joaquin Avila, voting rights advocate
Allan Bérubé, historian
Barbara Block, marine biologist
Joan Breton Connelly, classical archaeologist
Thomas Daniel, biologist
Martin Daniel Eakes, economic development strategist
Rebecca Goldstein, writer
Robert Greenstein, public policy analyst
Richard Howard, poet, translator, and literary critic
John Jesurun, playwright
Richard Lenski, biologist
Louis Massiah, documentary filmmaker
Vonnie McLoyd, developmental psychologist
Thylias Moss, poet and writer
Eiko Otake and Koma Otake, dancers, choreographers
Nathan Seiberg, physicist
Anna Deavere Smith, playwright, journalist, actress
Dorothy Stoneman, educator
Bill Strickland, art educator
1997
Luis Alfaro, writer and performance artist
Lee Breuer, playwright
Vija Celmins, artist
Eric Charnov, evolutionary biologist
Elouise P. Cobell, banker
Peter Galison, historian
Mark Harrington, AIDS researcher
Eva Harris, molecular biologist
Michael Kremer, economist
Russell Lande, biologist
Kerry James Marshall, artist
Nancy A. Moran, evolutionary biologist and ecologist
Han Ong, playwright
Kathleen Ross, educator
Pamela Samuelson, copyright scholar and activist
Susan Stewart, literary scholar and poet
Elizabeth Streb, dancer and choreographer
Trimpin, sound sculptor
Loïc Wacquant, sociologist
Kara Walker, artist
David Foster Wallace, author and journalist
Andrew Wiles, mathematician
Brackette Williams, anthropologist
1998
Janine Antoni, artist
Ida Applebroog, artist
Ellen Barry, attorney and human rights activist
Tim Berners-Lee, inventor of the World Wide Web
Linda Bierds, poet
Bernadette Brooten, historian
John Carlstrom, astrophysicist
Mike Davis, historian
Nancy Folbre, economist
Avner Greif, economist
Kun-Liang Guan, biochemist
Gary Hill, artist
Edward Hirsch, poet, essayist
Ayesha Jalal, historian
Charles R. Johnson, writer
Leah Krubitzer, neuroscientist
Stewart Kwoh, human rights activist
Charles Lewis, journalist
William W. McDonald, rancher and conservationist
Peter N. Miller, historian
Don Mitchell, cultural geographer
Rebecca Nelson, plant pathologist
Elinor Ochs, linguistic anthropologist
Ishmael Reed, poet, essayist, novelist
Benjamin D. Santer, atmospheric scientist
Karl Sims, computer scientist and artist
Dorothy Thomas, human rights activist
Leonard Zeskind, human rights activist
Mary Zimmerman, playwright
1999
Jillian Banfield, geologist
Carolyn Bertozzi, chemist
Xu Bing, artist and printmaker
Bruce G. Blair, policy analyst
John Bonifaz, election lawyer and voting rights leader
Shawn Carlson, science educator
Mark Danner, journalist
Alison L. Des Forges, human rights activist
Elizabeth Diller, architect
Saul Friedländer, historian
Jennifer Gordon, lawyer
David Hillis, biologist
Sara Horowitz, lawyer
Jacqueline Jones, historian
Laura L. Kiessling, biochemist
Leslie Kurke, classicist
David Levering Lewis, biographer and historian
Juan Maldacena, physicist
Gay J. McDougall, human rights lawyer
Campbell McGrath, poet
Denny Moore, anthropological linguist
Elizabeth Murray, artist
Pepón Osorio, artist
Ricardo Scofidio, architect
Peter Shor, computer scientist
Eva Silverstein, physicist
Wilma Subra, scientist
Ken Vandermark, saxophonist, composer
Naomi Wallace, playwright
Jeffrey Weeks, mathematician
Fred Wilson, artist
Ofelia Zepeda, linguist
2000
Susan E. Alcock, archaeologist
K. Christopher Beard, paleontologist
Lucy Blake, conservationist
Anne Carson, poet
Peter J. Hayes, energy policy activist
David Isay, radio producer
Alfredo Jaar, photographer
Ben Katchor, graphic novelist
Hideo Mabuchi, physicist
Susan Marshall, choreographer
Samuel Mockbee, architect
Cecilia Muñoz, civil rights policy analyst
Margaret Murnane, optical physicist
Laura Otis, literary scholar and historian of science
Lucia M. Perillo, poet
Matthew Rabin, economist
Carl Safina, marine conservationist
Daniel P. Schrag, geochemist
Susan E. Sygall, civil rights leader
Gina G. Turrigiano, neuroscientist
Gary Urton, anthropologist
Patricia J. Williams, legal scholar
Deborah Willis, historian of photography and photographer
Erik Winfree, computer and materials scientist
Horng-Tzer Yau, mathematician
2001
Andrea Barrett, writer
Christopher Chyba, astrobiologist
Michael Dickinson, fly biologist, bioengineer
Rosanne Haggerty, housing and community development leader
Lene Hau, physicist
Dave Hickey, art critic
Stephen Hough, pianist and composer
Kay Redfield Jamison, psychologist
Sandra Lanham, pilot and conservationist
Iñigo Manglano-Ovalle, artist
Cynthia Moss, natural historian
Aihwa Ong, anthropologist
Dirk Obbink, classicist and papyrologist
Norman R. Pace, biochemist
Suzan-Lori Parks, playwright
Brooks Pate, physical chemist
Xiao Qiang, human rights leader
Geraldine Seydoux, molecular biologist
Bright Sheng, composer
David Spergel, astrophysicist
Jean Strouse, biographer
Julie Su, human rights lawyer
David Wilson, museum founder
2002
Danielle Allen, classicist and political scientist
Bonnie Bassler, molecular biologist
Ann M. Blair, intellectual historian
Katherine Boo, journalist
Paul Ginsparg, physicist
David B. Goldstein, energy conservation specialist
Karen Hesse, writer
Janine Jagger, epidemiologist
Daniel Jurafsky, computer scientist and linguist
Toba Khedoori, artist
Liz Lerman, choreographer
George E. Lewis, trombonist
Liza Lou, artist
Edgar Meyer, bassist and composer
Jack Miles, writer and Biblical scholar
Erik Mueggler, anthropologist and ethnographer
Sendhil Mullainathan, economist
Stanley Nelson, documentary filmmaker
Lee Ann Newsom, paleoethnobotanist
Daniela L. Rus, computer scientist
Charles C. Steidel, astronomer
Brian Tucker, seismologist
Camilo José Vergara, photographer
Paul Wennberg, atmospheric chemist
Colson Whitehead, writer
2003
Guillermo Algaze, archaeologist
Jim Collins, biomedical engineer
Lydia Davis, writer and translator
Erik Demaine, theoretical computer scientist
Corinne Dufka, human rights researcher
Peter Gleick, conservation analyst
Osvaldo Golijov, composer
Deborah Jin, physicist
Angela Johnson, writer
Tom Joyce, blacksmith
Sarah H. Kagan, gerontological nurse
Ned Kahn, artist and science exhibit designer
Jim Yong Kim, public health physician
Nawal M. Nour, obstetrician and gynecologist
Loren H. Rieseberg, botanist
Amy Rosenzweig, biochemist
Pedro A. Sanchez, agronomist
Lateefah Simon, women's development leader
Peter Sís, illustrator
Sarah Sze, sculptor
Eve Troutt Powell, historian
Anders Winroth, historian
Daisy Youngblood, ceramic artist
Xiaowei Zhuang, biophysicist
2004
Angela Belcher, materials scientist and engineer
Gretchen Berland, physician and filmmaker
James Carpenter, artist
Joseph DeRisi, biologist
Katherine Gottlieb, health care leader
David Green, technology transfer innovator
Aleksandar Hemon, writer
Heather Hurst, archaeological illustrator
Edward P. Jones, writer
John Kamm, human rights activist
Daphne Koller, computer scientist
Naomi Leonard, engineer
Tommie Lindsey, school debate coach
Rueben Martinez, businessman and activist
Maria Mavroudi, historian
Vamsi Mootha, physician and computational biologist
Judy Pfaff, sculptor
Aminah Robinson, artist
Reginald Robinson, pianist and composer
Cheryl Rogowski, farmer
Amy Smith, inventor and mechanical engineer
Julie Theriot, microbiologist
C. D. Wright, poet
2005
Marin Alsop, symphony conductor
Ted Ames, fisherman, conservationist, marine biologist
Terry Belanger, rare book preservationist
Edet Belzberg, documentary filmmaker
Majora Carter, urban revitalization strategist
Lu Chen, neuroscientist
Michael Cohen, pharmacist
Joseph Curtin, violinmaker
Aaron Dworkin, music educator
Teresita Fernández, sculptor
Claire Gmachl, quantum cascade laser engineer
Sue Goldie, physician and researcher
Steven Goodman, conservation biologist
Pehr Harbury, biochemist
Nicole King, molecular biologist
Jon Kleinberg, computer scientist
Jonathan Lethem, novelist
Michael Manga, geophysicist
Todd Martinez, theoretical chemist
Julie Mehretu, painter
Kevin M. Murphy, economist
Olufunmilayo Olopade, clinician and researcher
Fazal Sheikh, photographer
Emily Thompson, aural historian
Michael Walsh, vehicle emissions specialist
2006
David Carroll, naturalist author and illustrator
Regina Carter, jazz violinist
Kenneth C. Catania, neurobiologist
Lisa Curran, tropical forester
Kevin Eggan, biologist
Jim Fruchterman, technologist, CEO of Benetech
Atul Gawande, surgeon and author
Linda Griffith, bioengineer
Victoria Hale, CEO of OneWorld Health
Adrian Nicole LeBlanc, journalist and author
David Macaulay, author and illustrator
Josiah McElheny, sculptor
D. Holmes Morton, physician
John A. Rich, physician
Jennifer Richeson, social psychologist
Sarah Ruhl, playwright
George Saunders, short story writer
Anna Schuleit, commemorative artist
Shahzia Sikander, painter
Terence Tao, mathematician
Claire J. Tomlin, aviation engineer
Luis von Ahn, computer scientist
Edith Widder, deep-sea explorer
Matias Zaldarriaga, cosmologist
John Zorn, composer and musician
2007
Deborah Bial, education strategist
Peter Cole, translator, poet, publisher
Lisa Cooper, public health physician
Ruth DeFries, environmental geographer
Mercedes Doretti, forensic anthropologist
Stuart Dybek, short story writer
Marc Edwards, water quality engineer
Michael Elowitz, molecular biologist
Saul Griffith, inventor
Sven Haakanson, Alutiiq curator, anthropologist, preservationist
Corey Harris, blues musician
Cheryl Hayashi, spider silk biologist
My Hang V. Huynh, chemist
Claire Kremen, conservation biologist
Whitfield Lovell, painter and installation artist
Yoky Matsuoka, neuroroboticist
Lynn Nottage, playwright
Mark Roth, biomedical scientist
Paul Rothemund, nanotechnologist
Jay Rubenstein, medieval historian
Jonathan Shay, clinical psychiatrist and classicist
Joan Snyder, painter
Dawn Upshaw, vocalist
Shen Wei, choreographer
2008
Chimamanda Ngozi Adichie, novelist
Will Allen, urban farmer
Regina Benjamin, rural family doctor
Kirsten Bomblies, evolutionary plant geneticist
Tara Donovan, artist
Andrea Ghez, astrophysicist
Stephen D. Houston, anthropologist
Mary Jackson, weaver and sculptor
Leila Josefowicz, violinist
Alexei Kitaev, physicist
Walter Kitundu, instrument maker and composer
Susan Mango, developmental biologist
Diane E. Meier, geriatrician
David R. Montgomery, geomorphologist
John Ochsendorf, engineer and architectural historian
Peter Pronovost, critical care physician
Adam Riess, astrophysicist
Alex Ross, music critic
Wafaa El-Sadr, infectious disease specialist
Nancy Siraisi, historian of medicine
Marin Soljačić, optical physicist
Sally Temple, neuroscientist
Jennifer Tipton, stage lighting designer
Rachel Wilson, experimental neurobiologist
Miguel Zenón, saxophonist and composer
2009
Lynsey Addario, photojournalist
Maneesh Agrawala, computer vision technologist
Timothy Barrett, papermaker
Mark Bradford, mixed media artist
Edwidge Danticat, novelist
Rackstraw Downes, painter
Esther Duflo, economist
Deborah Eisenberg, short story writer
Lin He, molecular biologist
Peter Huybers, climate scientist
James Longley, filmmaker
L. Mahadevan, applied mathematician
Heather McHugh, poet
Jerry Mitchell, investigative reporter
Rebecca Onie, health services innovator
Richard Prum, ornithologist
John A. Rogers, applied physicist
Elyn Saks, mental health lawyer
Jill Seaman, infectious disease physician
Beth Shapiro, evolutionary biologist
Daniel Sigman, biogeochemist
Mary Tinetti, geriatric physician
Camille Utterback, digital artist
Theodore Zoli, bridge engineer
2010
Amir Abo-Shaeer, physics teacher
Jessie Little Doe Baird, Wampanoag language preservation and revival
Kelly Benoit-Bird, marine biologist
Nicholas Benson, stone carver
Drew Berry, biomedical animator
Carlos D. Bustamante, population geneticist
Matthew Carter, type designer
David Cromer, theater director and actor
John Dabiri, biophysicist
Shannon Lee Dawdy, anthropologist
Annette Gordon-Reed, American historian
Yiyun Li, fiction writer
Michal Lipson, optical physicist
Nergis Mavalvala, quantum astrophysicist
Jason Moran, jazz pianist and composer
Carol Padden, sign language linguist
Jorge Pardo, installation artist
Sebastian Ruth, violist, violinist, and music educator
Emmanuel Saez, economist
David Simon, author, screenwriter, and producer
Dawn Song, computer security specialist
Marla Spivak, entomologist
Elizabeth Turk, sculptor
2011
Jad Abumrad, radio host and producer
Marie-Therese Connolly, elder rights lawyer
Roland Fryer, economist
Jeanne Gang, architect
Elodie Ghedin, parasitologist and virologist
Markus Greiner, condensed matter physicist
Kevin Guskiewicz, sports medicine researcher
Peter Hessler, long-form journalist
Tiya Miles, public historian
Matthew Nock, clinical psychologist
Francisco Núñez, choral conductor and composer
Sarah Otto, evolutionary geneticist
Shwetak Patel, sensor technologist and computer scientist
Dafnis Prieto, jazz percussionist and composer
Kay Ryan, poet
Melanie Sanford, organometallic chemist
William Seeley, neuropathologist
Jacob Soll, European historian
A. E. Stallings, poet and translator
Ubaldo Vitali, conservator and silversmith
Alisa Weilerstein, cellist
Yukiko Yamashita, developmental biologist
2012
Natalia Almada, documentary filmmaker
Uta Barth, photographer
Claire Chase, arts entrepreneur and flautist
Raj Chetty, economist
Maria Chudnovsky, mathematician
Eric Coleman, geriatrician
Junot Díaz, fiction writer
David Finkel, journalist
Olivier Guyon, optical physicist and astronomer
Elissa Hallem, neurobiologist
An-My Lê, photographer
Sarkis Mazmanian, medical microbiologist
Dinaw Mengestu, writer
Maurice Lim Miller, social services innovator
Dylan C. Penningroth, historian
Terry Plank, geochemist
Laura Poitras, documentary filmmaker
Nancy Rabalais, marine ecologist
Benoît Rolland, stringed-instrument bow maker
Daniel Spielman, computer scientist
Melody Swartz, bioengineer
Chris Thile, mandolinist and composer
Benjamin Warf, neurosurgeon
2013
Kyle Abraham, choreographer and dancer
Donald Antrim, writer
Phil Baran, organic chemist
C. Kevin Boyce, paleobotanist
Jeffrey Brenner, primary care physician
Colin Camerer, behavioral economist
Jeremy Denk, pianist and writer
Angela Duckworth, research psychologist
Craig Fennie, materials scientist
Robin Fleming, medieval historian
Carl Haber, audio preservationist
Vijay Iyer, jazz pianist and composer
Dina Katabi, computer scientist
Julie Livingston, public health historian and anthropologist
David Lobell, agricultural ecologist
Tarell Alvin McCraney, playwright
Susan Murphy, statistician
Sheila Nirenberg, neuroscientist
Alexei Ratmansky, choreographer
Ana Maria Rey, atomic physicist
Karen Russell, fiction writer
Sara Seager, astrophysicist
Margaret Stock, immigration lawyer
Carrie Mae Weems, photographer and video artist
2014
Danielle Bassett, physicist
Alison Bechdel, cartoonist and graphic memoirist
Mary L. Bonauto, civil rights lawyer
Tami Bond, environmental engineer
Steve Coleman, jazz composer and saxophonist
Sarah Deer, legal scholar and advocate
Jennifer Eberhardt, social psychologist
Craig Gentry, computer scientist
Terrance Hayes, poet
John Henneberger, housing advocate
Mark Hersam, materials scientist
Samuel D. Hunter, playwright
Pamela O. Long, historian of science and technology
Rick Lowe, public artist
Jacob Lurie, mathematician
Khaled Mattawa, translator and poet
Joshua Oppenheimer, documentary filmmaker
Ai-jen Poo, labor organizer
Jonathan Rapping, criminal lawyer
Tara Zahra, historian of modern Europe
Yitang Zhang, mathematician
2015
Patrick Awuah, education entrepreneur
Kartik Chandran, environmental engineer
Ta-Nehisi Coates, journalist and memoirist
Gary Cohen, environmental health advocate
Matthew Desmond, sociologist
William Dichtel, chemist
Michelle Dorrance, tap dancer and choreographer
Nicole Eisenman, painter
LaToya Ruby Frazier, photographer and video artist
Ben Lerner, writer
Mimi Lien, set designer
Lin-Manuel Miranda, playwright, songwriter, and performer
Dimitri Nakassis, classicist
John Novembre, computational biologist
Christopher Ré, computer scientist
Marina Rustow, historian
Juan Salgado, Chicago-based community leader
Beth Stevens, neuroscientist
Lorenz Studer, stem-cell biologist
Alex Truesdell, designer
Basil Twist, puppeteer
Ellen Bryant Voigt, poet
Heidi Williams, economist
Peidong Yang, inorganic chemist
2016
Ahilan Arulanantham, human rights lawyer
Daryl Baldwin, linguist and cultural preservationist
Anne Basting, theater artist and educator
Vincent Fecteau, sculptor
Branden Jacobs-Jenkins, playwright
Kellie Jones, art historian and curator
Subhash Khot, theoretical computer scientist
Josh Kun, cultural historian
Maggie Nelson, writer
Dianne Newman, microbiologist
Victoria Orphan, geobiologist
Manu Prakash, physical biologist and inventor
José A. Quiñonez, financial services innovator
Claudia Rankine, poet
Lauren Redniss, artist and writer
Mary Reid Kelley, video artist
Rebecca Richards-Kortum, bioengineer
Joyce J. Scott, jewelry maker and sculptor
Sarah Stillman, long-form journalist
Bill Thies, computer scientist
Julia Wolfe, composer
Gene Luen Yang, graphic novelist
Jin-Quan Yu, synthetic chemist
2017
Njideka Akunyili Crosby, painter
Sunil Amrith, historian
Greg Asbed, human rights strategist
Annie Baker, playwright
Regina Barzilay, computer scientist
Dawoud Bey, photographer
Emmanuel Candès, mathematician and statistician
Jason De León, anthropologist
Rhiannon Giddens, musician
Nikole Hannah-Jones, journalist
Cristina Jiménez Moreta, activist
Taylor Mac, performance artist
Rami Nashashibi, community leader
Viet Thanh Nguyen, writer
Kate Orff, landscape architect
Trevor Paglen, artist
Betsy Levy Paluck, psychologist
Derek Peterson, historian
Damon Rich, designer and urban planner
Stefan Savage, computer scientist
Yuval Sharon, opera director
Tyshawn Sorey, composer
Gabriel Victora, immunologist
Jesmyn Ward, writer
2018
Matthew Aucoin, composer and conductor
Julie Ault, artist and curator
William J. Barber II, pastor
Clifford Brangwynne, biophysical engineer
Natalie Diaz, poet
Livia S. Eberlin, chemist
Deborah Estrin, computer scientist
Amy Finkelstein, health economist
Gregg Gonsalves, global health advocate
Vijay Gupta, musician
Becca Heller, lawyer
Raj Jayadev, community organizer
Titus Kaphar, painter
John Keene, writer
Kelly Link, writer
Dominique Morisseau, playwright
Okwui Okpokwasili, choreographer
Kristina Olson, psychologist
Lisa Parks, media scholar
Rebecca Sandefur, legal scholar
Allan Sly, mathematician
Sarah T. Stewart-Mukhopadhyay, geologist
Wu Tsang, filmmaker and performance artist
Doris Tsao, neuroscientist
Ken Ward Jr., investigative journalist
2019
Elizabeth S. Anderson, philosopher
sujatha baliga, attorney
Lynda Barry, cartoonist
Mel Chin, artist
Danielle Citron, legal scholar
Lisa Daugaard, criminal justice reformer
Annie Dorsen, theater artist
Andrea Dutton, paleoclimatologist
Jeffrey Gibson, artist
Mary Halvorson, guitarist
Saidiya Hartman, literary scholar
Walter Hood, public artist
Stacy Jupiter, marine scientist
Zachary Lippman, plant biologist
Valeria Luiselli, writer
Kelly Lytle Hernández, historian
Sarah Michelson, choreographer
Jeffrey Alan Miller, literary scholar
Jerry X. Mitrovica, theoretical geophysicist
Emmanuel Pratt, urban designer
Cameron Rowland, artist
Vanessa Ruta, neuroscientist
Joshua Tenenbaum, cognitive scientist
Jenny Tung, evolutionary anthropologist
Ocean Vuong, writer
Emily Wilson, classicist and translator
2020
Isaiah Andrews, econometrician
Tressie McMillan Cottom, sociologist, writer and public scholar
Paul Dauenhauer, chemical engineer
Nels Elde, evolutionary geneticist
Damien Fair, cognitive neuroscientist
Larissa FastHorse, playwright
Catherine Coleman Flowers, environmental health advocate
Mary L. Gray, anthropologist and media scholar
N.K. Jemisin, speculative fiction writer
Ralph Lemon, artist
Polina V. Lishko, cellular and developmental biologist
Thomas Wilson Mitchell, property law scholar
Natalia Molina, American historian
Fred Moten, cultural theorist and poet
Cristina Rivera Garza, fiction writer
Cécile McLorin Salvant, singer and composer
Monika Schleier-Smith, experimental physicist
Mohammad R. Seyedsayamdost, biological chemist
Forrest Stuart, sociologist
Nanfu Wang, documentary filmmaker
Jacqueline Woodson, writer
2021
Hanif Abdurraqib, music critic, essayist and poet
Daniel Alarcón, writer and radio producer
Marcella Alsan, physician-economist
Trevor Bedford, computational virologist
Reginald Dwayne Betts, poet and lawyer
Jordan Casteel, painter
Don Mee Choi, poet and translator
Ibrahim Cissé, cellular biophysicist
Nicole Fleetwood, art historian and curator
Cristina Ibarra, documentary filmmaker
Ibram X. Kendi, American historian and cultural critic
Daniel Lind-Ramos, sculptor and painter
Monica Muñoz Martinez, public historian
Desmond Meade, civil rights activist
Joshua Miele, adaptive technology designer
Michelle Monje, neurologist and neuro-oncologist
Safiya Noble, digital media scholar
J. Taylor Perron, geomorphologist
Alex Rivera, filmmaker and media artist
Lisa Schulte Moore, landscape ecologist
Jesse Shapiro, applied microeconomist
Jacqueline Stewart, cinema studies scholar and curator
Keeanga-Yamahtta Taylor, historian
Victor J. Torres, microbiologist
Jawole Willa Jo Zollar, choreographer and dance entrepreneur
In popular culture
In the 2008 Charlie Kaufman film Synecdoche, New York, the main character, Caden Cotard, is a recipient of the grant and uses it to fund his immersive play.
The Big Bang Theory: In the episode "The Bath Item Gift Hypothesis" of season 2, Michael Trucco plays the role of Dr. David Underhill, an experimental physicist and a MacArthur Grant recipient. In the episode "The Geology Elevation" of season 10, Sheldon becomes jealous of Dr. Bertram "Bert" Kibbler (a geology professor) when he learns that he has won a MacArthur Grant for his work on endolithic organisms.
A Discovery of Witches: Christopher "Chris" Roberts, best friend of main character Diana Bishop, is a human molecular biologist who has won the MacArthur Fellowship.
Bones: In the Season 12 episode "The Brain in the Bot", forensic artist Angela Montenegro wins the MacArthur Fellowship for her "groundbreaking" work with the "Angelatron"; it is later confirmed that she was nominated by Dr. Temperance Brennan.
In Noah Baumbach's 2019 film Marriage Story, Charlie Barber, a successful theater director, is a recipient of the MacArthur Fellowship, and uses the first payout to hire a divorce lawyer.
In the Family Guy episode "Petarded" (season 4, episode 6), Peter Griffin is prompted to take a test to qualify for the MacArthur Grant after winning a game of Trivial Pursuit.
See also
Guggenheim Fellowship
Thomas J. Watson Fellowship
References
External links
MacArthur Fellows Program website
Fellowships
Lists of award winners |
52606309 | https://en.wikipedia.org/wiki/1975%20Liberty%20Bowl | 1975 Liberty Bowl | The 1975 Liberty Bowl was a college football postseason bowl game played on December 22, 1975, in Memphis, Tennessee. In the 17th edition of the Liberty Bowl, the USC Trojans defeated the Texas A&M Aggies, 20–0. This was the first playing of the bowl with the venue named as Liberty Bowl Memorial Stadium, as its name had been changed from Memphis Memorial Stadium earlier the same month.
Background
For the third straight year, the Aggies had won more games than in the previous season, and the run culminated in a conference title, albeit a shared one. The Aggies started the season ranked #8, opening with a victory over Ole Miss. They won their first ten games, the last a 20–10 victory over #5 Texas. However, quarterback Mike Jay injured his back during that victory, and David Shipman replaced him for the game against #18 Arkansas in early December. The Aggies lost 31–6 to fall to #6 and finished with a share of the Southwest Conference title alongside Arkansas and Texas, with Arkansas going to the Cotton Bowl that year. Instead, the Aggies were invited to the Liberty Bowl, their first ever appearance in the game and their first bowl appearance since 1968.
The Trojans started their season off ranked fourth in the nation, as they won their first seven games of the season to be at #4 heading into the latter part of their conference schedule. But losses to California, Stanford, Washington, and #14 UCLA dropped them out of the polls and out of the race for the Pacific-8 Conference title, as they finished at 3-4, behind the teams that had beaten USC. This was their fourth straight bowl game along with their first Liberty Bowl appearance. This was the first season that the Pac-8 allowed bowl participation in addition to the Rose Bowl; Cal, Stanford, and Washington stayed at home while fifth place USC was invited to Memphis.
Game summary
Played on a Monday night, the game saw all of its scoring in the first half. Glen Walker started the scoring with a 45-yard field goal. Quarterback Vince Evans' 65-yard pass set up a Mosi Tatupu touchdown plunge from a yard out to make it 10–0 in the second quarter. Clint Strozier intercepted a pass at the Aggie 19 to set up a Walker field goal from 40 yards out. A screen pass from Evans to Ricky Bell went 76 yards for a touchdown to give the Trojans a 20–0 lead with 5:14 left in the half.
There was no scoring in the second half as the Trojans completed the shutout. Texas A&M was completely stymied on the day, turning the ball over four times, twice in Trojan territory. Although the Aggies had more first downs and rushing yards (15 and 148 to USC's 13 and 141, respectively), USC outpassed them 174 to 99, while Bell rushed for 82 yards on 28 carries. With his rushing yards in the game, Bell broke USC's single-season rushing record of 1,880 yards, set by O. J. Simpson in 1968, finishing with 1,957 yards in 12 games.
Aftermath
This was head coach John McKay's final game with the Trojans, as he left for the expansion Tampa Bay Buccaneers in the National Football League (NFL). USC continued their run with new coach John Robinson, going to three more bowl games in the decade. They have not been invited to the Liberty Bowl since this game. Texas A&M also went to three more bowl games in the decade, though they did not return to the Liberty Bowl again until 2014. The two teams met again two years later in the Astro-Bluebonnet Bowl, which USC won 47–28.
References
Liberty Bowl
Liberty Bowl
Texas A&M Aggies football bowl games
USC Trojans football bowl games
Liberty Bowl
December 1975 sports events in the United States |
25486000 | https://en.wikipedia.org/wiki/ODROID | ODROID | The ODROID is a series of single-board computers and tablet computers created by Hardkernel Co., Ltd., located in South Korea. Even though the name ODROID is a portmanteau of open + Android, the hardware is not actually open because some parts of the design are retained by the company. Many ODROID systems are capable of running not only Android, but also regular Linux distributions.
Hardware
Several models of ODROIDs have been released by Hardkernel. The first generation was released in 2009, followed by higher-specification models.
C models feature an Amlogic system on a chip (SoC), while XU models feature a Samsung Exynos SoC. Both include an ARM central processing unit (CPU) and an on-chip graphics processing unit (GPU). CPU architectures include ARMv7-A and ARMv8-A, with on-board memory ranging from 1 GB to 4 GiB of RAM. Secure Digital (SD) cards, in either SDHC or MicroSDHC size, are used to store the operating system and programs. Most boards have between three and five mixed USB 2.0 or 3.0 ports, HDMI output, and a 3.5 mm jack. Lower-level output is provided by a number of general-purpose input/output (GPIO) pins which support common protocols like I²C. Current models have a Gigabit Ethernet (8P8C) port and an eMMC module socket.
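As an illustration of how such low-level interfaces are typically used, the sketch below reads two bytes from an I²C device through the Linux userspace I²C interface that Linux images for these boards generally expose. It is a minimal example only: the bus number (/dev/i2c-1), the chip address (0x48), and the register being read are hypothetical placeholders, not details taken from any particular ODROID model.

```python
# Minimal sketch: read two bytes from a hypothetical I2C device on bus 1.
# Bus number, chip address (0x48) and register (0x00) are placeholders.
import os
import fcntl

I2C_SLAVE = 0x0703                        # ioctl request: select target chip address

fd = os.open("/dev/i2c-1", os.O_RDWR)     # I2C bus device exposed by the kernel
try:
    fcntl.ioctl(fd, I2C_SLAVE, 0x48)      # address the device at 0x48
    os.write(fd, bytes([0x00]))           # point at register 0x00
    data = os.read(fd, 2)                 # read two bytes back
    print(data.hex())
finally:
    os.close(fd)
```

Higher-level wrappers (for example smbus-style libraries) exist for the same purpose; the raw file-descriptor approach is shown only because it depends on nothing beyond a standard Linux kernel.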
Specifications
Software
Operating systems
References
External links
Official Hardkernel website
ODROID official forum
ODROID Wiki
ODROID Magazine
Single-board computers
Android (operating system) devices
Handheld game consoles
Linux-based devices
Products introduced in 2009
Tablet computers |
414413 | https://en.wikipedia.org/wiki/Internet%20Printing%20Protocol | Internet Printing Protocol | The Internet Printing Protocol (IPP) is a specialized Internet protocol for communication between client devices (computers, mobile phones, tablets, etc.) and printers (or print servers). It allows clients to submit one or more print jobs to the printer or print server, and perform tasks such as querying the status of a printer, obtaining the status of print jobs, or cancelling individual print jobs.
Like all IP-based protocols, IPP can run locally or over the Internet. Unlike other printing protocols, IPP also supports access control, authentication, and encryption, making it a much more capable and secure printing mechanism than older ones.
IPP is the basis of several printer logo certification programs including AirPrint, IPP Everywhere, and Mopria Alliance, and is supported by over 98% of printers sold today.
History
IPP began as a proposal by Novell for the creation of an Internet printing protocol project in 1996. The result was a draft written by Novell and Xerox called the Lightweight Document Printing Application (LDPA), derived from ECMA-140: Document Printing Application (DPA). At about the same time, Lexmark publicly proposed something called the HyperText Printing Protocol (HTPP), and both HP and Microsoft had started work on new print services for what became Windows 2000. Each of the companies chose to start a common Internet Printing Protocol project in the Printer Working Group (PWG) and negotiated an IPP birds-of-a-feather (or BOF) session with the Application Area Directors in the Internet Engineering Task Force (IETF). The BOF session in December 1996 showed sufficient interest in developing a printing protocol, leading to the creation of the IETF Internet Printing Protocol (ipp) working group, which concluded in 2005.
Work on IPP continues in the PWG Internet Printing Protocol workgroup with the publication of 23 candidate standards, 1 new and 3 updated IETF RFCs, and several registration and best practice documents providing extensions to IPP and support for different services including 3D Printing, scanning, facsimile, cloud-based services, and overall system and resource management.
IPP/1.0 was published as a series of experimental documents (RFC 2565, RFC 2566, RFC 2567, RFC 2568, RFC 2569, and RFC 2639) in 1999.
IPP/1.1 followed as a draft standard in 2000, with support documents in 2001, 2003, and 2015 (RFC 2910, RFC 2911, RFC 3196, RFC 3510, RFC 7472). IPP/1.1 was updated as a proposed standard in January 2017 (RFC 8010, RFC 8011) and then adopted as Internet Standard 92 (STD 92) in June 2018.
IPP 2.0 was published as a PWG Candidate Standard in 2009 (PWG 5100.10-2009) and defined two new IPP versions (2.0 for printers and 2.1 for print servers) with additional conformance requirements beyond IPP 1.1. A subsequent Candidate Standard replaced it in 2011, defining an additional 2.2 version for production printers (PWG 5100.12-2011). This specification was updated and approved as a full PWG Standard (PWG 5100.12-2015) in 2015.
IPP Everywhere was published in 2013 and provides a common baseline for printers to support so-called "driverless" printing from client devices. It builds on IPP and specifies additional rules for interoperability, such as a list of document formats printers need to support. A corresponding self-certification manual and tool suite was published in 2016 allowing printer manufacturers and print server implementors to certify their solutions against the published specification and be listed on the IPP Everywhere printers page maintained by the PWG.
Implementation
IPP is implemented using the Hypertext Transfer Protocol (HTTP) and inherits all of the HTTP streaming and security features. For example, authorization can take place via HTTP's Digest access authentication mechanism, GSSAPI, or any other HTTP authentication methods. Encryption is provided using the TLS protocol-layer, either in the traditional always-on mode used by HTTPS or using the HTTP Upgrade extension to HTTP (RFC 2817). Public key certificates can be used for authentication with TLS. Streaming is supported using HTTP chunking. The document to be printed is usually sent as a data stream.
IPP accommodates various formats for documents to be printed. The PWG defined an image format called PWG Raster specifically for this purpose. Other formats include PDF or JPEG, depending on the capabilities of the destination printer.
IPP uses the traditional client–server model, with clients sending IPP request messages with the MIME media type "application/ipp" in HTTP POST requests to an IPP printer. IPP request messages consist of key–value pairs using a custom binary encoding followed by an "end of attributes" tag and any document data required for the request (such as the document to be printed). The IPP response is sent back to the client in the HTTP POST response, again using the "application/ipp" MIME media type.
Among other things, IPP allows a client to:
query a printer's capabilities (such as supported character sets, media types and document formats)
submit print jobs to a printer
query the status of a printer
query the status of one or more print jobs
cancel previously submitted jobs
IPP uses TCP with port 631 as its well-known port.
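To make the wire format concrete, the sketch below builds a Get-Printer-Attributes request by hand and sends it as an HTTP POST with the "application/ipp" media type to port 631, following the attribute encoding described above. It is a minimal illustration under stated assumptions, not a full IPP client: the printer hostname and resource path are hypothetical, and a real deployment would normally use an existing IPP implementation (such as CUPS) and TLS.

```python
# Minimal sketch of an IPP Get-Printer-Attributes request over HTTP POST.
# Hostname and path are hypothetical; no TLS or error handling is shown.
import struct
import http.client

def attr(value_tag, name, value):
    """Encode one IPP attribute: value tag, name length + name, value length + value."""
    n, v = name.encode(), value.encode()
    return struct.pack(">BH", value_tag, len(n)) + n + struct.pack(">H", len(v)) + v

printer_host = "printer.example.com"          # placeholder printer
printer_uri = f"ipp://{printer_host}/ipp/print"

request = (
    b"\x01\x01"                               # IPP version 1.1
    + struct.pack(">H", 0x000B)               # operation-id: Get-Printer-Attributes
    + struct.pack(">I", 1)                    # request-id
    + b"\x01"                                 # operation-attributes-tag
    + attr(0x47, "attributes-charset", "utf-8")
    + attr(0x48, "attributes-natural-language", "en")
    + attr(0x45, "printer-uri", printer_uri)
    + b"\x03"                                 # end-of-attributes-tag
)

conn = http.client.HTTPConnection(printer_host, 631)
conn.request("POST", "/ipp/print", body=request,
             headers={"Content-Type": "application/ipp"})
response = conn.getresponse()
body = response.read()
# The first two bytes of the reply echo the IPP version; the next two
# hold the IPP status code (0x0000 means successful-ok).
status = struct.unpack(">H", body[2:4])[0]
print(response.status, hex(status))
```

A Print-Job request follows the same pattern, with the document data appended after the end-of-attributes tag.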
Products using the Internet Printing Protocol include CUPS (which is part of Apple macOS and many BSD and Linux distributions and is the reference implementation for most versions of IPP), Novell iPrint, and Microsoft Windows versions starting with Windows 2000. Windows XP and Windows Server 2003 offer IPP printing via HTTPS. Windows Vista, Windows 7, Windows Server 2008 and 2008 R2 also support IPP printing over RPC in the "Medium-Low" security zone.
See also
CUPS
Job Definition Format
Line Printer Daemon protocol
T.37 (ITU-T recommendation)
References
Further reading
Standards
.
Informational documents
External links
.
.
.
.
.
Printing protocols
Computer printing |
33172 | https://en.wikipedia.org/wiki/Wireless%20network | Wireless network | A wireless network is a computer network that uses wireless data connections between network nodes.
Wireless networking is a method by which homes, telecommunications networks, and business installations avoid the costly process of introducing cables into a building, or as a connection between various equipment locations. Wireless telecommunications networks are generally implemented and administered using radio communication. This implementation takes place at the physical level (layer) of the OSI model network structure.
Examples of wireless networks include cell phone networks, wireless local area networks (WLANs), wireless sensor networks, satellite communication networks, and terrestrial microwave networks.
History
Wireless networks
The first professional wireless network was developed under the brand ALOHAnet in 1969 at the University of Hawaii and became operational in June 1971. The first commercial wireless network was the WaveLAN product family, developed by NCR in 1986.
1973 Ethernet 802.3
1991 2G cell phone network
June 1997 802.11 "Wi-Fi" protocol first release
1999 802.11 VoIP integration
Underlying technology
Advances in MOSFET (MOS transistor) wireless technology enabled the development of digital wireless networks. The wide adoption of RF CMOS (radio frequency CMOS), power MOSFET and LDMOS (lateral diffused MOS) devices led to the development and proliferation of digital wireless networks by the 1990s, with further advances in MOSFET technology leading to increasing bandwidth in the 2000s (Edholm's law). Most of the essential elements of wireless networks are built from MOSFETs, including the mobile transceivers, base station modules, routers, RF power amplifiers, telecommunication circuits, RF circuits, and radio transceivers, in networks such as 2G, 3G, and 4G.
Wireless links
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced at intervals limited by this line-of-sight requirement.
Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
Types of wireless networks
Wireless PAN
Wireless personal area networks (WPANs) connect devices within a relatively small area that is generally within a person's reach. For example, both Bluetooth radio and invisible infrared light provide a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications. Wi-Fi PANs are becoming commonplace (2010) as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices. Intel "My WiFi" and Windows 7 "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up and configure.
Wireless LAN
A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the network.
Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name.
Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, often using dedicated microwave or modulated laser light beams over line of sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link.
To connect to Wi-Fi using a mobile device, one can use a device like a wireless router or the private hotspot capability of another mobile device.
Wireless ad hoc network
A wireless ad hoc network, also known as a wireless mesh network or mobile ad hoc network (MANET), is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes and each node performs routing. Ad hoc networks can "self-heal", automatically re-routing around a node that has lost power. Various network layer protocols are needed to realize ad hoc mobile networks, such as Destination-Sequenced Distance Vector routing, Associativity-Based Routing, Ad hoc On-demand Distance Vector routing, and Dynamic Source Routing.
Wireless MAN
Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs.
WiMAX is a type of Wireless MAN and is described by the IEEE 802.16 standard.
Wireless WAN
Wireless wide area networks are wireless networks that typically cover large areas, such as between neighbouring towns and cities, or city and suburb. These networks can be used to connect branch offices of a business or as a public Internet access system. The wireless connections between access points are usually point-to-point microwave links using parabolic dishes on the 2.4 GHz and 5.8 GHz bands, rather than the omnidirectional antennas used with smaller networks. A typical system contains base station gateways, access points and wireless bridging relays. Other configurations are mesh systems where each access point acts as a relay also. When combined with renewable energy systems such as photovoltaic solar panels or wind turbines, they can be stand-alone systems.
Cellular network
A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from those of its immediate neighbouring cells to avoid interference.
When joined together, these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones and pagers) to communicate with each other, and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.
Although originally intended for cell phones, with the development of smartphones, cellular telephone networks routinely carry data in addition to telephone conversations:
Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base system station which then connects to the operation and support station; it then connects to the switching station where the call is transferred to where it needs to go. GSM is the most common standard and is used for a majority of cell phones.
Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America and South Asia. Sprint was the first carrier to set up a PCS network.
D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is being phased out due to advancement in technology. The newer GSM networks are replacing the older system.
Global area network
A global area network (GAN) is a network used for supporting mobile communication across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.
Space network
Space networks are networks used for communication between spacecraft, usually in the vicinity of the Earth. An example is NASA's Space Network.
Uses
Examples of wireless network use include cellular phones, which are part of everyday wireless networks and allow easy personal communication. Intercontinental network systems use radio satellites to communicate across the world. Emergency services such as the police utilize wireless networks to communicate effectively as well. Individuals and businesses use wireless networks to send and share data rapidly, whether in a small office building or across the world.
Properties
General
In a general sense, wireless networks offer a vast variety of uses by both business and home users.
"Now, the industry accepts a handful of different wireless technologies. Each wireless technology is defined by a standard that describes unique functions at both the Physical and the Data Link layers of the OSI model. These standards differ in their specified signaling methods, geographic ranges, and frequency usages, among other things. Such differences can make certain technologies better suited to home networks and others better suited to network larger organizations."
Performance
Each standard varies in geographical range, thus making one standard more ideal than the next depending on what it is one is trying to accomplish with a wireless network.
The performance of wireless networks satisfies a variety of applications such as voice and video. The use of this technology also gives room for expansion, such as from 2G to 3G and on to 4G and 5G technologies, the fourth and fifth generations of mobile communication standards. As wireless networking has become commonplace, sophistication has increased through configuration of network hardware and software, and greater capacity to send and receive larger amounts of data, faster, has been achieved. Many wireless networks now run on LTE, a 4G mobile communication standard whose users typically see data speeds around ten times faster than on a 3G network.
Space
Space is another characteristic of wireless networking. Wireless networks offer many advantages in areas that are difficult to wire, such as communicating across a street or river, reaching a warehouse on the other side of the premises, or linking buildings that are physically separated but operate as one. Wireless networks also allow users to define the area within which devices can communicate over the network.
Space is also created in homes as a result of eliminating the clutter of wiring. This technology provides an alternative to installing physical network media such as twisted-pair, coaxial, or fiber-optic cabling, which can also be expensive.
Home
For homeowners, wireless technology is an effective option compared to Ethernet for sharing printers, scanners, and high-speed Internet connections. WLANs save the cost of cable installation, save time during physical installation, and provide mobility for devices connected to the network.
Wireless networks are simple and can require as little as a single wireless access point connected directly to the Internet via a router.
Wireless network elements
The telecommunications network at the physical layer also consists of many interconnected wireline network elements (NEs). These NEs can be stand-alone systems or products that are either supplied by a single manufacturer or are assembled by the service provider (user) or system integrator with parts from several different manufacturers.
Wireless NEs are the products and devices used by a wireless carrier to provide support for the backhaul network as well as a mobile switching center (MSC).
Reliable wireless service depends on the network elements at the physical layer to be protected against all operational environments and applications (see GR-3171, Generic Requirements for Network Elements Used in Wireless Networks – Physical Layer Criteria).
Especially important are the NEs located on the cell tower and in the base station (BS) cabinet. The attachment hardware and the positioning of the antenna and associated closures and cables are required to have adequate strength, robustness, corrosion resistance, and resistance against wind, storms, icing, and other weather conditions. Requirements for individual components, such as hardware, cables, connectors, and closures, shall take into consideration the structure to which they are attached.
Difficulties
Interference
Compared to wired systems, wireless networks are frequently subject to electromagnetic interference. This can be caused by other networks or other types of equipment that generate radio waves that are within, or close, to the radio bands used for communication. Interference can degrade the signal or cause the system to fail.
Absorption and reflection
Some materials absorb electromagnetic waves, preventing them from reaching the receiver; in other cases, particularly with metallic or conductive materials, reflection occurs. This can cause dead zones where no reception is available. Aluminium-foil-backed thermal insulation in modern homes can easily reduce indoor mobile signals by 10 dB, frequently leading to complaints about poor reception of distant rural cell signals.
Multipath fading
In multipath fading, two or more different routes taken by the signal, due to reflections, can cause the signals to cancel each other out at certain locations and to be stronger in other places (upfade).
Hidden node problem
The hidden node problem occurs in some types of network when a node is visible from a wireless access point (AP), but not from other nodes communicating with that AP. This leads to difficulties in media access control (collisions).
Exposed terminal node problem
The exposed terminal problem is when a node on one network is unable to send because of co-channel interference from a node that is on a different network.
Shared resource problem
The wireless spectrum is a limited resource and shared by all nodes in the range of its transmitters. Bandwidth allocation becomes complex with multiple participating users. Often users are not aware that advertised numbers (e.g., for IEEE 802.11 equipment or LTE networks) are not their capacity, but shared with all other users and thus the individual user rate is far lower. With increasing demand, the capacity crunch is more and more likely to happen. User-in-the-loop (UIL) may be an alternative solution to ever upgrading to newer technologies for over-provisioning.
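As a simple worked example (with illustrative numbers only): if an 802.11g access point offers a nominal 54 Mbit/s and ten clients are transferring data at the same time, each client can expect at most roughly 54 / 10 ≈ 5.4 Mbit/s, and in practice less once protocol overhead and contention are taken into account.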
Capacity
Channel
Shannon's theorem can describe the maximum data rate of any single wireless link, which relates to the bandwidth in hertz and to the noise on the channel.
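For reference, the Shannon–Hartley form of this bound, with C the channel capacity in bits per second, B the bandwidth in hertz, and S/N the linear signal-to-noise ratio, is

C = B \log_2\left(1 + \frac{S}{N}\right)

so, for example, a 20 MHz channel at a signal-to-noise ratio of 15 (about 11.8 dB) supports at most 20 MHz × log2(16) = 80 Mbit/s.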
One can greatly increase channel capacity by using MIMO techniques, where multiple aerials or multiple frequencies can exploit multiple paths to the receiver to achieve much higher throughput – by a factor of the product of the frequency and aerial diversity at each end.
Under Linux, the Central Regulatory Domain Agent (CRDA) controls the setting of channels.
Network
The total network bandwidth depends on how dispersive the medium is (more dispersive medium generally has better total bandwidth because it minimises interference), how many frequencies are available, how noisy those frequencies are, how many aerials are used and whether a directional antenna is in use, whether nodes employ power control and so on.
Cellular wireless networks generally have good capacity, due to their use of directional aerials and their ability to reuse radio channels in non-adjacent cells. Additionally, cells can be made very small using low-power transmitters; this is used in cities to give network capacity that scales linearly with population density.
Safety
Wireless access points are also often close to humans, but the drop off in power over distance is fast, following the inverse-square law.
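As a rough worked example: under the inverse-square law the received power density scales as 1/d², so moving from 1 m to 2 m away from an access point cuts the exposure to one quarter, and moving to 10 m cuts it to one hundredth (ignoring obstacles and antenna directionality).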
The position of the United Kingdom's Health Protection Agency (HPA) is that "...radio frequency (RF) exposures from WiFi are likely to be lower than those from mobile phones". It also saw "...no reason why schools and others should not use WiFi equipment". In October 2007, the HPA launched a new "systematic" study into the effects of WiFi networks on behalf of the UK government, in order to calm fears that had appeared in the media in the period leading up to that time. Dr Michael Clark of the HPA says published research on mobile phones and masts does not add up to an indictment of WiFi.
See also
Rendezvous delay
Wireless access point
Wireless community network
Wireless LAN client comparison
Wireless site survey
Network simulation
Optical mesh network
Wireless mesh network
Wireless mobility management
References
Further reading
External links |
39652 | https://en.wikipedia.org/wiki/OCaml | OCaml | OCaml ( , formerly Objective Caml) is a general-purpose, multi-paradigm programming language which extends the Caml dialect of ML with object-oriented features. OCaml was created in 1996 by Xavier Leroy, Jérôme Vouillon, Damien Doligez, Didier Rémy, Ascánder Suárez, and others.
The OCaml toolchain includes an interactive top-level interpreter, a bytecode compiler, an optimizing native code compiler, a reversible debugger, and a package manager (OPAM). OCaml was initially developed in the context of automated theorem proving, and has an outsize presence in static analysis and formal methods software. Beyond these areas, it has found serious use in systems programming, web development, and financial engineering, among other application domains.
The acronym CAML originally stood for Categorical Abstract Machine Language, but OCaml omits this abstract machine. OCaml is a free and open-source software project managed and principally maintained by the French Institute for Research in Computer Science and Automation (INRIA). In the early 2000s, elements from OCaml were adopted by many languages, notably F# and Scala.
Philosophy
ML-derived languages are best known for their static type systems and type-inferring compilers. OCaml unifies functional, imperative, and object-oriented programming under an ML-like type system. Thus, programmers need not be highly familiar with the pure functional language paradigm to use OCaml.
By requiring the programmer to work within the constraints of its static type system, OCaml eliminates many of the type-related runtime problems associated with dynamically typed languages. Also, OCaml's type-inferring compiler greatly reduces the need for the manual type annotations that are required in most statically typed languages. For example, the data types of variables and the signatures of functions usually need not be declared explicitly, as they do in languages like Java and C#, because they can be inferred from the operators and other functions that are applied to the variables and other values in the code. Effective use of OCaml's type system can require some sophistication on the part of a programmer, but this discipline is rewarded with reliable, high-performance software.
OCaml is perhaps most distinguished from other languages with origins in academia by its emphasis on performance. Its static type system prevents runtime type mismatches and thus obviates runtime type and safety checks that burden the performance of dynamically typed languages, while still guaranteeing runtime safety, except when array bounds checking is turned off or when some type-unsafe features like serialization are used. These are rare enough that avoiding them is quite possible in practice.
Aside from type-checking overhead, functional programming languages are, in general, challenging to compile to efficient machine language code, due to issues such as the funarg problem. Along with standard loop, register, and instruction optimizations, OCaml's optimizing compiler employs static program analysis methods to optimize value boxing and closure allocation, helping to maximize the performance of the resulting code even if it makes extensive use of functional programming constructs.
Xavier Leroy has stated that "OCaml delivers at least 50% of the performance of a decent C compiler", although a direct comparison is impossible. Some functions in the OCaml standard library are implemented with faster algorithms than equivalent functions in the standard libraries of other languages. For example, the implementation of set union in the OCaml standard library in theory is asymptotically faster than the equivalent function in the standard libraries of imperative languages (e.g., C++, Java) because the OCaml implementation exploits the immutability of sets to reuse parts of input sets in the output (see persistent data structure).
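For instance, the following is a minimal sketch of persistent set union using the standard library's Set functor (the module and value names here are illustrative, and the Int module assumes OCaml 4.08 or later):

module IntSet = Set.Make (Int)

let evens = IntSet.of_list [ 0; 2; 4; 6 ]
let primes = IntSet.of_list [ 2; 3; 5; 7 ]

let () =
  (* union returns a new set; because sets are immutable, it may share
     structure with its inputs instead of copying them. *)
  IntSet.union evens primes
  |> IntSet.elements
  |> List.iter (Printf.printf "%d ")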
Features
OCaml features a static type system, type inference, parametric polymorphism, tail recursion, pattern matching, first class lexical closures, functors (parametric modules), exception handling, and incremental generational automatic garbage collection.
OCaml is notable for extending ML-style type inference to an object system in a general-purpose language. This permits structural subtyping, where object types are compatible if their method signatures are compatible, regardless of their declared inheritance (an unusual feature in statically typed languages).
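A small illustration (the names here are invented for the example): the function below accepts any object providing a name method, whatever class or expression created it, because its inferred type is < name : string; .. > -> string.

(* Structural typing: any object with at least a [name] method is accepted. *)
let greet obj = "Hello, " ^ obj#name

let person = object method name = "Ada" end
let robot = object method name = "R2" method battery_level = 100 end

let () =
  print_endline (greet person);
  print_endline (greet robot)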
A foreign function interface for linking to C primitives is provided, including language support for efficient numerical arrays in formats compatible with both C and Fortran. OCaml also supports creating libraries of OCaml functions that can be linked to a main program in C, so that an OCaml library can be distributed to C programmers who have no knowledge or installation of OCaml.
The OCaml distribution contains:
Lexical analysis and parsing tools called ocamllex and ocamlyacc
Debugger that supports stepping backwards to investigate errors
Documentation generator
Profiler – to measure performance
Many general-purpose libraries
The native code compiler is available for many platforms, including Unix, Microsoft Windows, and Apple macOS. Portability is achieved through native code generation support for major architectures: IA-32, X86-64 (AMD64), Power, RISC-V, ARM, and ARM64.
OCaml bytecode and native code programs can be written in a multithreaded style, with preemptive context switching. However, because the garbage collector of the INRIA OCaml system (which is the only currently available full implementation of the language) is not designed for concurrency, symmetric multiprocessing is unsupported. OCaml threads in the same process execute by time sharing only. There are however several libraries for distributed computing such as Functory and ocamlnet/Plasma.
Development environment
Since 2011, many new tools and libraries have been contributed to the OCaml development environment:
Development tools
opam is a package manager for OCaml.
Merlin provides IDE-like functionality for multiple editors, including type throwback, go-to-definition, and auto-completion.
Dune is a composable build-system for OCaml.
OCamlformat is an auto-formatter for OCaml.
ocaml-lsp-server is a Language Server Protocol for OCaml IDE integration.
Web sites:
OCaml.org is the primary site for the language.
discuss.ocaml.org is an instance of Discourse that serves as the primary discussion site for OCaml.
Alternate compilers for OCaml:
js_of_ocaml, developed by the Ocsigen team, is an optimizing compiler from OCaml to JavaScript.
BuckleScript, which also targets JavaScript, with a focus on producing readable, idiomatic JavaScript output.
ocamlcc is a compiler from OCaml to C, to complement the native code compiler for unsupported platforms.
OCamlJava, developed by INRIA, is a compiler from OCaml to the Java virtual machine (JVM).
OCaPic, developed by Lip6, is an OCaml compiler for PIC microcontrollers.
Code examples
Snippets of OCaml code are most easily studied by entering them into the top-level REPL. This is an interactive OCaml session that prints the inferred types of resulting or defined expressions. The OCaml top-level is started by simply executing the OCaml program:
$ ocaml
Objective Caml version 3.09.0
#
Code can then be entered at the "#" prompt. For example, to calculate 1+2*3:
# 1 + 2 * 3;;
- : int = 7
OCaml infers the type of the expression to be "int" (a machine-precision integer) and gives the result "7".
Hello World
The following program "hello.ml":
print_endline "Hello World!"
can be compiled into a bytecode executable:
$ ocamlc hello.ml -o hello
or compiled into an optimized native-code executable:
$ ocamlopt hello.ml -o hello
and executed:
$ ./hello
Hello World!
$
The first argument to ocamlc, "hello.ml", specifies the source file to compile and the "-o hello" flag specifies the output file.
Summing a list of integers
Lists are one of the fundamental datatypes in OCaml. The following code example defines a recursive function sum that accepts one argument, integers, which is supposed to be a list of integers. Note the keyword rec which denotes that the function is recursive. The function recursively iterates over the given list of integers and provides a sum of the elements. The match statement has similarities to C's switch statement, though it is far more general.
let rec sum integers = (* Keyword rec means 'recursive'. *)
match integers with
| [] -> 0 (* Yield 0 if integers is the empty
list []. *)
| first :: rest -> first + sum rest;; (* Recursive call if integers is a non-
empty list; first is the first
element of the list, and rest is a
list of the rest of the elements,
possibly []. *)
# sum [1;2;3;4;5];;
- : int = 15
Another way is to use standard fold function that works with lists.
let sum integers =
List.fold_left (fun accumulator x -> accumulator + x) 0 integers;;
# sum [1;2;3;4;5];;
- : int = 15
Since the anonymous function is simply the application of the + operator, this can be shortened to:
let sum integers =
List.fold_left (+) 0 integers
Furthermore, one can omit the list argument by making use of a partial application:
let sum =
List.fold_left (+) 0
Quicksort
OCaml lends itself to concisely expressing recursive algorithms. The following code example implements an algorithm similar to quicksort that sorts a list in increasing order.
let rec qsort = function
| [] -> []
| pivot :: rest ->
let is_less x = x < pivot in
let left, right = List.partition is_less rest in
qsort left @ [pivot] @ qsort right
Birthday problem
The following program calculates the smallest number of people in a room for whom the probability of completely unique birthdays is less than 50% (the birthday problem, where for 1 person the probability is 365/365 (or 100%), for 2 it is 364/365, for 3 it is 364/365 × 363/365, etc.) (answer = 23).
let year_size = 365.
let rec birthday_paradox prob people =
let prob = (year_size -. float people) /. year_size *. prob in
if prob < 0.5 then
Printf.printf "answer = %d\n" (people+1)
else
birthday_paradox prob (people+1)
;;
birthday_paradox 1.0 1
Church numerals
The following code defines a Church encoding of natural numbers, with successor (succ) and addition (add). A Church numeral n is a higher-order function that accepts a function f and a value x and applies f to x exactly n times. To convert a Church numeral from a functional value to a string, we pass it a function that prepends the string "S" to its input and the constant string "0".
let zero f x = x
let succ n f x = f (n f x)
let one = succ zero
let two = succ (succ zero)
let add n1 n2 f x = n1 f (n2 f x)
let to_string n = n (fun k -> "S" ^ k) "0"
let _ = to_string (add (succ two) two)
Arbitrary-precision factorial function (libraries)
A variety of libraries are directly accessible from OCaml. For example, OCaml has a built-in library for arbitrary-precision arithmetic. As the factorial function grows very rapidly, it quickly overflows machine-precision numbers (typically 32- or 64-bits). Thus, factorial is a suitable candidate for arbitrary-precision arithmetic.
In OCaml, the Num module (now superseded by the ZArith module) provides arbitrary-precision arithmetic and can be loaded into a running top-level using:
# #use "topfind";;
# #require "num";;
# open Num;;
The factorial function may then be written using the arbitrary-precision numeric operators =/, */ and -/:
# let rec fact n =
if n =/ Int 0 then Int 1 else n */ fact(n -/ Int 1);;
val fact : Num.num -> Num.num = <fun>
This function can compute much larger factorials, such as 120!:
# string_of_num (fact (Int 120));;
- : string =
"6689502913449127057588118054090372586752746333138029810295671352301633
55724496298936687416527198498130815763789321409055253440858940812185989
8481114389650005964960521256960000000000000000000000000000"
Triangle (graphics)
The following program renders a rotating triangle in 2D using OpenGL:
let () =
ignore (Glut.init Sys.argv);
Glut.initDisplayMode ~double_buffer:true ();
ignore (Glut.createWindow ~title:"OpenGL Demo");
let angle t = 10. *. t *. t in
let render () =
GlClear.clear [ `color ];
GlMat.load_identity ();
GlMat.rotate ~angle: (angle (Sys.time ())) ~z:1. ();
GlDraw.begins `triangles;
List.iter GlDraw.vertex2 [-1., -1.; 0., 1.; 1., -1.];
GlDraw.ends ();
Glut.swapBuffers () in
GlMat.mode `modelview;
Glut.displayFunc ~cb:render;
Glut.idleFunc ~cb:(Some Glut.postRedisplay);
Glut.mainLoop ()
The LablGL bindings to OpenGL are required. The program may then be compiled to bytecode with:
$ ocamlc -I +lablGL lablglut.cma lablgl.cma simple.ml -o simple
or to nativecode with:
$ ocamlopt -I +lablGL lablglut.cmxa lablgl.cmxa simple.ml -o simple
or, more simply, using the ocamlfind build command
$ ocamlfind opt simple.ml -package lablgl.glut -linkpkg -o simple
and run:
$ ./simple
Far more sophisticated, high-performance 2D and 3D graphical programs can be developed in OCaml. Thanks to the use of OpenGL and OCaml, the resulting programs can be cross-platform, compiling without any changes on many major platforms.
Fibonacci sequence
The following code calculates the Fibonacci sequence of a number n inputted. It uses tail recursion and pattern matching.
let fib n =
let rec fib_aux m a b =
match m with
| 0 -> a
| _ -> fib_aux (m - 1) b (a + b)
in fib_aux n 0 1
Higher-order functions
Functions may take functions as input and return functions as result. For example, applying twice to a function f yields a function that applies f two times to its argument.
let twice (f : 'a -> 'a) = fun (x : 'a) -> f (f x);;
let inc (x : int) : int = x + 1;;
let add2 = twice inc;;
let inc_str (x : string) : string = x ^ " " ^ x;;
let add_str = twice(inc_str);;
# add2 98;;
- : int = 100
# add_str "Test";;
- : string = "Test Test Test Test"
The function twice uses a type variable 'a to indicate that it can be applied to any function f mapping from a type 'a to itself, rather than only to int->int functions. In particular, twice can even be applied to itself.
# let fourtimes f = (twice twice) f;;
val fourtimes : ('a -> 'a) -> 'a -> 'a = <fun>
# let add4 = fourtimes inc;;
val add4 : int -> int = <fun>
# add4 98;;
- : int = 102
Derived languages
MetaOCaml
MetaOCaml is a multi-stage programming extension of OCaml enabling incremental compiling of new machine code during runtime. Under some circumstances, significant speedups are possible using multistage programming, because more detailed information about the data to process is available at runtime than at the regular compile time, so the incremental compiler can optimize away many cases of condition checking, etc.
As an example: if at compile time it is known that some power function is needed often, but the value of the exponent n is known only at runtime, a two-stage power function can be used in MetaOCaml:
let rec power n x =
if n = 0
then .<1>.
else
if even n
then sqr (power (n/2) x)
else .<.~x *. .~(power (n - 1) x)>.
As soon as n is known at runtime, a specialized and very fast power function can be created:
.<fun x -> .~(power 5 .<x>.)>.
The result is:
fun x_1 -> (x_1 *
let y_3 =
let y_2 = (x_1 * 1)
in (y_2 * y_2)
in (y_3 * y_3))
The new function is automatically compiled.
Other derived languages
AtomCaml provides a synchronization primitive for atomic (transactional) execution of code.
Emily (2006) is a subset of OCaml 3.08 that uses a design rule verifier to enforce object-capability model security principles.
F# is a .NET Framework language based on OCaml.
Fresh OCaml facilitates manipulating names and binders.
GCaml adds extensional polymorphism to OCaml, thus allowing overloading and type-safe marshalling.
JoCaml integrates constructions for developing concurrent and distributed programs.
OCamlDuce extends OCaml with features such as XML expressions and regular-expression types.
OCamlP3l is a parallel programming system based on OCaml and the P3L language.
While not truly a separate language, Reason is an alternative OCaml syntax and toolchain for OCaml created at Facebook.
Software written in OCaml
0install, a multi-platform package manager.
Coccinelle, a utility for transforming the source code of C programs.
Coq, a formal proof management system.
FFTW, a library for computing discrete Fourier transforms. Several C routines have been generated by an OCaml program named .
The web version of Facebook Messenger.
Flow, a static analyzer created at Facebook that infers and checks static types for JavaScript.
Owl Scientific Computing, a dedicated system for scientific and engineering computing.
Frama-C, a framework for analyzing C programs.
GeneWeb, free and open-source multi-platform genealogy software.
The Hack programming language compiler, created at Facebook, extending PHP with static types.
The Haxe programming language compiler.
HOL Light, a formal proof assistant.
Infer, a static analyzer created at Facebook for Java, C, C++, and Objective-C, used to detect bugs in iOS and Android apps.
Lexifi Apropos, a system for modeling complex derivatives.
MirageOS, a unikernel programming framework written in pure OCaml.
MLdonkey, a peer-to-peer file sharing application based on the EDonkey network.
Ocsigen, an OCaml web framework.
Opa, a free and open-source programming language for web development.
pyre-check, a type checker for Python created at Facebook.
Tezos, a self-amending smart contract platform using XTZ as a native currency.
Unison, a file synchronization program to synchronize files between two directories.
The reference interpreter for WebAssembly, a low-level bytecode intended for execution inside web browsers.
Xen Cloud Platform (XCP), a turnkey virtualization solution for the Xen hypervisor.
Users
Several dozen companies use OCaml to some degree. Notable examples include:
Bloomberg L.P., which created BuckleScript, an OCaml compiler backend targeting JavaScript.
Citrix Systems, which uses OCaml in XenServer (rebranded as Citrix Hypervisor during 2018).
Facebook, which developed Flow, Hack, Infer, Pfff, and Reason in OCaml.
Jane Street Capital, a proprietary trading firm, which adopted OCaml as its preferred language in its early days.
References
External links
OCaml manual
OCaml Package Manager
Real World OCaml
Articles with example code
Articles with example OCaml code
Cross-platform free software
Extensible syntax programming languages
Free compilers and interpreters
Functional languages
ML programming language family
Object-oriented programming languages
OCaml programming language family
OCaml software
Pattern matching programming languages
Programming languages created in 1996
Statically typed programming languages |
31012554 | https://en.wikipedia.org/wiki/Software%20fault%20tolerance | Software fault tolerance | Software fault tolerance is the ability of computer software to continue its normal operation despite the presence of system or hardware faults. Fault-tolerant software has the ability to satisfy requirements despite failures.
Introduction
The only thing constant is change, and this is certainly more true of software systems than of almost any other phenomenon. Because not all software changes in the same way, software fault tolerance methods are designed to overcome execution errors by modifying variable values to create an acceptable program state. The need to control software faults is one of the most pressing challenges facing the software industry today. Fault tolerance must be a key consideration in the early stages of software development.
There exist different mechanisms for software fault tolerance, among which:
Recovery blocks
N-version software
Self-checking software
Operating system failure
Computer applications make a call using the application programming interface (API) to access shared resources, like the keyboard, mouse, screen, disk drive, network, and printer. These can fail in two ways.
Blocked Calls
Faults
Blocked calls
A blocked call is a request for services from the operating system that halts the computer program until results are available.
As an example, the TCP call blocks until a response becomes available from a remote server. This occurs every time you perform an action with a web browser. Intensive calculations cause lengthy delays with the same effect as a blocked API call.
There are two methods used to handle blocking.
Threads
Timers
Threading allows a separate sequence of execution for each API call that can block. This can prevent the overall application from stalling while waiting for a resource. This has the benefit that none of the information about the state of the API call is lost while other activities take place.
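As an illustration, a minimal OCaml sketch (the URL and function names are placeholders; it assumes the threads and unix libraries are linked):

(* Run a potentially blocking call in its own thread so the rest of the
   program is not stalled while waiting for it. *)
let slow_request url =
  ignore url;
  Unix.sleep 2 (* stand-in for a call that blocks on the network *)

let () =
  let worker = Thread.create slow_request "http://example.com" in
  print_endline "main program keeps running while the call blocks";
  Thread.join worker;
  print_endline "blocking call completed"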
Threaded languages include the following.
Timers allow a blocked call to be interrupted. A periodic timer allows the programmer to emulate threading. Interrupts typically destroy any information related to the state of a blocked API call or intensive calculation, so the programmer must keep track of this information separately.
Un-threaded languages include the following.
Corrupted state will occur with timers. This is avoided with the following.
Track software state
Semaphore
Blocking
Faults
Faults are induced by signals in POSIX-compliant systems, and these signals originate from API calls, from the operating system, and from other applications.
Any signal that does not have handler code becomes a fault that causes premature application termination.
The handler is a function that is performed on-demand when the application receives a signal. This is called exception handling.
The kill signal (SIGKILL) is the only signal that cannot be caught or handled. All other signals can be directed to a handler function.
Handler functions come in two broad varieties.
Initialized
In-line
Initialized handler functions are paired with each signal when the software starts. This causes the handler function to start up when the corresponding signal arrives. This technique can be used with timers to emulate threading.
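For illustration, a minimal sketch in OCaml of pairing a handler with a signal at startup (the clean-up action is a placeholder):

(* The handler runs whenever SIGINT arrives, instead of the process
   terminating with the default behaviour. *)
let () =
  Sys.set_signal Sys.sigint
    (Sys.Signal_handle
       (fun _signum ->
         prerr_endline "SIGINT received: saving state and exiting";
         exit 1))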
In-line handler functions are associated with a call using specialized syntax. The most familiar is the following used with C++ and Java.
try
{
    API_call();
}
catch (...)   // C++ syntax; in Java: catch (Exception e)
{
    signal_handler_code();
}
Hardware failure
Hardware fault tolerance for software requires the following.
Backup
Redundancy
Backup maintains information in the event that hardware must be replaced. This can be done in one of two ways.
Automatic scheduled backup using software
Manual backup on a regular schedule
Information restore
Backup requires an information-restore strategy to make backup information available on a replacement system. The restore process is usually time-consuming, and information will be unavailable until the restore process is complete.
Redundancy relies on replicating information on more than one computing device so that the recovery delay is brief. This can be achieved using continuous backup to a live system that remains inactive until needed (synchronized backup).
This can also be achieved by replicating information as it is created on multiple identical systems, which can eliminate recovery delay.
See also
Built-in self-test
Built-in test equipment
Fault-tolerant design
Fault-tolerant system
Fault-tolerant computer system
Immunity-aware programming
Logic built-in self-test
N-version programming
Safety engineering
OpenSAF - Service Availability API
References
Further reading
Software fault tolerance, by Chris Inacio at Carnegie Mellon University (1998)
Software quality
Software architecture
Fault tolerance |
270062 | https://en.wikipedia.org/wiki/Operational%20semantics | Operational semantics | Operational semantics is a category of formal programming language semantics in which certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures, rather than by attaching mathematical meanings to its terms (denotational semantics). Operational semantics are classified in two categories: structural operational semantics (or small-step semantics) formally describe how the individual steps of a computation take place in a computer-based system; by opposition natural semantics (or big-step semantics) describe how the overall results of the executions are obtained. Other approaches to providing a formal semantics of programming languages include axiomatic semantics and denotational semantics.
The operational semantics for a programming language describes how a valid program is interpreted as sequences of computational steps. These sequences then are the meaning of the program. In the context of functional programming, the final step in a terminating sequence returns the value of the program. (In general there can be many return values for a single program, because the program could be nondeterministic, and even for a deterministic program there can be many computation sequences since the semantics may not specify exactly what sequence of operations arrives at that value.)
Perhaps the first formal incarnation of operational semantics was the use of the lambda calculus to define the semantics of Lisp. Abstract machines in the tradition of the SECD machine are also closely related.
History
The concept of operational semantics was used for the first time in defining the semantics of Algol 68.
The following statement is a quote from the revised ALGOL 68 report:
The meaning of a program in the strict language is explained in terms of a hypothetical computer
which performs the set of actions that constitute the elaboration of that program. (Algol68, Section 2)
The first use of the term "operational semantics" in its present meaning is attributed to
Dana Scott (Plotkin04).
What follows is a quote from Scott's seminal paper on formal semantics,
in which he mentions the "operational" aspects of semantics.
It is all very well to aim for a more ‘abstract’ and a ‘cleaner’ approach to
semantics, but if the plan is to be any good, the operational aspects cannot
be completely ignored. (Scott70)
Approaches
Gordon Plotkin introduced the structural operational semantics, Matthias Felleisen and Robert Hieb the reduction semantics, and Gilles Kahn the natural semantics.
Small-step semantics
Structural operational semantics
Structural operational semantics (SOS, also called structured operational semantics or small-step semantics) was introduced by Gordon Plotkin in (Plotkin81) as a logical means to define operational semantics. The basic idea behind SOS is to define the behavior of a program in terms of the behavior of its parts, thus providing a structural, i.e., syntax-oriented and inductive, view on operational semantics. An SOS specification defines the behavior of a program in terms of a (set of) transition relation(s). SOS specifications take the form of a set of inference rules that define the valid transitions of a composite piece of syntax in terms of the transitions of its components.
For a simple example, we consider part of the semantics of a simple programming language; proper illustrations are given in Plotkin81 and Hennessy90, and other textbooks. Let C range over programs of the language, and let σ range over states (e.g. functions from memory locations to values). If we have expressions (ranged over by E), values (V) and locations (L), then a memory update command L := E would have semantics:

\frac{\langle E,\sigma\rangle \Rightarrow V}{\langle L:=E,\ \sigma\rangle \longrightarrow (\sigma \cup (L \mapsto V))}

Informally, the rule says that "if the expression E in state σ reduces to value V, then the program L := E will update the state σ with the assignment L = V".
The semantics of sequencing can be given by the following three rules:

\frac{\langle C_1,\sigma\rangle \longrightarrow \sigma'}{\langle C_1;C_2,\ \sigma\rangle \longrightarrow \langle C_2,\ \sigma'\rangle}

\frac{\langle C_1,\sigma\rangle \longrightarrow \langle C_1',\sigma'\rangle}{\langle C_1;C_2,\ \sigma\rangle \longrightarrow \langle C_1';C_2,\ \sigma'\rangle}

\langle \mathbf{skip},\ \sigma\rangle \longrightarrow \sigma

Informally, the first rule says that, if program C1 in state σ finishes in state σ′, then the program C1;C2 in state σ will reduce to the program C2 in state σ′. (You can think of this as formalizing "You can run C1, and then run C2 using the resulting memory store.") The second rule says that if the program C1 in state σ can reduce to the program C1′ with state σ′, then the program C1;C2 in state σ will reduce to the program C1′;C2 in state σ′. (You can think of this as formalizing the principle for an optimizing compiler: "You are allowed to transform C1 as if it were stand-alone, even if it is just the first part of a program.") The semantics is structural, because the meaning of the sequential program C1;C2 is defined by the meaning of C1 and the meaning of C2.

If we also have Boolean expressions over the state, ranged over by B, then we can define the semantics of the while command:

\langle \mathbf{while}\ B\ \mathbf{do}\ C,\ \sigma\rangle \longrightarrow \langle \mathbf{if}\ B\ \mathbf{then}\ (C;\ \mathbf{while}\ B\ \mathbf{do}\ C)\ \mathbf{else}\ \mathbf{skip},\ \sigma\rangle
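These rules can be transcribed almost directly into executable form. The following is an illustrative sketch in OCaml (not part of the original presentation): states are association lists from locations to integers, Boolean expressions are represented by integer-valued expressions (non-zero meaning true), and the while rule is specialised accordingly.

(* Abstract syntax for the tiny language above. *)
type expr = Int of int | Deref of string | Add of expr * expr
type cmd = Skip | Assign of string * expr | Seq of cmd * cmd | While of expr * cmd

type state = (string * int) list

(* Evaluation of expressions: <E, s> => V *)
let rec eval (s : state) : expr -> int = function
  | Int n -> n
  | Deref l -> List.assoc l s
  | Add (a, b) -> eval s a + eval s b

(* One small step of a command: <c, s> --> <c', s'> *)
let rec step (c : cmd) (s : state) : cmd * state =
  match c with
  | Skip -> (Skip, s) (* skip cannot step any further *)
  | Assign (l, e) -> (Skip, (l, eval s e) :: List.remove_assoc l s)
  | Seq (Skip, c2) -> (c2, s)
  | Seq (c1, c2) ->
      let c1', s' = step c1 s in
      (Seq (c1', c2), s')
  | While (b, body) ->
      if eval s b <> 0 then (Seq (body, While (b, body)), s) else (Skip, s)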
Such a definition allows formal analysis of the behavior of programs, permitting the study of relations between programs. Important relations include simulation preorders and bisimulation.
These are especially useful in the context of concurrency theory.
Thanks to its intuitive look and easy-to-follow structure,
SOS has gained great popularity and has become a de facto standard in defining
operational semantics. As a sign of success, the original report (so-called Aarhus
report) on SOS (Plotkin81) has attracted more than 1000 citations according to CiteSeer,
making it one of the most cited technical reports in Computer Science.
Reduction semantics
Reduction semantics is an alternative presentation of operational semantics. Its key ideas were first applied to purely functional call by name and call by value variants of the lambda calculus by Gordon Plotkin in 1975 and generalized to higher-order functional languages with imperative features by Matthias Felleisen in his 1987 dissertation. The method was further elaborated by Matthias Felleisen and Robert Hieb in 1992 into a fully equational theory for control and state. The phrase “reduction semantics” itself was first coined by Felleisen and Daniel Friedman in a PARLE 1987 paper.
Reduction semantics are given as a set of reduction rules that each specify a single potential reduction step. For example, the following reduction rule states that an assignment statement can be reduced if it sits immediately beside its variable declaration:
To get an assignment statement into such a position it is “bubbled up” through function applications and the right-hand side of assignment statements until it reaches the proper point. Since intervening expressions may declare distinct variables, the calculus also demands an extrusion rule for expressions. Most published uses of reduction semantics define such “bubble rules” with the convenience of evaluation contexts. For example, the grammar of evaluation contexts in a simple call-by-value language can be given as

E ::= [ ] | (E e) | (v E)

where e denotes arbitrary expressions and v denotes fully-reduced values. Each evaluation context E includes exactly one hole [ ] into which a term is plugged in a capturing fashion. The shape of the context indicates with this hole where reduction may occur. To describe “bubbling” with the aid of evaluation contexts, a single axiom suffices:
This single reduction rule is the lift rule from Felleisen and Hieb's lambda calculus for assignment statements. The evaluation contexts restrict this rule to certain terms, but it is freely applicable in any term, including under lambdas.
Following Plotkin, showing the usefulness of a calculus derived from a set of reduction rules demands (1) a Church-Rosser lemma for the single-step relation, which induces an evaluation function, and (2) a Curry-Feys standardization lemma for the transitive-reflexive closure of the single-step relation, which replaces the non-deterministic search in the evaluation function with a deterministic left-most/outermost search. Felleisen showed that imperative extensions of this calculus satisfy these theorems. Consequences of these theorems are that the equational theory—the symmetric-transitive-reflexive closure—is a sound reasoning principle for these languages. However, in practice, most applications of reduction semantics dispense with the calculus and use the standard reduction only (and the evaluator that can be derived from it).
Reduction semantics are particularly useful given the ease by which evaluation contexts can model state or unusual control constructs (e.g., first-class continuations). In addition, reduction semantics have been used to model object-oriented languages, contract systems, exceptions, futures, call-by-need, and many other language features. A thorough, modern treatment of reduction semantics that discusses several such applications at length is given by Matthias Felleisen, Robert Bruce Findler and Matthew Flatt in Semantics Engineering with PLT Redex.
Big-step semantics
Natural semantics
Big-step structural operational semantics is also known under the names natural semantics, relational semantics and evaluation semantics. Big-step operational semantics was introduced under the name natural semantics by Gilles Kahn when presenting Mini-ML, a pure dialect of ML.
One can view big-step definitions as definitions of functions, or more generally of relations, interpreting each language construct in an appropriate domain. Its intuitiveness makes it a popular choice for semantics specification in programming languages, but it has some drawbacks that make it inconvenient or impossible to use in many situations, such as languages with control-intensive features or concurrency.
A big-step semantics describes in a divide-and-conquer manner how final evaluation results of language constructs can be obtained by combining the evaluation results of their syntactic counterparts (subexpressions, substatements, etc.).
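Continuing the illustrative OCaml sketch from the small-step section (and reusing its expr, cmd, state and eval definitions), a big-step evaluator computes the final state of a command directly; a non-terminating while loop simply makes exec diverge, which mirrors the drawback discussed below.

(* Big-step execution: exec c s returns the final state of command c
   started in state s; it does not return for diverging programs. *)
let rec exec (c : cmd) (s : state) : state =
  match c with
  | Skip -> s
  | Assign (l, e) -> (l, eval s e) :: List.remove_assoc l s
  | Seq (c1, c2) -> exec c2 (exec c1 s)
  | While (b, body) ->
      if eval s b <> 0 then exec (While (b, body)) (exec body s) else s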
Comparison
There are a number of distinctions between small-step and big-step semantics that influence whether one or the other forms a more suitable basis for specifying the semantics of a programming language.
Big-step semantics have the advantage of often being simpler (needing fewer inference rules) and often directly correspond to an efficient implementation of an interpreter for the language (hence Kahn calling them "natural".) Both can lead to simpler proofs, for example when proving the preservation of correctness under some program transformation.
The main disadvantage of big-step semantics is that non-terminating (diverging) computations do not have an inference tree, making it impossible to state and prove properties about such computations.
Small-step semantics give more control over the details and order of evaluation. In the case of instrumented operational semantics, this allows the operational semantics to track and the semanticist to state and prove more accurate theorems about the run-time behaviour of the language. These properties make small-step semantics more convenient when proving type soundness of a type system against an operational semantics.
See also
Algebraic semantics
Axiomatic semantics
Denotational semantics
Formal semantics of programming languages
References
Further reading
Gilles Kahn. "Natural Semantics". Proceedings of the 4th Annual Symposium on Theoretical Aspects of Computer Science. Springer-Verlag. London. 1987.
Gordon D. Plotkin. A Structural Approach to Operational Semantics. (1981) Tech. Rep. DAIMI FN-19, Computer Science Department, Aarhus University, Aarhus, Denmark. (Reprinted with corrections in J. Log. Algebr. Program. 60-61: 17-139 (2004), preprint).
Gordon D. Plotkin. The Origins of Structural Operational Semantics. J. Log. Algebr. Program. 60-61:3-15, 2004. (preprint).
Dana S. Scott. Outline of a Mathematical Theory of Computation, Programming Research Group, Technical Monograph PRG–2, Oxford University, 1970.
Adriaan van Wijngaarden et al. Revised Report on the Algorithmic Language ALGOL 68. IFIP. 1968. ()
Matthew Hennessy. Semantics of Programming Languages. Wiley, 1990. available online.
External links
Formal specification languages
Logic in computer science
Programming language semantics |
61203368 | https://en.wikipedia.org/wiki/Automotive%20security | Automotive security | Automotive security refers to the branch of computer security focused on the cyber risks related to the automotive context. The increasingly high number of ECUs in vehicles and, alongside, the implementation of multiple different means of communication from and towards the vehicle in a remote and wireless manner led to the necessity of a branch of cybersecurity dedicated to the threats associated with vehicles. Not to be confused with automotive safety.
Causes
The implementation of multiple ECUs (Electronic Control Units) inside vehicles began in the early '70s thanks to the development of integrated circuits and microprocessors that made it economically feasible to produce the ECUs on a large scale. Since then the number of ECUs has increased to up to 100 per vehicle. These units nowadays control almost everything in the vehicle, from simple tasks such as activating the wipers to more safety-related ones like brake-by-wire or ABS (Anti-lock Braking System). Autonomous driving is also strongly reliant on the implementation of new, complex ECUs such as the ADAS, alongside sensors (lidars and radars) and their control units.
Inside the vehicle, the ECUs are connected with each other through cabled or wireless communication networks, such as CAN bus (Controller Area Network), MOST bus (Media Oriented System Transport), FlexRay or RF (Radio Frequency) as in many implementations of TPMSs (Tire Pressure Monitoring Systems). It is important to notice that many of these ECUs require data received through these networks that arrive from various sensors to operate and use such data to modify the behavior of the vehicle (e.g., the cruise control modifies the vehicle's speed depending on signals arriving from a button usually located on the steering wheel).
Since the development of cheap wireless communication technologies such as Bluetooth, LTE, Wi-Fi, RFID and similar, automotive producers and OEMs have designed ECUs that implement such technologies with the goal of improving the experience of the driver and passengers. Examples include safety-related systems such as OnStar from General Motors, telematic units, communication between smartphones and the vehicle's speakers through Bluetooth, Android Auto, and Apple CarPlay.
Threat Model
Threat models of the automotive world are based on both real-world and theoretically possible attacks. Most real-world attacks aim at the safety of the people in and around the car, by modifying the cyber-physical capabilities of the vehicle (e.g., steering, braking, accelerating without requiring actions from the driver), while theoretical attacks have been supposed to focus also on privacy-related goals, such as obtaining GPS data on the vehicle, or capturing microphone signals and similar.
Regarding the attack surfaces of the vehicle, they are usually divided into long-range, short-range, and local attack surfaces: LTE and DSRC can be considered long-range ones, while Bluetooth and Wi-Fi are usually considered short-range although still wireless. Finally, USB, OBD-II and all the attack surfaces that require physical access to the car are defined as local. An attacker that is able to implement the attack through a long-range surface is considered stronger and more dangerous than one that requires physical access to the vehicle. In 2015, Miller and Valasek demonstrated that attacks on vehicles already on the market are feasible, disrupting the driving of a Jeep Cherokee while remotely connected to it through a wireless link.
Controller Area Network Attacks
The most common network used in vehicles and the one that is mainly used for safety-related communication is CAN, due to its real-time properties, simplicity, and cheapness. For this reason the majority of real-world attacks have been implemented against ECUs connected through this type of network.
The majority of attacks demonstrated either against actual vehicles or in testbeds fall in one or more of the following categories:
Sniffing
Sniffing in the computer security field generally refers to the possibility of intercepting and logging packets or more generally data from a network. In the case of CAN, since it is a bus network, every node listens to all communication on the network.
It is useful for the attacker to read data to learn the behavior of the other nodes of the network before implementing the actual attack. Usually, the final goal of the attacker is not to simply sniff the data on CAN, since the packets passing on this type of network are not usually valuable just to read.
Denial of Service
Denial of service (DoS) in information security is usually described as an attack that has the objective of making a machine or a network unavailable. DoS attacks against ECUs connected to CAN buses can be performed both against the network, by abusing the arbitration protocol used by CAN to always win arbitration, and against a single ECU, by abusing CAN's error-handling protocol. In this second case the attacker flags the messages of the victim as faulty to convince the victim that it is broken, and therefore make it shut itself off the network.
Spoofing
Spoofing attacks comprise all cases in which an attacker, by falsifying data, sends messages pretending to be another node of the network. In automotive security usually spoofing attacks are divided in Masquerade and Replay attacks. Replay attacks are defined as all those where the attacker pretends to be the victim and sends sniffed data that the victim sent in a previous iteration of authentication. Masquerade attacks are, on the contrary, spoofing attacks where the data payload has been created by the attacker.
Real Life Automotive Threat Example
Security researchers Charlie Miller and Chris Valasek have successfully demonstrated remote access to a wide variety of vehicle controls using a Jeep Cherokee as the target. They were able to control the radio, environmental controls, windshield wipers, and certain engine and brake functions.
The method used to hack the system was the implementation of a pre-programmed chip on the controller area network (CAN) bus. By inserting this chip into the CAN bus, they were able to send arbitrary messages to the bus. One other thing that Miller pointed out is the danger of the CAN bus: because it broadcasts every frame to every node, messages can be captured by attackers anywhere on the network.
The control of the vehicle was all done remotely, manipulating the system without any physical interaction. Miller states that he could control any of some 1.4 million vehicles in the United States regardless of location or distance; the only thing needed is for someone to turn on the vehicle to gain access.
Security Measures
The increasing complexity of devices and networks in the automotive context requires the application of security measures to limit the capabilities of a potential attacker. Since the early 2000 many different countermeasures have been proposed and, in some cases, applied. Following, a list of the most common security measures:
Sub-networks: to limit the attacker's capabilities even if they manage to access the vehicle remotely through a remotely connected ECU, the networks of the vehicle are divided into multiple sub-networks, and the most critical ECUs are not placed in the same sub-networks as the ECUs that can be accessed remotely.
Gateways: the sub-networks are divided by secure gateways or firewalls that block messages from crossing from a sub-network to the other if they were not intended to.
Intrusion Detection Systems (IDS): on each critical sub-network, one of the nodes (ECUs) connected to it has the goal of reading all data passing on the sub-network and detecting messages that, given some rules, are considered malicious (made by an attacker). Arbitrary or unexpected messages can thus be caught and the owner notified; a minimal sketch of one such detection rule is given after this list.
Authentication protocols: in order to implement authentication on networks where it is not already implemented (such as CAN), it is possible to design an authentication protocol that works on the higher layers of the ISO OSI model, by using part of the data payload of a message to authenticate the message itself.
Hardware security modules (HSM): since many ECUs are not powerful enough to meet real-time deadlines while executing encryption or decryption routines, a hardware security module can be placed between the ECU and the network to manage security on its behalf.
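The payload-level authentication mentioned above can be sketched as follows. This is a minimal, illustrative design, not a standardized automotive protocol: the pre-shared 128-bit key, the monotonic counter used for freshness (which also defeats the replay attack described earlier), and the 4-byte truncated HMAC that shares the 8-byte classic CAN payload with 4 bytes of application data are all assumptions made for the example.

```python
import hmac
import hashlib
import struct

KEY = bytes(16)   # placeholder pre-shared key, provisioned to sender and receiver
counter = 0       # freshness value; both sides track it to reject replayed frames

def authenticate(can_id: int, payload: bytes) -> bytes:
    """Build the frame data: up to 4 bytes of payload followed by a 4-byte truncated MAC."""
    global counter
    counter += 1
    assert len(payload) <= 4
    to_sign = struct.pack(">IQ", can_id, counter) + payload
    mac = hmac.new(KEY, to_sign, hashlib.sha256).digest()[:4]  # truncated to fit the frame
    return payload + mac

def verify(can_id: int, frame_data: bytes, expected_counter: int) -> bool:
    """Recompute the MAC with the expected counter and compare in constant time."""
    payload, mac = frame_data[:-4], frame_data[-4:]
    to_sign = struct.pack(">IQ", can_id, expected_counter) + payload
    expected = hmac.new(KEY, to_sign, hashlib.sha256).digest()[:4]
    return hmac.compare_digest(expected, mac)
```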
Legislation
In June 2020, the United Nations Economic Commission for Europe (UNECE) World Forum for Harmonization of Vehicle Regulations released two new regulations, R155 and R156, establishing "clear performance and audit requirements for car manufacturers" in terms of automotive cybersecurity and software updates.
Notes
Automotive design
Computer security |
35369684 | https://en.wikipedia.org/wiki/Media%20consumption | Media consumption | Media consumption or media diet is the sum of information and entertainment media taken in by an individual or group. It includes activities such as interacting with new media, reading books and magazines, watching television and film, and listening to radio. An active media consumer must have the capacity for skepticism, judgement, free thinking, questioning, and understanding.
History
For as long as there have been words and pictures, the people of the world have been consuming media. Improved technology such as the printing press has fed increased consumption. Around 1600 the camera obscura was perfected. Light was inverted through a small hole or lens from outside and projected onto a surface or screen, creating a moving image. This new medium had a very small effect on society compared to the older ones. The development of photography in the mid-19th century made those images permanent, greatly reducing the cost of pictures. By the end of the century millions of consumers were seeing new, professionally made photographs every day.
In the 1860s mechanisms such as the zoetrope, mutoscope and praxinoscope that produced two-dimensional drawings in motion were created. They were displayed in public halls for people to observe. These new media foreshadowed the mass media consumption of later years.
Around the 1880s, the development of the motion picture camera allowed individual component images to be captured and stored on a single reel. Motion pictures were projected onto a screen to be viewed by an audience. The motion picture camera affected the progression of media immensely, giving rise to the American film industry as well as early international movements such as German Expressionism, Surrealism and Soviet montage. For the first time people could tell stories on film and distribute their works to consumers worldwide.
In the 1920s electronic television was working in laboratories, and in the 1930s hundreds of receivers were in use worldwide. By 1941 the Columbia Broadcasting System (CBS) was broadcasting two 15-minute newscasts a day to a tiny audience on its New York television station. However, the television industry did not begin to boom until the general post–World War II economic expansion. Eventually television began to incorporate color, and multiple broadcasting networks were created.
Computers were developed in the mid-20th century and commercialized in the 1960s. Apple and other companies sold computers for hobbyists in the 1970s, and in 1981 IBM released computers intended for consumers.
On August 6, 1991, the internet and World Wide Web, long in use by computer specialists, became available to the public. This was the start of the commercialized Internet that people use today.
In 1999, Friends Reunited, one of the first social media sites, was released to the public. Since then, Myspace, Facebook, Twitter and other social networks have been created. Facebook and Twitter are the top social media sites in terms of usage.
Facebook has a total of 1,230,000,000 users while Twitter has 645,750,000. Both companies are worth billions of dollars and continue to grow.
Overall media consumption has immensely increased over time, from the era of the introduction of motion pictures, to the age of social networks and the internet.
People involved
Media is the sum of information and entertainment media taken in by an individual or group. The first source of media was solely word of mouth. When written language was established, scrolls were passed, but mass communication was never an option. It wasn't until the printing press that media could be consumed on a large scale. Johannes Gutenberg, a goldsmith and businessman from the mining town of Mainz in southern Germany, invented the printing press. His technology allowed books, newspapers, and flyers to be printed and distributed on a mass level.
The first newspaper in the British-American colonies was published by Benjamin Harris. The newspaper was one of the most influential developments in the history of media consumption, because it was relevant to everyone.
Eventually communication reached an electronic state, and the telegraph was invented. Harrison Dyar, who sent electrical sparks through chemically treated paper tape to burn dots and dashes, invented the first telegraph in the USA. The telegraph was the first piece of equipment that allowed users to send electronic messages. A more developed version came from Samuel Morse, whose telegraph printed code on tape and was operated using a keypad and an earpiece. The pattern of communication soon became known as Morse code.
Inventors Elisha Gray and Alexander Graham Bell both independently designed the telephone. The telephone was simple enough for everyone to use and didn't require learning a code.
Soon after the telephone came the radio. Combining technology from both the telegraph and telephone, Guglielmo Marconi sent and received his first radio signal in 1895.
Finally in 1947, after a long period of development, television exploded as a medium. No single person is responsible for the creation of television, but Marvin Middlemark's "rabbit ears" antenna helped make the television a commercial product. The television has by far been the most influential consumed medium, and allowed news to spread on a visual level.
In 1976, Apple created one of the first consumer computers. The computer was the start of mass written communication using email. Apple continues to be a leading computer company.
In 1998, the first ever social media site, SixDegrees.com, was created by Andrew Weinreich. It enabled users to upload a profile and make friends with other users. Shortly afterwards, in 1999, Friends Reunited was created by Steve and Julie Pankhurst and their friend Jason Porter.
Sites like MySpace, created by Tom Anderson, gained prominence in the early 2000s. By 2006, Facebook, created by Mark Zuckerberg, and Twitter, created by Jack Dorsey, had both become available to users throughout the world. These sites remain some of the most popular social networks on the Internet.
Increase
Among other factors, a person's access to media technology affects the amount and quality of his or her intake. In the United States, for instance, "U.C. San Diego scientists in 2009 estimated the 'average' American consumes 34 gigabytes of media a day." The amount of media consumption among individuals is increasing as new technologies are created. According to phys.org, a study by a researcher at the San Diego Supercomputer Center at the University of California estimated that by 2015, the sum of media requested by and delivered to consumers on mobile devices and in their homes would take more than 15 hours a day to see or hear, an amount equivalent to watching nine DVDs' worth of data per person per day.
With social media networks such as Instagram, Facebook, and Twitter growing rapidly, media consumption is reaching ever younger age groups, further increasing overall consumption. With mobile devices such as smartphones, news, entertainment, shopping and buying are all now available at any time and in any place.
Positive effects
There are a number of positive effects of media consumption. Television can have positive effects on children as they are growing up. Shows like Sesame Street teach valuable lessons to children in developmental stages, such as math, the alphabet, kindness, racial equality, and cooperation. Dora the Explorer introduces foreign language to children of all backgrounds in a fun, cooperative environment.
Mass media has a huge grasp on today's adolescents. Many young people use different types of social media daily. Mass media can be used to socialize adolescents from around the world and can help to give them a fundamental understanding of social norms.
Media relating to advertising can also have a positive effect. Some alcohol manufacturers are known to spend at least ten percent of their budget on warnings about the dangers of drinking and driving. Also, studies show that milk consumption (though controversial) shot up in children fifteen years of age and younger due to print and broadcast advertisements.
Many video games can also have positive effects. Games like Wii Tennis and Wii Fit improve hand-eye coordination as well as general mental and physical health.
Video games, including shooting games, may positively impact a child's learning, as well as physical and mental health and social skills. Even games rated for mature audiences have been found to be beneficial to the development of children, according to a study published by the American Psychological Association (APA). The study showed that there is a need to look at the positive effects as well as the negative ones. When a child plays video games, they naturally develop problem-solving skills. According to a multi-year study published in 2013, playing strategic video games, such as role-playing games, improved problem-solving skills and was associated with a significant rise in school grades. The study also showed that children's creativity was enhanced by playing all genres of video games, including mature-rated games. The research indicated that video games benefit children significantly more than other sources of technology.
The internet itself is an overwhelmingly useful resource for people of all ages, effectively serving as a personal library for any who access it. The sheer volume of educational websites, information and services offered are so immense that research has become a far easier task than it was in any previous period in human history. Social media has provided invaluable benefits for people over the course of its lifetime, and has served as an incredibly effective method of interacting and communicating with others in nearly every part of the world.
Media consumption has proven to serve as an indispensable asset in the educational field, serving both instructors and students alike. Instructors and students consume media for school curricula in Ontario. Media literacy is prominent amongst the youth who have essentially been born into an era where media is a global driving force. When a student learns to approach media sources with a critical lens, it can be observed that no form of media is truly neutral. Students who consume media are capable of questioning the validity of the media they are exposed to, in turn developing their own sense of critical thinking. To broaden their comprehension skills, students often find it useful to question an author's purpose, the reasoning for the placement of specific images or motifs, the representation of content and its meaning to individuals, and the effects of the media on individual and societal thinking. Media related to learning is typically considered a source as well as a tool. Since its start, many have successfully used Rosetta Stone (software) to assist in the process of learning a new language. Rosetta Stone is available on several platforms (e.g. iPad, tablets, phone apps, and websites).
Negative effects
Media consumption can have a wide range of negative behavioral and emotional effects. There are many instances of violence in movies, television, video games and websites which can affect one's level of aggression. These violent depictions can desensitize viewers to acts of violence and can also provoke mimicking of the acts. Since violence is so rampant in media, viewers believe they live in a more violent world than they actually do.
The reach of media is expanding globally, and with it television has become a vice around the world. Television addiction has been labeled "the plug-in drug" since 1977. Televisions are now located in almost every home; according to the most recent Nielsen estimates, there are 116.4 million TV homes in the U.S. alone.
Television can have a negative impact on adolescents and cause them to behave in ways that fall outside normal social norms. An article about the effect of media violence on society states that extensive TV viewing among adolescents and young adults is associated with subsequent aggressive acts. Programs that portray violent acts can change an adolescent's view of violence, and this may lead them to develop aggressive behavior. These shows usually portray a person who commits a crime or resorts to violence. They also show that these people go unpunished for their crimes, creating the notion that crime is something a person can get away with. Studies show that 65% of people between the ages of 8 and 18 have a television in their room. The average high-schooler watches 14 hours of television a week. Excessive television viewing and computer game playing have also been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems.
When adolescents watch television for long periods of time, they spend less time being active and engaged in physical activity. Many adolescents who spend large amounts of time watching television see actors as role models and try to emulate them; this can also have a negative impact on people's body images, particularly among women. After seeing beautiful and thinner-than-average women in the media, viewers may feel worse about themselves and sometimes develop eating disorders. Some believe that the reason obesity rates have greatly increased in the last 20 years is increased media consumption, since children are spending much more time playing video games and watching television than exercising.
Social media is said to also cause anxiety and depression. Research suggests that young people who spend more than 2 hours per day on social media are more likely to report poor mental health, including psychological distress.
Numerous studies have also shown that media consumption has a significant association with poor sleep quality. Television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance.
Another problem that has developed due to increased media consumption is that people are becoming less independent. With text messaging and social media, people want instant gratification from their friends and often feel hurt if they do not receive an immediate response. Instead of having self-validation, people often need validation from others. Another issue with independence is that since children frequently get cellphones when they are very young, they are always connected and never truly alone. Today, many children do not have the rite of passage of being on their own because they can always call their parents if they need help or are frightened.
Minorities are often put in a negative light in the media as well, with blacks being portrayed as criminals, Hispanics portrayed as illegal aliens, and people from the Middle East portrayed as terrorists. Research has shown that consuming much media with headlines that depict minorities in negative ways can affect how people think.
Effects on Self-Esteem
Media has played a huge role in society for years in selling people on expectations of how an ideal male and female body should look. These images of the "ideal body" can have a very negative effect on self-esteem in both men and women. These images can also play a significant role in eating disorders in men and women. The idea of body comparison goes back to Festinger's (1954) social comparison theory. Festinger argues that individuals make body comparisons in areas to which they relate. If someone who is overweight is in an environment that focuses on health, thinness, or body image (e.g. the gym or the beach), they may be more likely to see thinness as an ideal, which can increase dissatisfaction with their own body. The more a person engages in body comparison, the more likely they may struggle with low self-esteem and a negative body image. Women are led to believe that to be beautiful, they must be a size zero and have long legs. Men are sold the notion that they must have big biceps and zero body fat. Reading magazines with images of toned muscular men has been reported to lower body image and self-esteem in men, who then start worrying more about their own health and physical fitness.
Social media
The amount of time spent on social media can be related to self-esteem. Research has shown that individuals with lower self-esteem may have an easier time expressing themselves on social media rather than in the real world. Many people use metrics such as follower counts and likes to measure acceptance or rejection by their peers. One study from the Journal of Experimental Social Psychology argues that individuals who feel accepted and part of the "in crowd" have a higher sense of self-esteem than those who do not feel as though they are a part of these crowds.
Semiotics of American youth media consumption
American youth have personal television sets, laptops, iPods and cell phones all at their disposal. They spend more time with media than on any single activity other than sleeping. As of 2008, the average American aged 8 to 18 reported more than 6 hours of daily media use. The growing phenomenon of "media multitasking"—using several forms of media at the same time—multiplies that figure to 8.5 hours of media exposure daily. Media exposure begins early, typically increases until children begin school, then climbs to a peak of almost 8 hours daily among 11- and 12-year-old children. Media exposure is positively related to risk-taking behaviors and is negatively related to personal adjustment and school performance.
Of teenagers ages 12 to 17 in 2014, 78% had a cell phone, and 47% of those owned smartphones. 23% of teens owned a tablet computer and 93% had a computer or access to one at home. Of teenagers ages 14 to 17, 74% accessed the Internet on mobile devices occasionally. One in four teens were mostly cell phone users, consuming a majority of their media with applications on their phone.
Media consumption, particularly social media consumption, plays a major role in the socialization and social behaviors of adolescents. Socializing through media differs from socializing through school, community, family, and other social functions. Since adolescents typically have greater control over their media choices than over other, face-to-face social situations, many develop self-socialization patterns. This behavior manifests itself actively in personal social development and outcomes due to the vast array of choices made available through social media. Adolescents have the ability to choose media that best suit their personalities and preferences, which in turn can create youth who have a skewed view of the world and limited social interaction skills. Socialization can consequently grow increasingly difficult for youth. Media, parents and peers may each convey conflicting messages to adolescents. With vastly differing views of how to approach various situations, confusion can result and youth may avoid or internalize their social weaknesses.
Social semiotics play a significant role in how adolescents learn and employ social interaction. Impressionable adolescents regularly imitate the sign systems seen in the media. These semiotic systems affect their behavior through connotations, narratives, and myths. Adolescents are shaped by the sign systems in the media they consume. For example, many young girls in the 1990s dressed and acted like the Spice Girls, a pop band that gathered prolific and critical acclaim at the time. Similarly, boy bands created a trend of many teenage boys frosting their hair in the early 2000s. With more exposure to the media and images of models, young women are more likely to conform to the ideals of specific body images. Anorexia, bulimia and models smoking convey to girls that a feminine person is thin, beautiful, and must do certain things to her body to be attractive. A code of femininity (see media and gender) implies today that a "true" woman is thin, girlish, frail, passive, and focused on serving others. On the other hand, the code of masculinity for young males raised within the past several decades may include the ideals of a strongly individualistic and self-sufficient nature, often personified in film characters such as cowboys and outlaw bikers. The images, myths, and narratives of these ideas imply that a "true" man is a relentless problem solver, physically strong, emotionally inexpressive, and at times a daredevil with little regard for societal expectations and the law of the land.
The never ceasing flood of signs, images, narratives, and myths surrounding consumers of media have the capability to influence behavior through the use of codes. Codes are maps of meaning, systems of signs that are used to interpret behavior. Codes connect semiotic systems of meaning with social structure and values. The idea of being judged on femininity or clothing relates to experiences later in life, including job interviews and the emphasis placed on reaching financial success.
Media consumption has become an integral part of modern culture, and has shaped younger generations through socialization and the interpretations provided for the signs and world around them.
Effect on Public Attitudes Regarding Crime and Justice
Media consumption affects the public's perception of the justice system through its relationship with fear of crime, the perceived effectiveness of law enforcement, and general attitudes about punishment for crime. The justice system has consistently been portrayed negatively in mass media through its depictions of criminals, deviants, and law enforcement officials, in turn affecting the public's overall perception of it.
A 2003 study by Dowler showed how media consumption influences public attitudes regarding crime and justice. In this study, the relationship between media and crime was found to depend on characteristics of the message and the receiving audience: substantial amounts of reported local crime raised fear, while lower reported crime led to a feeling of safety. George Gerbner's empirical studies of the impact of media consumption found that television viewers of crime-based shows are more fearful of crime than those who do not consume that type of media.
A study conducted by Chermak, McGarrell, & Gruenewald focused on media coverage of police misconduct, producing results where greater consumption of media portraying dishonesty amongst law enforcement led to increasing confirmation bias in the direction of the officer's guilt.
See also
References
Further reading
1990s
2000s
2010s
Media diets of notable people
(Notables include Barney Frank, Aaron Sorkin, David Brooks, Clay Shirky, Peggy Noonan)
(Lists of titles in "personal libraries of famous readers" such as Harry Houdini, Ralph Ellison, Susan B. Anthony)
External links
Mass media
Reading (process)
Metaphors referring to food and drink
Information society |
14619242 | https://en.wikipedia.org/wiki/VEDIT | VEDIT | Vedit is a commercial text editor for 8080/Z-80-based systems, Microsoft Windows and MS-DOS from Greenview Data, Inc.
Vedit was one of the pioneers in visual editing. It used a command set resembling TECO. Today, it is a powerful and feature-rich general-purpose text editor.
Vedit can edit any file, including binary files and huge multi-gigabyte files. Still, it is compact and extremely fast, perhaps because it is written mostly in assembly language.
History
Vedit (Visual Editor) was created by Ted Green in 1979. It was commercially published by CompuView in 1980 for the CP/M operating system running on 8080/Z80-based computers. When the IBM-PC was introduced, Vedit was one of the first applications available for it in 1982. Versions of Vedit were available for MS-DOS, CP/M-86 and CSP DOS.
During the following years, versions were developed for OS/2, Xenix, SCO Unix and QNX. On QNX, Vedit was supplied as the standard editor. Vedit was sold in three versions: Vedit Jr, Vedit and Vedit Plus. Later, the first two were dropped and Vedit Plus was renamed simply Vedit. CompuView was shut down in 1989, but a new company, Greenview Data, continued the development of Vedit starting in 1990. The first Windows version (Vedit Plus 5.0) was published in 1997. The 32-bit Windows version (v5.1) was published in 1998. The 64-bit Vedit Pro64 was published in 2003. It uses 64-bit addresses and data handling to support files larger than 2 GB, but does not require a 64-bit processor or 64-bit OS.
Development and marketing of the Unix, QNX and other versions were gradually stopped. The DOS version continued to be developed in parallel with the Windows version, and both have the same functions as far as possible. The DOS version is no longer sold separately or supported, but it is still packaged with the Windows versions. In February 2008, Greenview Data announced that the old CP/M and CP/M-86 versions of Vedit could be freely shared.
With version 6.20.1 (May 2011) the old Windows Help system was replaced with the HTML Help system in order to support 64-bit versions of Windows (Windows Vista, Windows 7, Windows 8/8.1 and Windows 10).
Greenview Data, Inc. was purchased in March 2017 by Zix Corp. The status of Vedit is unknown at this time, but there has not been an update since 2015.
Technology
The CP/M and DOS versions of Vedit were written entirely in assembly language; the DOS .exe file is only 158 KB. The Windows version was also written mostly in assembly, but the user interface was written in C. The size of the exe file is 573 KB, and no DLLs are used. Vedit uses its own file buffering, which is faster than the virtual memory of Windows, and it uses very little of Windows' resources. When editing large files, only part of the file is loaded in memory at a time and temporary files are created only as needed. Thus, dozens of gigabyte-sized files can be open simultaneously on 32-bit Windows, and even a multi-gigabyte file is opened in a fraction of a second. As a result, Vedit loads quickly and executes all operations quickly, and since it uses few Windows resources, it does not slow down other applications.
Features
Vedit can edit any file, including database, binary and EBCDIC files and huge files. The largest file size for the standard version of Vedit is 2 GB; Vedit Pro64 can edit files of unlimited size.
DOS, Unix and Mac files can be edited and are automatically detected.
FTP editing allows editing files on a remote computer.
Multiple files can be edited using a tabbed document interface, a multiple document interface, or any mixture of the two. A special feature of Vedit is that a document window can be 'full size'. The size of such a window is adjusted automatically (as with maximized windows), but overlapping windows can be used at the same time.
Vedit has project support. Opening a project automatically loads all the files, file list, settings and session details. You can instantly switch from one project to another by double-clicking on the project name in the sidebar.
Vedit's search function supports both regular expressions and its own pattern matching codes (which are faster and easier to use).
The Wildfile function allows you to perform searches, search/replace operations, filtering, commands, or even complex macros on a large set of files on disk, recursively.
Other search functions include Incremental search, Search block/word, Search all buffers, Search all show/select.
Block operations can be performed using Windows Clipboard or one of Vedit's 100 internal text registers. Or you can copy a block directly to another part of the file or to another file. Columnar blocks are supported. A special feature of Vedit is the persistent blocks that stay selected even if you move cursor.
For programmers, Vedit has features such as syntax highlighting, bracket matching, template editing, auto indent, compiler support, function select and Ctags support. More than 50 programming languages and compilers are supported, and it is quite easy to add more.
Vedit has a C-like macro language. It is interpreted, so there is no need for compiling. This makes it easy to automate tasks or to add new features to Vedit; in fact, many of the built-in functions of Vedit have been implemented using the macro language. Macros can be called from a file on disk, added to the User or Tools menu, or attached to any keyboard macro.
Event macros can be executed automatically, for example on file open and close, mouse double-click etc. These can be used for example for automatic check-out / check-in from revision control systems.
A special command mode window allows entering any macro command sequence directly, and doubles as an on-line calculator.
Keyboard macros can be recorded or typed in, or you can edit the whole keyboard configuration.
Vedit is highly configurable with more than 200 settings, most of which can be selected from the Config menu. The keyboard is fully configurable, too.
Vedit can be installed on and run directly from USB flash drive or CD-ROM.
Documentation
Vedit comes with comprehensive Online help and interactive Tutorial.
In addition, there are two PDF manuals: User's Manual (449 pages) and Macro Language Manual (305 pages).
The manuals are available as printed books, too.
More support can be found from the User Forum.
Limitations
The current version of Vedit does not support Unicode editing. However, Vedit can convert Unicode files to the Windows or OEM (DOS) character set and vice versa.
See also
List of text editors
Comparison of text editors
Wikipedia:Text editor support
References
External links
VEDIT User Forum
QNX man page for Vedit
Windows text editors
CP/M software
Portable software
Hex editors |
25231 | https://en.wikipedia.org/wiki/QuickTime | QuickTime | QuickTime is an extensible multimedia framework developed by Apple Inc., capable of handling various formats of digital video, picture, sound, panoramic images, and interactivity. Created in 1991, the latest Mac version, QuickTime X, is available for Mac OS X Snow Leopard up to macOS Mojave. Apple ceased support for the Windows version of QuickTime in 2016, and ceased support for QuickTime 7 on macOS in 2018.
As of Mac OS X Lion, the underlying media framework for QuickTime, QTKit, was deprecated in favor of a newer graphics framework, AVFoundation, and completely discontinued as of macOS Catalina.
Overview
QuickTime is bundled with macOS. QuickTime for Microsoft Windows is downloadable as a standalone installation, and was bundled with Apple's iTunes prior to iTunes 10.5, but is no longer supported and therefore security vulnerabilities will no longer be patched. Already, at the time of the Windows version's discontinuation, two such zero-day vulnerabilities (both of which permitted arbitrary code execution) were identified and publicly disclosed by Trend Micro; consequently, Trend Micro strongly advised users to uninstall the product from Windows systems.
Software development kits (SDK) for QuickTime are available to the public with an Apple Developer Connection (ADC) subscription.
It is available free of charge for both macOS and Windows operating systems. There are some other free player applications that rely on the QuickTime framework, providing features not available in the basic QuickTime Player. For example, iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless. In addition, macOS has a simple AppleScript that can be used to play a movie in full-screen mode, but since version 7.2 full-screen viewing is now supported in the non-Pro version.
QuickTime Pro
QuickTime Player 7 is limited to only basic playback operations unless a QuickTime Pro license key is purchased from Apple. Until Catalina, Apple's professional applications (e.g. Final Cut Studio, Logic Studio) included a QuickTime Pro license. Pro keys are specific to the major version of QuickTime for which they are purchased and unlock additional features of the QuickTime Player application on macOS or Windows. The Pro key does not require any additional downloads; entering the registration code immediately unlocks the hidden features.
QuickTime 7 is still available for download from Apple, but as of mid-2016, Apple stopped selling registration keys for the Pro version.
Features enabled by the Pro license include, but are not limited to:
Editing clips through the cut, copy and paste functions, merging separate audio and video tracks, and freely placing the video tracks on a virtual canvas with the options of cropping and rotation.
Saving and exporting (encoding) to any of the codecs supported by QuickTime. QuickTime 7 includes presets for exporting video to a video-capable iPod, Apple TV, and the iPhone.
Saving existing QuickTime movies from the web directly to a hard disk drive. This is often, but not always, either hidden or intentionally blocked in the standard mode. Two options exist for saving movies from a web browser:
Save as source – This option will save the embedded video in its original format. (I.e. not limited to .mov files.)
Save as QuickTime movie – This option will save the embedded video in a .mov file format no matter what the original container is/was.
Mac OS X Snow Leopard includes QuickTime X. QuickTime Player X lacks cut, copy and paste and will only export to four formats, but its limited export feature is free. Users do not have an option to upgrade to a Pro version of QuickTime X, but those who have already purchased QuickTime 7 Pro and are upgrading to Snow Leopard from a previous version of Mac OS X will have QuickTime 7 stored in the Utilities or user defined folder. Otherwise, users will have to install QuickTime 7 from the "Optional Installs" directory of the Snow Leopard DVD after installing the OS.
Mac OS X Lion and later also include QuickTime X. No installer for QuickTime 7 is included with these software packages, but users can download the QuickTime 7 installer from the Apple support site. QuickTime X on later versions of macOS support cut, copy and paste functions similarly to the way QuickTime 7 Pro did; the interface has been significantly modified to simplify these operations, however.
On September 24, 2018, Apple ended support for QuickTime 7 and QuickTime Pro, and updated many download and support pages on their website to state that QuickTime 7 "will not be compatible with future macOS releases."
QuickTime framework
The QuickTime framework provides the following:
Encoding and transcoding video and audio from one format to another. Command-line utilities afconvert (to convert audio formats), avconvert (to convert video formats) and qtmodernizer (to automatically convert older formats to H.264/AAC) are provided with macOS for power users (see the usage sketch after this list).
Decoding video and audio, then sending the decoded stream to the graphics or audio subsystem for playback. In macOS, QuickTime sends video playback to the Quartz Extreme (OpenGL) Compositor.
A "component" plug-in architecture for supporting additional 3rd-party codecs (such as DivX).
As of early 2008, the framework hides many older codecs listed below from the user although the option to "Show legacy encoders" exists in QuickTime Preferences to use them. The framework supports the following file types and codecs natively:
Due to macOS Mojave being the last version to include support for 32-bit APIs and Apple's plans to drop 32-bit application support in future macOS releases, many codecs will no longer be supported in newer macOS releases, starting with macOS Catalina, which was released on October 7, 2019.
PictureViewer
PictureViewer is a component of QuickTime for Microsoft Windows and the Mac OS 8 and Mac OS 9 operating systems. It is used to view picture files from the still image formats that QuickTime supports. In macOS, it is replaced by Preview.
As of version 7.7.9, the Windows version requires one to go to their "Windows Uninstall Or Change A Program" screen to "modify" their installation of QuickTime 7 to include the "Legacy QuickTime Feature" of "QuickTime PictureViewer."
File formats
The native file format for QuickTime video, QuickTime File Format, specifies a multimedia container file that contains one or more tracks, each of which stores a particular type of data: audio, video, effects, or text (e.g. for subtitles). Each track either contains a digitally encoded media stream (using a specific format) or a data reference to the media stream located in another file. The ability to contain abstract data references for the media data, and the separation of the media data from the media offsets and the track edit lists means that QuickTime is particularly suited for editing, as it is capable of importing and editing in place (without data copying).
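The container is organized as a sequence of length-prefixed structures called atoms (the layout later generalized by the ISO Base Media File Format, discussed below). The sketch below walks the top-level atoms of a local file; the file name is a placeholder and error handling is omitted:

```python
import struct

def list_top_level_atoms(path: str) -> None:
    """Print the type and size of each top-level atom in a QuickTime file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, atom_type = struct.unpack(">I4s", header)   # 32-bit size + 4-char type
            if size == 1:                                      # 64-bit extended size follows
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:                                    # atom extends to end of file
                print(atom_type.decode("ascii", "replace"), "(to end of file)")
                break
            else:
                payload = size - 8
            print(atom_type.decode("ascii", "replace"), size)
            f.seek(payload, 1)                                 # skip over the atom payload

list_top_level_atoms("movie.mov")  # typically prints atoms such as ftyp, moov and mdat
```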
Other file formats that QuickTime supports natively (to varying degrees) include AIFF, WAV, DV-DIF, MP3, and MPEG program stream. With additional QuickTime Components, it can also support ASF, DivX Media Format, Flash Video, Matroska, Ogg, and many others.
QuickTime and MPEG-4
On February 11, 1998, the ISO approved the QuickTime file format as the basis of the MPEG‑4 file format. The MPEG-4 file format specification was created on the basis of the QuickTime format specification published in 2001. The MP4 (.mp4) file format was published in 2001 as the revision of the MPEG-4 Part 1: Systems specification published in 1999 (ISO/IEC 14496-1:2001). In 2003, the first version of MP4 format was revised and replaced by MPEG-4 Part 14: MP4 file format (ISO/IEC 14496-14:2003). The MP4 file format was generalized into the ISO Base Media File Format ISO/IEC 14496-12:2004, which defines a general structure for time-based media files. It in turn is used as the basis for other multimedia file formats (for example 3GP, Motion JPEG 2000). A list of all registered extensions for ISO Base Media File Format is published on the official registration authority website www.mp4ra.org. This registration authority for code-points in "MP4 Family" files is Apple Computer Inc. and it is named in Annex D (informative) in MPEG-4 Part 12.
By 2000, MPEG-4 formats became industry standards, first appearing with support in QuickTime 6 in 2002. Accordingly, the MPEG-4 container is designed to capture, edit, archive, and distribute media, unlike the simple file-as-stream approach of MPEG-1 and MPEG-2.
Profile support
QuickTime 6 added limited support for MPEG-4, specifically encoding and decoding using Simple Profile (SP). Advanced Simple Profile (ASP) features, like B-frames, were unsupported (in contrast with, for example, encoders such as XviD or 3ivx). QuickTime 7 supports the H.264 encoder and decoder.
Container benefits
Because both MOV and MP4 containers can use the same MPEG-4 codecs, they are mostly interchangeable in a QuickTime-only environment. MP4, being an international standard, has more support. This is especially true on hardware devices, such as the Sony PSP and various DVD players; on the software side, most DirectShow/Video for Windows codec packs include an MP4 parser, but not one for MOV.
In QuickTime Pro's MPEG-4 Export dialog, an option called "Passthrough" allows a clean export to MP4 without affecting the audio or video streams. QuickTime 7 now supports multi-channel AAC-LC and HE-AAC audio (used, for example, in the high-definition trailers on Apple's site), for both .MOV and .MP4 containers.
History
Apple released the first version of QuickTime on December 2, 1991 as a multimedia add-on for System 6 and later. The lead developer of QuickTime, Bruce Leak, ran the first public demonstration at the May 1991 Worldwide Developers Conference, where he played Apple's famous 1984 advertisement in a window at 320×240 pixels resolution.
QuickTime 1.x
The original video codecs included:
the Animation codec, which used run-length encoding and was better suited to cartoon-type images with large areas of flat color
the Apple Video codec (also known as "Road Pizza"), suited to normal live-action video.
the Graphics codec, for 8-bit images, including ones that had undergone dithering
The first commercial project produced using QuickTime 1.0 was the CD-ROM From Alice to Ocean. The first publicly visible use of QuickTime was Ben & Jerry's interactive factory tour (dubbed The Rik & Joe Show after its in-house developers). The Rik and Joe Show was demonstrated onstage at MacWorld in San Francisco when John Sculley announced QuickTime.
Apple released QuickTime 1.5 for Mac OS in the latter part of 1992. This added the SuperMac-developed Cinepak vector-quantization video codec (initially known as Compact Video). It could play video at 320×240 resolution at 30 frames per second on a 25 MHz Motorola 68040 CPU. It also added text tracks, which allowed for captioning, lyrics and other potential uses.
Apple contracted San Francisco Canyon Company to port QuickTime to the Windows platform. Version 1.0 of QuickTime for Windows provided only a subset of the full QuickTime API, including only movie playback functions driven through the standard movie controller.
QuickTime 1.6 came out the following year. Version 1.6.2 first incorporated the "QuickTime PowerPlug" which replaced some components with PowerPC-native code when running on PowerPC Macs.
QuickTime 2.x
Apple released QuickTime 2.0 for System Software 7 in June 1994—the only version never released for free. It added support for music tracks, which contained the equivalent of MIDI data and which could drive a sound-synthesis engine built into QuickTime itself (using a limited set of instrument sounds licensed from Roland), or any external MIDI-compatible hardware, thereby producing sounds using only small amounts of movie data.
Following Bruce Leak's departure to Web TV, the leadership of the QuickTime team was taken over by Peter Hoddie.
QuickTime 2.0 for Windows appeared in November 1994 under the leadership of Paul Charlton. As part of the development effort for cross-platform QuickTime, Charlton (as architect and technical lead), along with ace individual contributor Michael Kellner and a small highly effective team including Keith Gurganus, ported a subset of the Macintosh Toolbox to Intel and other platforms (notably, MIPS and SGI Unix variants) as the enabling infrastructure for the QuickTime Media Layer (QTML) which was first demonstrated at the Apple Worldwide Developers Conference (WWDC) in May 1996. The QTML later became the foundation for the Carbon API which allowed legacy Macintosh applications to run on the Darwin kernel in Mac OS X.
The next versions, 2.1 and 2.5, reverted to the previous model of giving QuickTime away for free. They improved the music support and added sprite tracks which allowed the creation of complex animations with the addition of little more than the static sprite images to the size of the movie. QuickTime 2.5 also fully integrated QuickTime VR 2.0.1 into QuickTime as a QuickTime extension. On January 16, 1997, Apple released the QuickTime MPEG Extension (PPC only) as an add-on to QuickTime 2.5, which added software MPEG-1 playback capabilities to QuickTime.
Lawsuit against San Francisco Canyon
In 1994, Apple filed suit against software developer San Francisco Canyon for intellectual property infringement and breach of contract. Apple alleged that San Francisco Canyon had helped develop Video for Windows using several hundred lines of unlicensed QuickTime source code. The company had been contracted by Intel to help Video for Windows make better use of system resources on Intel processors; the disputed code was subsequently unilaterally removed. Microsoft and Intel were added to the lawsuit in 1995. The suit ended in a settlement in 1997.
QuickTime 3.x
The release of QuickTime 3.0 for Mac OS on March 30, 1998 introduced the now-standard revenue model of releasing the software for free, but with additional features of the Apple-provided MoviePlayer application that end-users could only unlock by buying a QuickTime Pro license code. Since the "Pro" features were the same as the existing features in QuickTime 2.5, any previous user of QuickTime could continue to use an older version of the central MoviePlayer application for the remaining lifespan of Mac OS to 2002; indeed, since these additional features were limited to MoviePlayer, any other QuickTime-compatible application remained unaffected.
QuickTime 3.0 added support for graphics importer components that could read images from GIF, JPEG, TIFF and other file formats, and video output components which served primarily to export movie data via FireWire. Apple also licensed several third-party technologies for inclusion in QuickTime 3.0, including the Sorenson Video codec for advanced video compression, the QDesign Music codec for substantial audio compression, and the complete Roland Sound Canvas instrument set and GS Format extensions for improved playback of MIDI music files. It also added video effects which programmers could apply in real-time to video tracks. Some of these effects would even respond to mouse clicks by the user, as part of the new movie interaction support (known as wired movies).
QuickTime interactive
During the development cycle for QuickTime 3.0, part of the engineering team was working on a more advanced version of QuickTime to be known as QuickTime interactive or QTi. Although similar in concept to the wired movies feature released as part of QuickTime 3.0, QuickTime interactive was much more ambitious. It allowed any QuickTime movie to be a fully interactive and programmable container for media. A special track type was added that contained an interpreter for a custom programming language based on 68000 assembly language. This supported a comprehensive user interaction model for mouse and keyboard event handling based in part on the AML language from the Apple Media Tool.
The QuickTime interactive movie was to have been the playback format for the next generation of HyperCard authoring tool. Both the QuickTime interactive and the HyperCard 3.0 projects were canceled in order to concentrate engineering resources on streaming support for QuickTime 4.0, and the projects were never released to the public.
QuickTime 4.x
Apple released QuickTime 4.0 on June 8, 1999 for Mac OS 7.5.5 through 8.6 (later Mac OS 9) and Windows 95, Windows 98, and Windows NT. Three minor updates (versions 4.0.1, 4.0.2, and 4.0.3) followed.
It introduced features that most users now consider basic:
Graphics exporter components, which could write some of the same formats that the previously introduced importers could read. (GIF support was omitted, possibly because of the LZW patent.)
Support for the QDesign Music 2 and MPEG-1 Layer 3 audio (MP3).
QuickTime 4 was the first version to support streaming. It was accompanied by the release of the free QuickTime Streaming Server version 1.0.
QuickTime 4 Player introduced brushed metal to the Macintosh user interface.
On December 17, 1999, Apple provided QuickTime 4.1, this version's first major update. Two minor versions (4.1.1 and 4.1.2) followed. The most notable improvements in the 4.1.x family were:
Support for files larger than 2.0 GB in Mac OS 9. (This is a consequence of Mac OS 9 requiring the HFS Plus filesystem.)
Variable bit rate (VBR) support for MPEG-1 Layer 3 (MP3) audio.
Support for Synchronized Multimedia Integration Language (SMIL).
Introduction of AppleScript support in Mac OS.
The requirement of a PowerPC processor for Mac OS systems. QuickTime 4.1 dropped support for Motorola 68k Macintosh systems.
QuickTime 5.x
QuickTime 5 was one of the shortest-lived versions of QuickTime, released in April 2001 and superseded by QuickTime 6 a little over a year later. This version was the last to have greater capabilities under Mac OS 9 than under Mac OS X, and the last version of QuickTime to support Mac OS versions 7.5.5 through 8.5.1 on a PowerPC Mac and Windows 95. Version 5.0 was initially only released for Mac OS and Mac OS X on April 14, 2001, and version 5.0.1 followed shortly thereafter on April 23, 2001, supporting the classic Mac OS, Mac OS X, and Windows. Three more updates to QuickTime 5 (versions 5.0.2, 5.0.4, and 5.0.5) were released over its short lifespan.
QuickTime 5 delivered the following enhancements:
MPEG-1 playback for Windows, and updated MPEG-1 Layer 3 audio support for all systems.
Sorenson Video 3 playback and export (added with the 5.0.2 update).
Realtime rendering of effects & transitions in DV files, including enhancements to DV rendering, multiprocessor support, and Altivec enhancements for PowerPC G4 systems.
Flash 4 playback and export.
A new QuickTime VR engine, adding support for cubic VR panoramas.
QuickTime 6.x
On July 15, 2002, Apple released QuickTime 6.0, providing the following features:
MPEG-4 playback, import, and export, including MPEG-4 Part 2 video and AAC Audio.
Support for Flash 5, JPEG 2000, and improved Exif handling.
Instant-on streaming playback.
MPEG-2 playback (via the purchase of Apple's MPEG-2 Playback Component).
Scriptable ActiveX control.
QuickTime 6 was initially available for Mac OS 8.6 – 9.x, Mac OS X (10.1.5 minimum), and Windows 98, Me, 2000, and XP. Development of QuickTime 6 for Mac OS slowed considerably in early 2003, after the release of Mac OS X v10.2 in August 2002. QuickTime 6 for Mac OS continued on the 6.0.x path, eventually stopping with version 6.0.3.
QuickTime 6.1 & 6.1.1 for Mac OS X v10.1 and Mac OS X v10.2 (released October 22, 2002) and QuickTime 6.1 for Windows (released March 31, 2003) offered ISO-Compliant MPEG-4 file creation and fixed the CAN-2003-0168 vulnerability.
Apple released QuickTime 6.2 exclusively for Mac OS X on April 29, 2003 to provide support for iTunes 4, which allowed AAC encoding for songs in the iTunes library. (iTunes was not available for Windows until October 2003.)
On June 3, 2003, Apple released QuickTime 6.3, delivering the following:
Support for 3GPP, including 3G Text, video, and audio (AAC and AMR codecs).
Support for the .3gp, .amr, and .sdv file formats via separate component.
QuickTime 6.4, released on October 16, 2003 for Mac OS X v10.2, Mac OS X v10.3, and Windows, added the following:
Addition of the Apple Pixlet codec (only for Mac OS X v10.3 and later).
ColorSync support.
Integrated 3GPP.
On December 18, 2003, Apple released QuickTime 6.5, supporting the same systems as version 6.4. Versions 6.5.1 and 6.5.2 followed on April 28, 2004 and October 27, 2004. These versions would be the last to support Windows 98 and Me. The 6.5 family added the following features:
3GPP2 and AMC mobile multimedia formats.
QCELP voice code.
Apple Lossless (in version 6.5.1).
QuickTime 6.5.3 was released on October 12, 2005 for Mac OS X v10.2.8 after the release of QuickTime 7.0, fixing a number of security issues.
QuickTime 7.x
Initially released on April 29, 2005 in conjunction with Mac OS X v10.4 (for version 10.3.9 and 10.4.x), QuickTime 7.0 featured the following:
Improved MPEG-4 compliance.
A H.264/MPEG-4 AVC codec (does not support the AVCHD H.264 AVC format from Sony HD camcorders).
Support for Core Audio, a set of Application programming interfaces that supports high resolution sound and replaces Sound Manager.
Support for using Core Image filters in Mac OS X v10.4 on live video (Not to be confused with Core Video).
Support for Quartz Composer (.qtz) animations.
Support for distinct decode order and display order.
QuickTime Kit Framework (QTKit), a Cocoa framework for QuickTime.
After a couple of preview Windows releases, Apple released 7.0.2 as the first stable release on September 7, 2005 for Windows 2000 and Windows XP. Version 7.0.4, released on January 10, 2006, was the first universal binary version, but it suffered from numerous bugs, including a buffer overrun that was problematic for many users.
Apple dropped support for Windows 2000 with the release of QuickTime 7.2 on July 11, 2007. The last version available for Windows 2000, 7.1.6, contains numerous security vulnerabilities. References to this version have been removed from the QuickTime site, but it can be downloaded from Apple's support section. Apple has not indicated that they will be providing any further security updates for older versions. QuickTime 7.2 is the first version for Windows Vista.
Apple dropped support for Flash content in QuickTime 7.3, breaking content that relied on Flash for interactivity, or animation tracks. Security concerns seem to be part of the decision. Flash flv files can still be played in QuickTime if the free Perian plugin is added.
In QuickTime 7.3, a processor that supports SSE is required. QuickTime 7.4 does not require SSE. Unlike versions 7.2 and 7.3, QuickTime 7.4 cannot be installed on Windows XP without service packs or with Service Pack 1/1A installed (its setup program checks if Service Pack 2 is installed).
QuickTime 7.5 was released on June 10, 2008. QuickTime 7.5.5 was released on September 9, 2008, which requires Mac OS X v10.4 or higher, dropping 10.3 support. QuickTime 7.6 was released on January 21, 2009. QuickTime 7.7 was released on August 3, 2011.
QuickTime 7.6.6 is available for Mac OS X 10.6.3 Snow Leopard through 10.14 Mojave, as 10.15 Catalina only supports 64-bit applications. There is a 7.7 release of QuickTime 7 for OS X, but it is only for Leopard 10.5.
QuickTime 7.7.6 is the last release for Windows XP. As with all versions since 7.4, it can be installed only when Service Pack 2 or 3 is present.
QuickTime 7.7.9 is the last Windows release of QuickTime. Apple stopped supporting QuickTime on Windows afterwards.
Safari 12, released on September 17, 2018 for macOS Sierra and macOS High Sierra (and the default browser included with macOS Mojave, released on September 24, 2018), dropped support for NPAPI plug-ins (except for Adobe Flash) and with it support for QuickTime 7's web plugin. On September 24, 2018, Apple dropped support for the macOS version of QuickTime 7. This effectively marked the end of the technology in Apple's codec and web development.
Starting with macOS Catalina, QuickTime 7 applications, image, audio and video codecs will no longer be compatible with macOS or supported by Apple.
QuickTime X (QuickTime Player v10.x)
QuickTime X (pronounced QuickTime Ten) was initially demonstrated at WWDC on June 8, 2009, and shipped with Mac OS X v10.6.
It includes visual chapters, conversion, sharing to YouTube, video editing, capture of video and audio streams, screen recording, GPU acceleration, and live streaming.
However, it removed support for various widely used formats; in particular, the omission of MIDI caused significant inconvenience and trouble to many musicians and their potential audiences.
In addition, a screen recorder is featured which records whatever is on the screen. However, it is not possible to capture certain content protected by digital rights management, including iTunes/Apple TV video purchases and any other content protected by Apple's FairPlay DRM technology. While Safari uses FairPlay, Google Chrome and Firefox use Widevine for DRM, whose content is not protected from QuickTime screen capture.
The reason for the jump in numbering from 7 to 10 (X) was to indicate a similar break with the previous versions of the product that Mac OS X indicated. QuickTime X is fundamentally different from previous versions, in that it is provided as a Cocoa (Objective-C) framework and breaks compatibility with the previous QuickTime 7 C-based APIs that were previously used. QuickTime X was completely rewritten to implement modern audio video codecs in 64-bit. QuickTime X is a combination of two technologies: QuickTime Kit Framework (QTKit) and QuickTime X Player. QTKit is used by QuickTime player to display media. QuickTime X does not implement all of the functionality of the previous QuickTime as well as some of the codecs. When QuickTime X attempts to operate with a 32-bit codec or perform an operation not supported by QuickTime X, it will start a 32-bit helper process to perform the requested operation. The website Ars Technica revealed that QuickTime X uses QuickTime 7.x via QTKit to run older codecs that have not made the transition to 64-bit.
QuickTime X does not support .SRT subtitle files. It has been suggested that using the program Subler, which can be downloaded from Bitbucket, to interleave the MP4 and SRT files will work around this omission.
QuickTime 7 may still be required on Snow Leopard to support older formats such as QTVR, interactive QuickTime movies, and MIDI files. In such cases, a compatible version of QuickTime 7 is included on the Snow Leopard installation disc and may be installed side-by-side with QuickTime X. Users who have a Pro license for QuickTime 7 can then activate their license.
A Snow Leopard-compatible version of QuickTime 7 may also be downloaded from Apple's support website.
The software was updated with the release of Mavericks, and as of August 2018 the current version is v10.5. It adds more sharing options (email, YouTube, Facebook, Flickr, etc.) and more export options (including web export in multiple sizes, and export for iPhone 4/iPad/Apple TV, but not Apple TV 2). It also includes a new way of fast-forwarding through a video and mouse support for scrolling.
Starting with macOS Catalina, Apple provides only QuickTime X. Because QuickTime 7 was never updated to 64-bit, the many applications and image, audio, and video formats that relied on it are affected, as QuickTime X is not compatible with all of those codecs.
Platform support
Creating software that uses QuickTime
QuickTime X
QuickTime X previously provided the QTKit framework on Mac OS X 10.6 through macOS 10.14.
Since the release of macOS 10.15, AVKit and AVFoundation are used instead (due to the removal of 32-bit audio and video codecs, as well as image formats and APIs supported by QuickTime 7).
Previous versions
QuickTime consists of two major subsystems: the Movie Toolbox and the Image Compression Manager. The Movie Toolbox consists of a general API for handling time-based data, while the Image Compression Manager provides services for dealing with compressed raster data as produced by video and photo codecs.
Developers can use the QuickTime software development kit (SDK) to develop multimedia applications for Mac or Windows with the C programming language or with the Java programming language (see QuickTime for Java), or, under Windows, using COM/ActiveX from a language supporting this.
The COM/ActiveX option was introduced as part of QuickTime 7 for Windows and is intended for programmers who want to build standalone Windows applications using high-level QuickTime movie playback and control with some import, export, and editing capabilities. This is considerably easier than mastering the original QuickTime C API.
QuickTime 7 for Mac introduced the QuickTime Kit (aka QTKit), a developer framework that is intended to replace previous APIs for Cocoa developers. This framework is for Mac only, and exists as Objective-C abstractions around a subset of the C interface. Mac OS X v10.5 extends QTKit to full 64-bit support. The QTKit allows multiplexing between QuickTime X and QuickTime 7 behind the scenes so that the user need not worry about which version of QuickTime they need to use.
Bugs and vulnerabilities
QuickTime 7.4 was found to disable Adobe's video compositing program After Effects. This was due to the DRM built into version 7.4 to support movie rentals from iTunes. QuickTime 7.4.1 resolved this issue.
Versions 4.0 through 7.3 contained a buffer overflow bug which could compromise the security of a PC using either the QuickTime Streaming Media client, or the QuickTime player itself. The bug was fixed in version 7.3.1.
QuickTime 7.5.5 and earlier are known to contain a number of significant vulnerabilities that allow a remote attacker to execute arbitrary code or cause a denial of service (out-of-bounds memory access and application crash) on a targeted system. These include six types of buffer overflow, data conversion errors, signed vs. unsigned integer mismatches, and uninitialized memory pointers.
QuickTime 7.6 has been found to disable Mac users' ability to play certain games, such as Civilization IV and The Sims 2. There are fixes available from the publisher, Aspyr.
QuickTime 7 lacks support for H.264 Sample Aspect Ratio. QuickTime X does not have this limitation, but many Apple products (such as Apple TV) still use the older QuickTime 7 engine. iTunes previously utilized QuickTime 7, but as of October 2019, iTunes no longer utilizes the older QuickTime 7 engine.
QuickTime 7.7.x on Windows fails to encode H.264 on multi-core systems with more than approximately 20 threads, e.g. an HP Z820 with 2× 8-core CPUs. A suggested workaround is to disable hyper-threading or limit the number of CPU cores. Encoding speed and stability depend on the scaling of the player window.
On April 14, 2016, Christopher Budd of Trend Micro announced that Apple had ceased all security patching of QuickTime for Windows, and called attention to two Zero Day Initiative advisories, ZDI-16-241 and ZDI-16-242, issued by Trend Micro's subsidiary TippingPoint on that same day. Also on that day, the United States Computer Emergency Readiness Team issued alert TA16-105A, summarizing Budd's announcement and the Zero Day Initiative advisories. Apple responded with a statement that QuickTime 7 for Windows is no longer supported by Apple.
See also
AVFoundation
Comparison of audio player software
Comparison of video player software
Perian
Qtch
QuickTime Alternative
QuickTime Broadcaster
QuickTime File Format
QuickTime Streaming Server
Windows Media Components for QuickTime
Xiph QuickTime Components
References
External links
QuickTime Reference
All versions of QuickTime
Introduction To QuickTime Overview
1991 software
Graphics software
macOS APIs
Image viewers
Macintosh media players
MacOS media players
Multimedia frameworks
QTKit Framework
Windows media players |
59639426 | https://en.wikipedia.org/wiki/ZeroTier | ZeroTier | ZeroTier Inc. is a software company with a freemium business model based in Irvine, California. ZeroTier provides proprietary software, SDKs and commercial products and services to create and manage virtual software defined networks. The company's flagship end-user product ZeroTier One is a client application that enables devices such as PCs, phones, servers and embedded devices to securely connect to peer-to-peer virtual networks.
Software tools
ZeroTier provides a suite of proprietary tools, licensed under the Business Source License 1.1, intended to support the development and deployment of virtual data centers.
The main product line consists of the following tools:
ZeroTier One, first released in 2014, a portable client application that provides connectivity to public or private virtual networks.
Central, a web-based UI portal for managing virtual networks.
libzt (SDK), a linkable library that provides the functionality of ZeroTier One but that can be embedded in applications or services.
LF (pronounced "aleph"), a fully decentralized fully replicated key/value store.
Client
The ZeroTier client is used to connect to virtual networks previously created in the ZeroTier Central web-based UI. Endpoint connections are peer-to-peer and end-to-end encrypted. STUN and hole punching are used to establish direct connections between peers behind NAT. Direct connection route discovery is made with the help of a global network of root servers via a mechanism similar to ICE in WebRTC.
Controller
Virtual networks are created and managed using a ZeroTier controller. Management is done through an API, the proprietary web-based UI (ZeroTier Central), or an open-source web-based or CLI alternative. Using root servers other than those hosted by ZeroTier Inc. is impeded by the software's license.
Security
The following considerations apply to ZeroTier's use as an SDWAN or VPN application:
Asymmetric public key encryption is Curve25519, a 256-bit elliptic curve variant.
All traffic is encrypted end to end on OSI layer 1 using 256-bit Salsa20 and authenticated using the Poly1305 message authentication code (MAC) algorithm. The MAC is computed after encryption (encrypt-then-MAC), and the cipher/MAC composition used is identical to the NaCl reference implementation.
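As a rough illustration of this Salsa20/Poly1305 encrypt-then-MAC composition from the NaCl family, the following Python sketch uses the PyNaCl library's SecretBox (XSalsa20, an extended-nonce variant of Salsa20, with a Poly1305 tag). It is not ZeroTier's actual wire protocol or key schedule; the key and payload are purely hypothetical.

```python
import nacl.secret
import nacl.utils

# 256-bit symmetric key; in a ZeroTier-like design this would be derived
# from a Curve25519 key agreement between two peers (here it is just random).
key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

plaintext = b"virtual network frame payload"  # hypothetical payload

# encrypt() chooses a random nonce, encrypts with XSalsa20, and appends a
# Poly1305 tag computed over the ciphertext (encrypt-then-MAC).
sealed = box.encrypt(plaintext)

# decrypt() verifies the Poly1305 tag before returning the plaintext;
# a forged or corrupted message raises nacl.exceptions.CryptoError.
assert box.decrypt(sealed) == plaintext
```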
Packages
ZeroTier One is available on multiple platforms and in multiple forms:
Microsoft Windows installer (.msi)
Apple Macintosh (.pkg)
iOS for iPhone/iPad/iPod
Docker
Source code on GitHub
Linux binaries (DEB & RPM)
Linux snap package (works across distributions)
Linux library
Android App on Google Play
Qnap (.qpkg)
Synology packages (.spk)
Western Digital MyCloud NAS EX2, EX4, EX2 Ultra (.bin)
FreeBSD has a port and a package
OpenWRT has a community maintained port on GitHub
MikroTik's RouterOS
Similar projects
FreeLAN
GNUnet
IPOP
LogMeIn Hamachi
OpenVPN
tinc
WireGuard
Netmaker
Twingate
See also
ICE
WebRTC
VPN
References
External links
TeamViewer VPN Linux-to-Windows equivalent
Virtual private networks
Anonymity networks
Tunneling software
Internet software for Linux
MacOS Internet software
Windows Internet software |
37683818 | https://en.wikipedia.org/wiki/Max%20Wittek | Max Wittek | Max Nolan Wittek (born July 31, 1993) is a former American football quarterback. He played at USC from 2011 to 2013, and transferred to Hawaii, sitting out the 2014 season.
Early years
Born in Contra Costa County, California, Wittek grew up in Norwalk, Connecticut but later moved to Santa Ana, California where he attended Mater Dei High School. His immediate predecessor as starting quarterback for Mater Dei was future USC teammate Matt Barkley. As a senior, he completed 153 of 282 passes for 2,252 yards with 24 touchdowns and 15 interceptions. He was ranked as the third best pro-style quarterback recruit in his class by Rivals.com. He committed to USC in April 2010.
College career
Wittek was redshirted as a freshman in 2011. As a redshirt freshman in 2012, Wittek won the backup job to Matt Barkley. Wittek made his first career start on November 24, 2012 after Barkley suffered a sprained AC joint in his right shoulder. Prior to that game, he had completed eight of nine passes for 95 yards with a touchdown.
Scheduled to start his first USC game against rival and No. 1 ranked Notre Dame, Wittek made a stir by asserting his confidence in a Trojans victory: "I'm going to go out there, and I'm going to play within myself, within the system, and we're gonna win this ballgame." Notre Dame defeated USC 22–13 to advance to the 2013 BCS National Championship Game. Wittek threw for 186 yards, one touchdown and two interceptions.
For the 2013 season, he and Cody Kessler competed for the starting job. He eventually lost the quarterback competition by the second week of the season.
In January 2014, Wittek decided to pursue a master's degree and compete for a QB position at another university. In August, Wittek eventually announced his intention to transfer to Hawaii.
In September 2015 for Hawaii, Wittek threw for 202 yards and three touchdown passes in a 28–20 win over Colorado, in the season opener for both teams. Wittek started 8 of Hawaii's first 9 games before injuring his right knee and later being diagnosed with chondromalacia, a degenerative knee condition. In those games he completed less than 50 percent of his passes and threw just 7 touchdowns against 13 interceptions. His season ended in November when he was scheduled for surgery on his right knee. Wittek reportedly was slowed by sore knees as well as a foot injury. He finished the season with 1,542 yards passing as Hawaii completed the 2015 season with a 3–10 record.
Professional career
On May 1, 2016, Wittek signed with the Jacksonville Jaguars. He was waived from their roster August 23, 2016.
References
External links
USC Trojans bio
1993 births
Living people
American football quarterbacks
USC Trojans football players
Sportspeople from Newport Beach, California
Sportspeople from Norwalk, Connecticut
Players of American football from California
Players of American football from Connecticut
People from San Ramon, California
Hawaii Rainbow Warriors football players |
42093361 | https://en.wikipedia.org/wiki/Gordon%20Eugene%20Martin | Gordon Eugene Martin | Gordon Eugene Martin (born August 22, 1925) is an American physicist and author in the field of piezoelectric materials for underwater sound transducers. He wrote early computer software automating iterative evaluation of direct computer models through a Jacobian matrix of complex numbers. His software enabled the Navy Electronics Laboratory (NEL) to accelerate design of sonar arrays for tracking Soviet Navy submarines during the Cold War.
Early years
Gordon was born 22 August 1925 in San Diego. He was the third of five sons of Carl Martin and Ruth (Fountain) Martin. His older brother Harold enlisted in the Army National Guard and was serving on Oahu in 1941. Gordon communicated with his brother's anti-aircraft facility by amateur radio prior to the attack on Pearl Harbor, and relayed information to and from other San Diego families with National Guard members on Oahu.
United States Navy
Martin enlisted in the V-12 Navy College Training Program at Kansas State Teachers College in 1943 and transferred to the University of Texas Naval Reserve Officer Training Corps. Following commissioning in 1945, Ensign Martin served as cryptography officer aboard the destroyer USS Higbee (DD-806). Following release to reserve status after World War II, he completed electrical engineering degree requirements at University of California, Berkeley and in 1947 joined the NEL team in San Diego continuing underwater sound research begun in 1942 by Glen Camp at the University of California, San Diego campus. His early work involved measurement of piezoelectric characteristics of ammonium dihydrogen phosphate (ADP) and Rochelle salt. Lieutenant (junior grade) Martin was recalled to active duty during the Korean War as the first executive officer of the prototype SOSUS station on the island of Eleuthera. As the SOSUS network expanded Lieutenant Martin moved to the United States Navy Underwater Sound Laboratory in New London, Connecticut. Martin's 1954 publication describing relationships of circuit coefficients and critical frequencies of maximum and minimum admittance in piezoelectric materials was later cited in the Institute of Electrical and Electronics Engineers (IEEE) standard on piezoelectricity. From 1954 to 1960 he led the NEL development team for a variable magnetic reluctance transducer intended for a low-frequency array.
Software development
Early sonar transducers had been developed from simplistic design assumptions, followed by trial-and-error modification if the transducer failed to meet performance goals. That approach became impractical given the large number of variables involved in optimizing the electrical coupling of array elements that are also coupled acoustically through the water. NEL explored transducer theory with tensor analysis and continuum mechanics to determine viscous and hysteretic dissipative effects of transducer materials and the radiation impedance of transducers in the water medium. NEL's mathematical models for the mutual radiation impedance of transducer elements overwhelmed mechanical calculators and taxed the capabilities of contemporary electronic computers.
In 1961, the United States and United Kingdom undertook a joint effort to develop digital computer software for analysis and design using the ALGOL-based Navy Electronics Laboratory International Algorithmic Compiler (NELIAC). Early software used direct models to determine critical resonance and antiresonance frequencies of piezoelectric materials and the immittances at those frequencies. Results were graphed, and solutions were determined to the desired accuracy by visual comparison of successive runs of the direct-model software. Martin developed "find parameters" software that evaluated capacitance, dissipation, resonance, and antiresonance with a Jacobian matrix and its inverse to determine losses separately for the dielectric, elastic, and piezoelectric properties of individual barium titanate ceramic components. He completed the software in the summer of 1964, and it was announced at the September 1964 seminar of the Office of Naval Research. His software was translated from NELIAC to Fortran and distributed in 1965. His automated approach to inverse modeling was subsequently presented at the 1974 IEEE Ultrasonic Manufacturers Association conference and the 1980 meeting of the Acoustical Society of America.
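The published accounts do not reproduce the NELIAC sources, but the underlying idea of Jacobian-based inverse modeling can be sketched in a few lines of Python with NumPy: iterate a Newton step that adjusts model parameters until a direct (forward) model reproduces measured values. The toy forward model, parameter values, and "measured" quantities below are hypothetical stand-ins, not Martin's actual transducer equations.

```python
import numpy as np

def forward_model(p):
    """Toy direct model mapping parameters to predicted measurements.
    A real transducer model would compute capacitance, dissipation,
    resonance and antiresonance from dielectric, elastic and
    piezoelectric coefficients; this stand-in is purely illustrative."""
    x, y = p
    return np.array([x**2 + y, x * y - 1.0])

def jacobian(f, p, eps=1e-7):
    """Numerical Jacobian of f at p via forward differences."""
    base = f(p)
    J = np.zeros((base.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (f(dp) - base) / eps
    return J

def find_parameters(f, measured, p0, tol=1e-10, max_iter=50):
    """Newton iteration: adjust parameters until the direct model
    reproduces the measured values."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        residual = f(p) - measured
        if np.linalg.norm(residual) < tol:
            break
        p -= np.linalg.solve(jacobian(f, p), residual)
    return p

measured = np.array([5.0, 1.0])  # pretend measured quantities
print(find_parameters(forward_model, measured, p0=[1.0, 1.0]))
```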
Martin completed a doctoral dissertation on lateral effects in piezoelectric systems at the University of Texas from 1964 to 1966; and continued working at NEL until his retirement in 1980. Shortly before retirement, he was awarded a patent (assigned to the United States Navy) for discrete amplitude shading for lobe-suppression in a discrete transducer array.
Martin founded Martin Analysis Software Technology Company following retirement; and contracted with the Navy for high-resolution beamforming with generalized eigenvector/eigenvalue (GEVEV) digital signal processing from 1985 through 1987 and for personal computer aided engineering (PC CAE) of underwater transducers and arrays from 1986 through 1989. Martin published an expanded theory of matrices in 2012 entitled A New Approach to Matrix Analysis, Complex Symmetric Matrices, and Physically Realizable Systems.
Publications
Variable-frequency Oscillator Circuits Possessing Exceptional Stability (1951)
Determination of Equivalent‐Circuit Constants of Piezoelectric Resonators of Moderately Low Q by Absolute‐Admittance Measurements (1954)
Directional Properties of Continuous Plane Radiators with Bizonal Amplitude Shading (1955 with Hickman)
Broad-Band, High-Power, Low-Frequency Variable-Reluctance Projector Array (1956 with Byrnes & Hickman)
Magnetic Materials for Electromagnetic Transducer Applications (1958)
An Investigation of Electroacoustic Reciprocity in the Near Field (1961)
Reciprocity Calibration in the Near Field (1961)
Near Field of a Shaded Radiator (1961)
Vibrations of Longitudinally Polarized Ferroelectric Cylindrical Tubes (1963)
New Standard for Measurements of Certain Piezoelectric Ceramics (1963)
Radiation Impedances of Plane‐Array Elements (1963)
Velocity Control of Transducer Arrays (1963)
On the Properties of Segmented Ferroelectric Ceramic Systems (1964)
On the Theory of Segmented Electromechanical Systems (1964)
Vibrations of Coaxially Segmented, Longitudinally Polarized Ferroelectric Tubes (1964)
Computer Design of Transducers (1964)
Measurement of the Gross Properties of Large Segmented Ceramic Tubes (1965)
Effects of Static Stress on the Dielectric, Elastic, and Piezoelectric Properties of Ceramics (1965)
Dielectric, Piezoelectric, and Elastic Losses in Longitudinally Polarized Segmented Ceramic Tubes (1965)
On the propagation of longitudinal stress waves in finite solid elastic horns (1967)
Comments on the Possible Resurgence of Magnetostriction Transducers for Large Ship Sonars (1967 with Berlincourt, Schenck & Smith)
Near‐Field and Far‐Field Radiation from an Experimental Electrically Steered Planar Array (1967)
Dielectric, Elastic and Piezoelectric Losses in Piezoelectric Materials (1974)
Vibrations of plates and cylindrical shells in an acoustic medium (1976)
Thirty years' progress in transducer source and receive arrays (1977)
Economical computation of array gain of large lattice acoustic arrays in anisotropic sea noise (1977)
Effects of dissipation in piezoelectric materials: Reminiscence (1980)
Discrete amplitude shading for lobe‐suppression in discrete array (1982)
The 3‑3 parameters for piezoelectric ceramics: New parameter‑measurement relations and transducer design implications (1982 with Johnson)
Analysis of intermodal coupling in piezoelectric ceramic rings (1983 with Benthien)
Degradation of angular resolution for eigenvector-eigenvalue (EVEV) high-resolution processors with inadequate estimation of noise coherence (1984)
Analyses of large arrays: Brief theory and some techniques used in 1954–1985 (1985)
Transducer longitudinal‐vibrator equivalent circuits and related topics (1990)
Limits of dissipative coefficients in piezoelectric transverse isotropic materials (2011)
A New Approach to Matrix Analysis, Complex Symmetric Matrices, and Physically Realizable Systems (2012)
References
21st-century American physicists
Amateur radio people
People from San Diego
1925 births
Living people
American United Methodists
UC Berkeley College of Engineering alumni
University of California, Los Angeles alumni
San Diego State University alumni
University of Texas alumni
United States Navy personnel of World War II
United States Navy officers
United States Navy reservists
20th-century American physicists
Military personnel from California |
3093575 | https://en.wikipedia.org/wiki/Jon%20Rubinstein | Jon Rubinstein | Jonathan J. "Jon" Rubinstein (born October 1956) is an American electrical engineer who played an instrumental role in the development of the iMac and iPod, the portable music and video device first sold by Apple Computer Inc. in 2001. He left his position as senior vice president of Apple's iPod division on April 14, 2006.
He became executive chairman of the board at Palm, Inc., after private equity firm Elevation Partners completed a significant investment in the handheld manufacturer in October 2007. He became CEO of Palm in 2009, replacing former CEO Ed Colligan.
Following Hewlett-Packard Co.'s purchase of Palm on July 1, 2010, Rubinstein became an executive at HP. On January 27, 2012, Rubinstein announced he had officially left HP.
Rubinstein has served on the board of directors of online retailer Amazon.com since December 2010. From May 2013 to May 2016, he was also on the board of semiconductor manufacturer Qualcomm. From March 2016 to March 2017, he was co-CEO of investment firm Bridgewater Associates.
In 2005, he was elected a member of the National Academy of Engineering for the design of innovative personal computers and consumer electronics that have defined and led new industries. He is also a senior member of the Institute of Electrical and Electronics Engineers.
Early years and education
Rubinstein was born and raised in New York City. He is a graduate of the Horace Mann School, class of 1975. He went to college and graduate school at Cornell University in Ithaca, N.Y., where he received a B.S. in electrical engineering in 1978 and a master's in the same field a year later. He later earned an M.S. in computer science from Colorado State University in Fort Collins, Colorado.
Rubinstein’s first jobs in the computer industry were in Ithaca, where he worked at a local computer retailer and also served as a design consultant to an area computer company.
Career
Hewlett-Packard, Ardent
After graduating school, Rubinstein took a job with Hewlett-Packard in Colorado. He spent about two years in the company’s manufacturing engineering division, developing quality-control techniques and refining manufacturing processes. Later, Rubinstein worked on HP workstations.
Rubinstein left HP in 1986 to join a startup, Ardent Computer Corp., in Silicon Valley. While at Ardent, later renamed Stardent, he played an integral role in launching a pair of machines, the Titan Graphics Supercomputer and the Stardent 3000 Graphics Supercomputer.
Steve Jobs and NeXT
In 1990, Apple co-founder Steve Jobs approached Rubinstein to run hardware engineering at his latest venture, NeXT. Rubinstein headed work on NeXT’s RISC workstation – a graphics powerhouse that was never released because in 1993, the company abandoned its floundering hardware business in favor of a software-only approach.
After helping to dismantle NeXT’s manufacturing operations, Rubinstein went on to start another company, Power House Systems. That company, later renamed Firepower Systems, was backed by Canon Inc. and used technology developed at NeXT. It developed and built high-end systems using the PowerPC chip. Motorola bought the business in 1996.
Apple Computer
After Apple's purchase of NeXT, Rubinstein had planned on an extended vacation to travel. But Jobs, now an unpaid consultant for Apple, invited Rubinstein to work with him for Apple.
At the time, Apple was losing industry support. The company's reputation as an innovator was waning, as were profits (Rubinstein's arrival in February 1997 came on the heels of a year in which Apple lost US$816 million). Rubinstein joined Apple anyway because, as he told The New York Times, "Apple was the last innovative high-volume computer maker in the world."
Rubinstein joined Apple as Senior Vice President of Hardware Engineering, and a member of its executive staff. He was responsible for hardware development, industrial design and low-level software development, and contributed heavily to Apple's technology roadmap and product strategy.
Rubinstein took on an immense workload upon his arrival. The company sold over 15 product lines, nearly all of which were derided as inferior to other computers available at the time. Internally, Apple also suffered from extreme mismanagement of its hardware teams. Multiple teams often worked on the same product independently of each other, and very little attention was directed towards making all of the product lines fully compatible with each other. With Jobs, Rubinstein helped towards fixing both of these problems.
He also helped initiate an extensive cost-cutting plan affecting research projects and engineers. Expenses were eventually cut in half. After critically examining all projects currently in the pipeline, the G3, a fast PowerPC-based desktop machine, was chosen to be Apple's next released product. Upon its release at the end of 1997, Apple finally had what it hadn't had in years: a cutting-edge desktop machine that could compete with its Intel-based competitors.
In 1997, Jobs cancelled almost all of the product lines and introduced a new product strategy focusing only on desktop and laptop computers for both consumer and professional customers. With the Power Macintosh G3 filling the role of a desktop computer marketed at professional customers, Apple began to focus on an entry-level desktop computer suitable for consumers. The result was the iMac, released in 1998, a computer with an innovative design intended to be friendly and easily accessible for average computer users. For the iMac's development, Rubinstein assembled a team that worked to a deadline of only 11 months (a timeline they considered impossible). The iMac was an immediate success, not only helping to revitalise Apple as a company, but also popularising new technologies of the time, such as USB, which went on to become an industry standard. The iMac also shipped without a floppy disk drive (rare for computers of the era), relying solely on the optical drive and new technologies such as USB and FireWire for data transfer. Rubinstein was responsible for both of these decisions.
Future rollouts under Rubinstein's management included all subsequent upgrades (the G4 and G5) of the Power Mac series. While they were technically powerful computers, the Power Mac series suffered from the perception that they were slower than their Intel-based counterparts because their PowerPC CPUs listed slower clock speeds. Rubinstein and Apple popularised a term known as the Megahertz myth, to describe how the PowerPC architecture could not be compared to the Intel architecture simply on their clock speeds (the PowerPC CPUs, despite their lower clock speeds, were generally comparable to Intel CPUs of the era).
iPod development
Due to the relatively low sales of its Mac computer brand, Apple decided to expand its ecosystem in order to increase its consumer awareness. The iPod came from Apple's "digital hub" category, when the company began creating software for the growing market of personal digital devices. Digital cameras, camcorders and organizers had well-established mainstream markets, but the company found existing digital music players "big and clunky or small and useless" with user interfaces that were "unbelievably awful", so Apple decided to develop its own. Even though it was a space with immense market potential, previous products had not enjoyed any notable market penetration.
By 2000, Steve Jobs expressed interest in developing a portable music player. But Rubinstein demurred, saying the necessary components were not yet available. While on a routine supplier visit to Toshiba Corp. in February, 2001, however, Rubinstein first saw the tiny, 1.8-inch hard disk drive that became a critical component of the iPod. While Toshiba engineers had developed the drive, they were not sure how it could be used. At a Tokyo hotel later that evening, Rubinstein met with Jobs, who was in Japan on separate business. "I know how to do it now. All I need is a $10 million check," he told Jobs.
Jobs agreed, and Rubinstein assembled and managed a team of hardware and software engineers to ready the product on a rushed, eight-month schedule. The team’s engineers needed to overcome a number of hurdles, including figuring out how to play music off a spinning hard drive for more than 10 hours without wiping out a battery charge. Rubinstein’s production contacts proved invaluable, too; the iPod’s sleek, minimalist design, with its high-gloss, engraveable metal back, was a mass-manufacturing triumph. The success of the first-generation iPod was almost overnight. By 2004 the business became so important to Apple that the iPod was spun off into its own division, which Rubinstein took over.
Other iPod models were released on a regular basis, increasing the device’s capacity, decreasing its size, and adding features including color screens, photo display and video playback. By early 2008, more than 119 million iPods had been sold, making it not only the most successful portable media player on the market but one of the most popular consumer electronics products of all time.
Rubinstein - sometimes called the "Podfather" because of his role in developing the iPod - was also instrumental in creating a robust secondary market for accessories such as speakers, chargers, docking ports, backup batteries, and other add-ons. That gear, produced by a network of independent companies that came to be known as "The iPod Ecosystem", by 2006 generated more than $1 billion in annual sales. In the 2007 fiscal year, the iPod generated $8.3 billion in revenue, or about a third of Apple's sales.
By around the fall of 2005, Rubinstein had become upset by Tim Cook's increasing leadership role as COO and by his own frequent clashes with SVP of Industrial Design Jony Ive, who was very close with Jobs. Ive kept designing costly or difficult-to-engineer products, which Rubinstein balked at. Jobs told his biographer Walter Isaacson, “In the end, Ruby’s from HP, and he never delved deep, he wasn’t aggressive.” Eventually, Ive told Jobs, “It’s him, or me,” and Jobs decided to keep Ive instead.
In October 2005, Apple announced that Rubinstein would be retiring on March 31, 2006, and he was succeeded as iPod chief by Tony Fadell. It was later announced that he would make himself available for up to 20% of his workweek on a consulting basis. It is said that with the approaching release of an upcoming hand-held device (which would become the iPhone), Steve Jobs began paying less attention to Rubinstein and more to younger engineers. Rubinstein was given a promotion that actually reduced his power at Apple, and Jobs's shift in focus toward newer engineers ultimately resulted in Rubinstein's departure.
Palm
In 2007, Rubinstein joined Palm as executive chairman of its board of directors; at about the same time, he stepped down as chairman of Immersion Corp., a developer of haptic technology. Rubinstein took control of Palm's product development and led its research, development, and engineering efforts. One of his first tasks was to winnow the company's product lines and restructure its R&D teams. He was instrumental in developing the webOS platform and the Palm Pre. Rubinstein debuted both on January 8, 2009, at the Consumer Electronics Show (CES) in Las Vegas. On June 10, 2009, just four days after the successful release of his brainchild, the Palm Pre, Rubinstein was named the CEO of Palm.
The Pre first launched on the Sprint network. Reports at the time of the launch noted that it was a record for Sprint, with 50,000 units sold its opening weekend. A follow-up phone, the Palm Pixi, was announced on September 8, 2009, and released on Sprint on November 15, 2009. Rubinstein had said that one of Palm’s keys moving forward would be to "bring on more carriers and more regions," and the company launched its Palm Pre Plus and Pixi Plus phones on Verizon Wireless in January 2010. In the same month, AT&T announced plans to launch a pair of Palm’s webOS devices later in 2010.
But the addition of Verizon Wireless did not help as much as expected. By February 2010, Palm warned that its products were not selling as quickly as hoped.
Rubinstein’s visibility in the mainstream tech community grew upon joining Palm. He was the featured guest in September 2009 at the first episode of "The Engadget Show," a web videocast produced by the technology weblog. In December 2009, the magazine Fast Company named Rubinstein one of its Geeks of the Year, along with people such as Facebook founder Mark Zuckerberg and writer/director/producer J. J. Abrams; Fast Company also named Rubinstein to its list of the "100 most creative people in business."
Hewlett-Packard (second stint)
Rubinstein rejoined HP in 2010, when the latter bought Palm for $1.2 billion. The deal gave HP another chance to enter the mobile-device market while sending a lifeline to Palm, which some analysts expected to run out of cash within two years. Rubinstein agreed to remain with the company for 12 to 24 months after the merger.
At the time, HP said it would utilize webOS across a spectrum of products, including phones, printers and other devices. HP’s strategy was to keep consumers connected to all of their information through the cloud, regardless of which device they were on.
On July 1, 2011, HP released the webOS-based TouchPad. Shortly after, Rubinstein stepped down from the webOS unit and assumed a "product innovation role" elsewhere within HP. While Rubinstein had pledged to be patient in building demand for the device, HP abandoned it quickly in the face of soft sales: the TouchPad was on the market for only seven weeks when then-CEO Leo Apotheker announced in August that the company would discontinue all hardware devices running webOS. (HP subsequently slashed the price of the least expensive TouchPad to $99, setting off a buying frenzy and leading technology-research firm Canalys to call it the "must-have technology product of 2011".)
Apotheker himself was gone less than a month later, when the HP board replaced him with former eBay CEO Meg Whitman. She announced plans to make webOS open source in December 2011.
On January 27, 2012, Jon Rubinstein left HP after his 24-month contract ended. In an interview, he said he would not retire but would take a break; while he had no plans at the time, he added that "the future is mobile."
Bridgewater
In May 2013, Rubinstein joined the board of Qualcomm, a leading provider of chips used in mobile devices. He also currently sits on the board of Amazon.com, to which he was elected in December 2010.
Rubinstein's appointment as co-CEO at Bridgewater Associates, the world's largest hedge fund, was announced in a letter to clients in March 2016. In the note, Bridgewater officials noted that "because technology is so important to us, we wanted one of our co-C.E.O.s to be very strong in that area." Rubinstein replaced Greg Jensen, who moved to concentrate on his role as co-chief investment officer. Less than one year later, it was announced that Rubinstein was leaving the company because he and Bridgewater founder Ray Dalio "mutually agree that he is not a cultural fit for Bridgewater".
Personal life
Rubinstein is married to Karen Richardson, a technology-industry veteran who is currently on the board of BT Group plc.
Affiliations
Member, National Academy of Engineering
Senior Member, IEEE
Director, Amazon.com
Member, Cornell Silicon Valley Advisors
Former director, Immersion Corp.
Former member, Cornell Alumni Council
Former member, Consumer Electronics Association Board of Industry Leaders
References
External links
Jon Rubinstein Appointed CEO of Palm, June 10, 2009
Apple, press release, October 14, 2005
USA Today, “Apple turns a profit – And a corner,” Oct. 17, 1996
Wall Street Journal, “Designing Duo Helps Shape Apple’s Fortunes” July 18, 2001
IEEE Spectrum 2008-09 "From Podfather to Palm's Pilot"
The Engadget Show, Episode 019, March 28, 2011
1956 births
Living people
Apple Inc. executives
Cornell University College of Engineering alumni
Palm, Inc.
Senior Members of the IEEE
Horace Mann School alumni
Scientists from New York City
Colorado State University alumni
Hewlett-Packard people
Members of the United States National Academy of Engineering
Amazon (company) people
American technology chief executives |
30875225 | https://en.wikipedia.org/wiki/Reverse%20auction | Reverse auction | A reverse auction (also known as buyer-determined auction or procurement auction) is a type of auction in which the traditional roles of buyer and seller are reversed. Thus, there is one buyer and many potential sellers. In an ordinary auction also known as a forward auction, buyers compete to obtain goods or services by offering increasingly higher prices. In contrast, in a reverse auction, the sellers compete to obtain business from the buyer and prices will typically decrease as the sellers underbid each other.
A reverse auction is similar to a unique bid auction because the basic principle remains the same; however, a unique bid auction follows the traditional auction format more closely as each bid is kept confidential and one clear winner is defined after the auction finishes.
For business auctions, the term refers to a specific type of auction process (also called e-auction, sourcing event, e-sourcing or eRA, eRFP, e-RFO, e-procurement, B2B Auction). Open procurement processes, which are a form of reverse auction, have been commonly used in government procurement and in the private sector in many countries for many decades.
For consumer auctions, the term is often used to refer to sales processes that share some characteristics with auctions, but are not necessarily auctions in the traditional sense.
Context
One common example of reverse auctions is, in many countries, the procurement process in the public sector. Governments often purchase goods or services through an open procurement process by issuing a public tender. Public procurement arrangements for large projects or service programs are often quite complex, frequently involving dozens of individual procurement activities.
Another common application of reverse auctions is e-procurement, a purchasing strategy used for strategic sourcing and other supply management activities. E-procurement arrangements enable suppliers to compete online in real time and are changing the way firms and their consortia select and behave with their suppliers worldwide. They can help improve the effectiveness of the sourcing process and facilitate access to new suppliers. This may in the future lead to a standardization of sourcing procedures and reduced order cycles, which can enable businesses to reduce prices and generally provide a higher level of service.
In a traditional auction, the seller offers an item for sale. Potential buyers are then free to bid on the item until the time period expires. The buyer with the highest offer wins the right to purchase the item for the price determined at the end of the auction.
A reverse auction is different in that a single buyer offers a contract out for bidding. (In an e-procurement arrangement this is done either by using specialized software or through an on-line marketplace.) Multiple sellers are invited to offer bids on the contract.
E-procurement
In the case of e-procurement, when real-time e-bidding is permitted, the price decreases as sellers compete to offer lower bids than their competitors while still meeting all of the specifications of the original contract.
Bidding performed in real-time via the Internet results in a dynamic, competitive process. This helps achieve rapid downward price pressure that is not normally attainable using traditional static paper-based bidding processes. Many reverse auction software companies or service providers report an average price reduction of 18–20 percent following the initial auction's completion.
The buyer may award the contract to the seller who bid the lowest price. Or, a buyer may award contracts to suppliers who bid higher prices depending on the buyer's specific needs with regard to quality, lead-time, capacity, or other value-adding capabilities.
The use of optimization software has become popular since 2002 to help buyers determine which supplier is likely to provide the best value in providing goods or services. The software includes relevant buyer and seller business data, including constraints.
Reverse auctions are used to fill both large and small value contracts for both public sector and private commercial organizations. In addition to items traditionally thought of as commodities, reverse auctions are also used to source buyer-designed goods and services; and they have even been used to source reverse auction providers. The first time this occurred was in August 2001, when America West Airlines (which later became US Airways) used FreeMarkets software and awarded the contract to MaterialNet.
One form of reverse auction is the static auction (RFQ or tender). A static auction is an alternative to a dynamic auction and to the regular negotiation process in commerce, especially on B2B electronic marketplaces.
In 2003, researchers claimed an average of five percent of total corporate spending was sourced using reverse auctions. They have been found to be more appropriate and suitable in industries and sectors like advertising, auto components, bulk chemicals, consumer durables, computers and peripherals, contract manufacturing, courier services, FMCG, healthcare, hospitality, insurance, leasing, logistics, maritime shipping, MRO, retail, software licensing, textiles, tourism, transport and warehousing.
History of internet-based reverse auctions
The pioneer of online e-procurement reverse auctions in the United States, FreeMarkets, was founded in 1995 by former McKinsey & Company consultant and General Electric executive Glen Meakem after he failed to find internal backing for the idea of a reverse auction division at General Electric. Meakem hired McKinsey colleague Sam Kinney, who developed much of the intellectual property behind FreeMarkets. Headquartered in Pittsburgh, FreeMarkets built teams of "market makers" and "commodity managers" to manage the process of running the online tender process and set up market operations to manage auctions on a global basis.
The company's growth was aided greatly by the hype of the dot-com boom era. FreeMarkets customers included BP, United Technologies, Visteon, Heinz, Phelps Dodge, ExxonMobil, and Royal Dutch Shell, to name a few. Dozens of competing start-up reverse auction service providers and established companies such as General Motors (an early FreeMarkets customer) and SAP, rushed to join the reverse auction marketspace.
Although FreeMarkets survived the winding down of the dot-com boom, by the early-2000s, it was apparent that its business model was really like an old-economy consulting firm with some sophisticated proprietary software. Online reverse auctions started to become mainstream and the prices that FreeMarkets had commanded for its services dropped significantly. This led to a consolidation of the reverse auction service marketplace. In January 2004, Ariba announced its purchase of FreeMarkets for US$493 million.
Fortune published an article in March 2000, describing the early days of internet-based reverse auctions.
In the past few years, mobile reverse auctions have evolved. Unlike business-to-business (B2B) reverse auctions, mobile reverse auctions are business-to-consumer (B2C) and allow consumers to bid on products for pennies. The lowest unique bid wins.
More recently, business-to-consumer auctions with a twist have started to evolve; they are more similar to the original business-to-business auctions than to mobile reverse auctions in that they offer consumers the option of placing a specification before retailers or resellers and allowing them to publicly bid for their business.
In congressional testimony on the 2008 proposed legislative package to use federal funds to buy toxic assets from troubled financial firms, Federal Reserve Chairman Ben Bernanke proposed that a reverse auction could be used to price the assets.
In 2004, the White House Office of Federal Procurement Policy (OFPP) issued a memorandum encouraging increased use of commercially available online procurement tools, including reverse auctions. In 2005, both the Government Accountability Office and Court of Federal Claims upheld the legality of federal agency use of online reverse auctions. In 2008, OFPP issued a government-wide memorandum encouraging agencies to improve and increase competitive procurement and included specific examples of competition best practices, including reverse auctions. In 2010, The White House Office of Management and Budget cited "continued implementation of innovative procurement methods, such as the use of web-based electronic reverse auctions" as one of the contracting reforms helping agencies meet acquisition savings goals.
Terminology
A common form of procurement auction is known as a scoring auction. In that auction form, the score that the buyer gives each bidder depends on well-defined attributes of the offer and the bidder. This scoring function is formulated and announced prior to the start of the auction.
More commonly, many procurement auctions are “buyer determined” in that the buyer reserves the right to select the winner on any basis following the conclusion of the auction. The literature on buyer-determined auctions is often empirical in nature and is concerned with identifying the unannounced implicit scoring function the buyer uses. This is typically done through a discrete choice model, wherein the econometrician uses the observed attributes, including price, and maps them to the probability of being chosen as the winner. This allows the econometrician to identify the weight on each attribute.
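A minimal Python sketch of the distinction, under assumed attributes and weights (the supplier names, bid values, and scoring weights below are hypothetical): a scoring auction applies an announced scoring function directly, while an empirical buyer-determined analysis maps the same scores to win probabilities, here with a simple logit form.

```python
import numpy as np

# Hypothetical bids: (price, quality score, lead time in days).
bids = {
    "supplier_a": np.array([100.0, 8.0, 14.0]),
    "supplier_b": np.array([92.0, 6.5, 21.0]),
    "supplier_c": np.array([110.0, 9.5, 10.0]),
}

# Illustrative scoring weights announced before the auction: lower price
# and shorter lead time are better (negative weights), higher quality is better.
weights = np.array([-0.05, 1.0, -0.1])

scores = {name: float(weights @ attrs) for name, attrs in bids.items()}

# In a scoring auction the highest score wins outright; in an empirical
# buyer-determined setting, a logit model turns scores into win probabilities.
exp_scores = {name: np.exp(s) for name, s in scores.items()}
total = sum(exp_scores.values())
win_prob = {name: v / total for name, v in exp_scores.items()}

print(max(scores, key=scores.get), win_prob)
```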
Conceptually and theoretically, the effect of this format on buyer-supplier relationships is of paramount importance.
Theoretically, the factors that determine under what circumstances a buyer would prefer this format have been explored.
Demsetz auction
A Demsetz auction, named after Harold Demsetz, is a system that awards an exclusive contract to the agent bidding the lowest price. This is sometimes referred to as "competition for the field." It is in contrast to "competition in the field," which calls for two or more agents to be granted the contract and to provide the good or service competitively. Martin Ricketts writes that "under competitive conditions, the bid prices should fall until they just enable firms to achieve a normal return on capital." Disadvantages of a Demsetz auction include the fact that the entire risk associated with falling demand is borne by one agent and that the winner of the bid, once locked into the contract, may accumulate non-transferable know-how that can then be used to gain leverage for contract renewal.
Demsetz auctions are often used to award contracts for public-private partnerships for highway construction.
Spectrum auction
In the United States, the Federal Communications Commission created FCC auction 1001 as a reverse auction in order to reclaim much of the 600 MHz band from television broadcasting. The remaining TV stations would then be repacked onto the lower UHF and even VHF TV channels. After the reverse auction in June 2016, a forward spectrum auction (FCC auction 1002) was to be held, with mostly mobile phone carriers as the buyers.
Dutch reverse auctions
While a traditional Dutch auction starts at a high price that then decreases, a reverse Dutch auction works the opposite way: it starts at a low price that gradually increases over time. It contains a list of items that buyers want to procure, and the price rises after fixed intervals until a reserved price is reached. If a supplier places a bid for an item before the reserved price is reached, the item is allocated to that supplier and closes for bidding.
In this auction, the buyer specifies a starting price, price change value, time interval between price changes, and the reserved price.
The auction opens with the first item at the specified start price and increases by the price change value (an amount or a percentage) after each fixed interval. The price keeps increasing until a supplier places a bid or the reserved price is reached. After bidding closes for an item, the auction moves on to the next item sequentially.
The auction closes when bidding for all items is completed.
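A minimal Python sketch of this ascending-price mechanism, assuming hypothetical supplier floor prices (a real implementation would also handle timing, ties, and interactive bidding):

```python
def dutch_reverse_auction(items, start_price, step, reserve, supplier_floors):
    """Illustrative Dutch reverse auction: for each item, the price rises from
    start_price in increments of step until a supplier accepts or the reserve
    is reached.  supplier_floors maps item -> {supplier: minimum acceptable
    price}; all values here are hypothetical."""
    results = {}
    for item in items:
        price = start_price
        winner = None
        while price <= reserve and winner is None:
            # First supplier whose floor is at or below the current price accepts.
            for supplier, floor in supplier_floors[item].items():
                if price >= floor:
                    winner = supplier
                    break
            if winner is None:
                price += step
        results[item] = (winner, price if winner else None)
    return results

floors = {"valves": {"acme": 120.0, "globex": 135.0}}
print(dutch_reverse_auction(["valves"], start_price=100.0, step=5.0,
                            reserve=150.0, supplier_floors=floors))
```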
Japanese reverse auctions
Although the history of the Japanese reverse auction is unknown, the format is widely used in the world of business-to-business procurement as a form of cost negotiation.
A Japanese auction is one in which the host of the auction states an opening price and participants have to accept that price level or withdraw from the auction. Acceptance indicates that the participant is prepared to supply at the stated price. When all participants have replied to a given price level, the software lowers the price by a predetermined amount and again asks participants to accept or decline the new level.
This kind of auction continues until there are no more participants bidding.
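The descending-price Japanese format can be sketched the same way. The walk-away prices below are hypothetical, and a real system would collect accept/decline responses interactively rather than reading them from a table:

```python
def japanese_reverse_auction(opening_price, decrement, supplier_floors):
    """Illustrative Japanese reverse auction: all suppliers must accept each
    successively lower price level or drop out; the last remaining supplier(s)
    define the final price.  supplier_floors are hypothetical walk-away prices."""
    price = opening_price
    active = set(supplier_floors)
    while True:
        next_price = price - decrement
        still_in = {s for s in active if supplier_floors[s] <= next_price}
        if not still_in:           # nobody will accept the next level
            return active, price   # remaining suppliers at the last accepted price
        active, price = still_in, next_price

print(japanese_reverse_auction(200.0, 10.0,
                               {"acme": 150.0, "globex": 170.0, "initech": 185.0}))
```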
Comparison of Japanese and Dutch reverse auctions
The major difference between Japanese and Dutch reverse auctions lies in the position each format puts suppliers in. In a Dutch reverse auction, suppliers opt in at their intended price point and thus end the auction immediately, while in a Japanese reverse auction, suppliers explicitly opt out of a given market at their intended price point.
The benefits of the Japanese reverse auction are based on the fact that the buying organization has greater transparency to the actual market itself. In this regard, the format more closely mirrors that of a traditional reverse auction by providing greater visibility to each participant's lowest offer.
In contrast to the Dutch format, however, Japanese auctions do not put what one Dutch-auction user describes as "maximum psychological pressure" on the supply base, and especially on the incumbent suppliers. That pressure can put the buyer in a position to potentially earn more than the market alone would justify.
Strategy in Reverse auctions
Suppliers should first determine the lowest price for which they are willing to sell their products. To do this effectively, they must be able to compute their true marginal cost and identify extra-auction costs and benefits. However, that does not mean that the best strategy is to bid the lowest price. In the analysis of extra-auction costs and benefits, they should examine areas where winning or losing can generate unexpected benefits or avoided costs. Some examples include:
Winning or losing changes their volume discount, rebates and incentives with key suppliers,
Losing requires laying off personnel with its associated termination costs,
Winning opens a new account more inexpensively than hiring a sales representative.
Based on this analysis, the supplier should choose his goals for the auction. The obvious goal is to win the auction at a profitable price. However, that is not always the best goal. For the kinds of reasons mentioned above, the supplier might instead choose a goal such as:
To come in second (or third) while keeping the price high,
To come in second while driving the price down to unprofitable levels for the winner,
To bid down to a certain price and stop, regardless of winning position and potential profitability.
After this preparation, the supplier should try to achieve his goal within the specific auction protocols and characteristics of the auction interface. The important characteristics that differ between auctions are the ability to see the current low bid and knowledge of one's current relative position.
See also
Tendering
Request for Quotation
Request For Tender
Request For Information
Request For Proposal
Optimization (mathematics)
Operations research
References
Further reading
Schoenherr, T., and Mabert, V.A. (2007), "Online reverse auctions: common myths versus evolving reality", Business Horizons, 50, 373-384.
Bounds, G., "Toyota Supplier Development", in Cases in Quality, G. Bounds, Editor, R.D. Irwin Co., Chicago, IL, 1996, pp. 3–25
Shalev, E. Moshe and Asbjornsen, S., "Electronic Reverse Auctions and the Public Sector – Factors of Success", Journal of Public Procurement, 10(3) 428-452.
Bounds, G., Shaw, A., and Gillard, J., "Partnering the Honda Way", in Cases in Quality, G. Bounds, Editor, R.D. Irwin Co., Chicago, IL, 1996, pp. 26–56
Dyer, J. and Nobeoka, K., "Creating and Managing a High-Performance Knowledge Sharing Network: The Toyota Case," Strategic Management Journal, Vol. 21, 2000, pp. 345–367
Liker, J. and Choi, T., "Building Deep Supplier Relationships", Harvard Business Review, Vol. 82, No. 12, December 2004, pp. 104–113
Womack, J., Jones, D., and Roos, D., The Machine that Changed the World, Rawson Associates, New York, 1990, Chapter 6
Jap, Sandy D. (2007), "The Impact of Online Reverse Auction Design on Buyer-Supplier Relationships", Journal of Marketing, 71(1), 146-50
.
Procurement
Types of auction |
12498297 | https://en.wikipedia.org/wiki/Gradle | Gradle | Gradle is a build automation tool for multi-language software development. It controls the development process from compilation and packaging through testing, deployment, and publishing. Supported languages include Java (as well as Kotlin, Groovy, and Scala), C/C++, and JavaScript. Gradle also collects statistical data about the usage of software libraries around the globe.
Gradle builds on the concepts of Apache Ant and Apache Maven, and introduces a Groovy- and Kotlin-based domain-specific language, in contrast to the XML-based project configuration used by Maven. Gradle uses a directed acyclic graph to determine the order in which tasks can be run, based on the dependencies between them. Gradle runs on the JVM.
Gradle was designed for multi-project builds, which can grow to be large. It operates based on a series of build tasks that can run serially or in parallel. Incremental builds are supported by determining which parts of the build tree are already up to date; any task dependent only on those parts does not need to be re-executed. It also supports caching of build components, potentially across a shared network, using the Gradle Build Cache. It produces web-based build visualizations called Gradle Build Scans. The software is extensible for new features and programming languages with a plugin subsystem.
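The task-ordering idea can be illustrated with Python's standard-library graphlib. The task names below mirror those of Gradle's Java plugin, but the snippet only demonstrates topological ordering of a dependency graph; it is not Gradle's actual scheduler.

```python
from graphlib import TopologicalSorter

# Hypothetical task graph in the style of a Java build: each task lists the
# tasks it depends on.  A build tool resolves such a graph before execution so
# that dependencies always run first and independent tasks may run in parallel.
tasks = {
    "compileJava": set(),
    "processResources": set(),
    "classes": {"compileJava", "processResources"},
    "test": {"classes"},
    "jar": {"classes"},
    "build": {"test", "jar"},
}

print(list(TopologicalSorter(tasks).static_order()))
# One valid order: compileJava, processResources, classes, test, jar, build
```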
Gradle is distributed as open-source software under the Apache License 2.0, and was first released in 2008.
History
As of 2016 the initial plugins were primarily focused on Java, Groovy, and Scala development and deployment.
Major versions
See also
List of build automation software
References
Further reading
External links
Official Gradle Enterprise website
With Gradle founder Hans Dockter and Aleksandar Gargenta
Compiling tools
Java development tools
Build automation
Cross-platform software
Software using the Apache license
2007 software
Directed acyclic graphs |
12498297 | https://en.wikipedia.org/wiki/B%26C%20Records | B&C Records | B&C Records (which stood for Beat & Commercial) was a British record label run by Trojan Records' owner, Lee Gopthal. It existed primarily between May 1969 and September 1972.
In 1971, the progressive and folk artists that were still signed to the label were moved over to B&C's new Pegasus Records imprint (which later became Peg), though singles continued to be issued on the B&C label until 1972. Pegasus Records released just 14 albums before closing down in 1972, when most of the artists moved over to the newly formed Mooncrest Records label. Mooncrest had started out as Charisma Records's publishing company, but had become a record label in its own right in 1973. It reissued a fair number of the original Pegasus releases. The company continued after this point in its original format as a record manufacturing, distribution and marketing company, continuing to distribute records by Charisma and Mooncrest. Between 1971 and 1974, B&C and Charisma shared their CB-100 series for singles.
In 1974 B&C got into financial trouble and was finally sold, along with Trojan and Mooncrest, to Marcel Rodd, head of Allied/Saga Records. Trojan and Mooncrest continued to issue records marketed by B&C, though Charisma moved its operations over to Phonogram Inc. in May 1975. The B&C label was revived between 1977 and 1981, releasing just a few new singles and reissuing several classic tracks as singles or EPs.
B&C was originally intended to reissue gospel/soul artists such as James Carr, but did branch out to other genres eventually. B&C released Atomic Rooster's first two albums, Atomic Roooster (1970) and Death Walks Behind You, Steeleye Span's Please to See the King (1971), Nazareth's Loud 'n' Proud (1974), Andy Roberts' Home Grown (1971) and Everyone (1971), and one self-titled LP by the Newcastle-based band Ginhouse, Ginhouse (1971).
The label was also prominent in the early "revival" period of 1950s rock and roll. The Wild Angels, one of the first of these groups, had two albums released on B&C in 1970, Live At The Revolution and Red Hot N Rockin. They both had "gatefold" sleeves. The company also released an album called Battle Of The Bands, which featured an early recording by Shakin' Stevens, and also Gene Vincent, and acts such as The Impalas and The Houseshakers. There was also an album by The Rock N Roll Allstars entitled Red China Rocks.
References
External links
The B&C Discography: 1968 to 1975 - all UK non-reggae releases on the B&C family of labels
Brief history
B&C Discography on 45cat
Record labels established in 1969
British record labels
Rock record labels
Pop record labels |
20655490 | https://en.wikipedia.org/wiki/Trojan%20horse%20%28business%29 | Trojan horse (business) | In business, a trojan horse is an advertising offer made by a company that is designed to draw potential customers by offering them cash or something of value for acceptance; following acceptance, however, the buyer is forced to spend a much larger amount of money, either by being signed into a lengthy contract from which exit is difficult, or by having money automatically drawn in some other way. The harmful consequences faced by the customer may include spending far above market rate, large amounts of debt, or identity theft.
The term, which originated in New England during the 2000s and has spread to some other parts of the United States, is also sometimes misused in reference to an item offered seemingly at a bargain price but ultimately sold above market rate through fine print and other hidden tricks.
Some of the items involved in trojan horse sales include cash, gift cards or merchandise viewed as a high-ticket item, but the item actually being given away is made cheaply, has a very low value, and does not satisfy the expectations of the recipient. Meanwhile, the victim of the trojan horse is likely to end up spending far more money over time, either through continual withdrawals from the customer's bank account, charges to a debit or credit card, or add-ons to a bill that must be paid in order to avoid loss of an object or service of prime importance (such as a house, car, or phone line).
Victims of trojan horses include poor people or those who are searching for bargains or the best price on an item. Many of these victims end up with overdrawn accounts or over-the-limit on their credit cards due to fees that are automatically charged.
Some of the businesses using trojan horse marketing include banks, internet and cell phone service providers, record and book clubs and other companies in which the customer will be expected to have a continuing relationship. Banks often offer cash initially for opening an account, but later charge fees in much larger amounts to the account holder. Auto-manufacturers and car dealerships will often advertise free or subsidized gas to car buyers for a certain amount of time, but increase the cost of the car in other ways. Cell phone companies use trojan horse marketing by attempting to sell items like ringtones to customers, who unknowingly are sold many more ringtones over time.
See also
Bait-and-switch
Freebie marketing
Teaser rate
References
Advertising techniques
Business terms
Deception |
9003039 | https://en.wikipedia.org/wiki/MADYMO | MADYMO | MADYMO (MAthematical DYnamic MOdels) is a software package for the analysis of occupant safety systems in the automotive and transport industries. The software was developed by the Netherlands Organization for Applied Scientific Research (TNO) and is owned and distributed by TASS International Software and Services, headquartered in Helmond, the Netherlands. By one author's estimation, "MADYMO is probably the most widely used multi-body system program for occupant safety systems."
Application areas include automotive crash safety, train interior safety, motorcycle safety, aircraft and helicopter safety, consumer product safety, crash reconstruction, and vehicle handling.
Product modules
MADYMO has a range of product modules with different functionality:
MADYMO/Solver - The MADYMO simulation engine which includes Multi-body, Finite Element and Computational Fluid Dynamics capabilities to drive the simulation of occupant restraint systems as well as the MADYMO dummy models and MADYMO Human models.
MADYMO/XMADgic - A pre-processor for MADYMO. It is an XML editor with dedicated functionality to support editing an XML input deck for the MADYMO solver. The editor fully complies with the XML standard.
MADYMO/MADpost - MADPost is a multi-platform post processor for the MADYMO solver. It has been designed to facilitate optimal use of the MADYMO solver output – both for viewing animations and creating time-history plots. MADPost also supports import and display of some foreign FE code output formats and physical test data formats, such as video formats and ISO formatted data.
MADYMO/Exchange - MADYMO/Exchange is a MADYMO/Workspace tool that automates and simplifies the use of the MADYMO software, allowing controlled model modification and controlled exchange of components, such that a consistent and efficient MADYMO modelling process is achieved. MADYMO/Exchange consists of a GUI that guides the user step-by-step through the modelling tasks. All steps are represented in the GUI via individual tabs.
MADYMO/Exchange Assistant - MADYMO/Exchange Assistant is an authoring tool for the Exchange super-user, allowing controlled model modification and controlled exchange of components such that a consistent and efficient MADYMO modelling process is achieved. It helps the super-user in defining the component definitions, the analysis type file and the project file
MADYMO/Objective Rating - The objective rating tool provides the user with a means to rate pairs of curves against each other, based on a set of predefined rating criteria and hence provides an immediate overview of how well signals correlate, indicated by colours and values. The user can define a rating matrix and save this as a template that can be reloaded in a later stage, either in GUI or in batch mode.
MADYMO/Protocol Rating - The protocol rating tool calculates and presents occupant safety ratings according to different vehicle safety assessment protocols. The tool allows for directly importing input data (injury criteria values) from MADYMO peak file output. Alternatively the user can fill and modify injury criteria and input other values manually
MADYMO/Converter - Converter is a flexible foreign-code-to-MADYMO-code converter. Due to the use of Perl as the converter language, the converter can easily be enhanced by adding new scripts. The tool has its own GUI and can be used stand-alone and/or in combination with XMADgic 4.0 and onwards.
Coupling/Assistant - The Coupling Assistant enables users of FE codes such as PAM-CRASH, RADIOSS and DYNA to work with MADYMO dummies and MADYMO models in general. There is no need to know the MADYMO input format, as the Coupling Assistant completely hides it from the user.
Product features
Fast & accurate simulations
Accurate crash dummy & human body models
State-of-the-art restraint system modelling techniques
Reliable predictions of safety performance & injury risks
Multibody, Finite Element & CFD combined in one code
References
External links
TASS International website - makers of MADYMO - www.tassinternational.com.
TNO website - www.tno.nl.
Simulation software
Computer-aided engineering software
Computer-aided engineering software for Linux |
17861917 | https://en.wikipedia.org/wiki/OpenCL | OpenCL | OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators. OpenCL specifies programming languages (based on C99, C++14 and C++17) for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism.
OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. Conformant implementations are available from Altera, AMD, ARM, Creative, IBM, Imagination, Intel, Nvidia, Qualcomm, Samsung, Vivante, Xilinx, and ZiiLABS.
Overview
OpenCL views a computing system as consisting of a number of compute devices, which might be central processing units (CPUs) or "accelerators" such as graphics processing units (GPUs), attached to a host processor (a CPU). It defines a C-like language for writing programs. Functions executed on an OpenCL device are called "kernels". A single compute device typically consists of several compute units, which in turn comprise multiple processing elements (PEs). A single kernel execution can run on all or many of the PEs in parallel. How a compute device is subdivided into compute units and PEs is up to the vendor; a compute unit can be thought of as a "core", but the notion of core is hard to define across all the types of devices supported by OpenCL (or even within the category of "CPUs"), and the number of compute units may not correspond to the number of cores claimed in vendors' marketing literature (which may actually be counting SIMD lanes).
In addition to its C-like programming language, OpenCL defines an application programming interface (API) that allows programs running on the host to launch kernels on the compute devices and manage device memory, which is (at least conceptually) separate from host memory. Programs in the OpenCL language are intended to be compiled at run-time, so that OpenCL-using applications are portable between implementations for various host devices. The OpenCL standard defines host APIs for C and C++; third-party APIs exist for other programming languages and platforms such as Python, Java, Perl, D and .NET. An implementation of the OpenCL standard consists of a library that implements the API for C and C++, and an OpenCL C compiler for the compute device(s) targeted.
In order to open the OpenCL programming model to other languages or to protect the kernel source from inspection, the Standard Portable Intermediate Representation (SPIR) can be used as a target-independent way to ship kernels between a front-end compiler and the OpenCL back-end.
More recently, the Khronos Group has ratified SYCL, a higher-level programming model for OpenCL as a single-source DSEL based on pure C++17 to improve programming productivity. In addition, C++ features can also be used when implementing compute kernel sources in the C++ for OpenCL language.
Memory hierarchy
OpenCL defines a four-level memory hierarchy for the compute device:
global memory: shared by all processing elements, but has high access latency (__global);
read-only memory: smaller, low latency, writable by the host CPU but not the compute devices (__constant);
local memory: shared by a group of processing elements (__local);
per-element private memory (registers; __private).
Not every device needs to implement each level of this hierarchy in hardware. Consistency between the various levels in the hierarchy is relaxed, and only enforced by explicit synchronization constructs, notably barriers.
Devices may or may not share memory with the host CPU. The host API provides handles on device memory buffers and functions to transfer data back and forth between host and devices.
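As an illustrative sketch (not part of the original article), the following C function shows this pattern of creating a device buffer and transferring data to and from it; the names roundtrip and host_data are placeholders, and error handling is omitted for brevity.
#include "CL/opencl.h"

// Copies N floats to a device buffer, leaves room for kernels to operate on it,
// and copies the result back into host memory. Error handling omitted for brevity.
void roundtrip(cl_context context, cl_command_queue queue, float *host_data, size_t N)
{
    // Allocate N floats in device (global) memory.
    cl_mem buf = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(float) * N, NULL, NULL);
    // Blocking write: returns after host_data has been copied to the device.
    clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, sizeof(float) * N, host_data, 0, NULL, NULL);
    /* ... enqueue kernels that read or write buf here ... */
    // Blocking read: copies the device buffer back into host_data.
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(float) * N, host_data, 0, NULL, NULL);
    clReleaseMemObject(buf);
}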
OpenCL kernel language
The programming language that is used to write compute kernels is called the kernel language. OpenCL adopts C/C++-based languages to specify the kernel computations performed on the device, with some restrictions and additions to facilitate efficient mapping to the heterogeneous hardware resources of accelerators. Traditionally, OpenCL C was used to program the accelerators in the OpenCL standard; later, the C++ for OpenCL kernel language was developed, which inherited all functionality from OpenCL C but allowed the use of C++ features in the kernel sources.
OpenCL C language
OpenCL C is a C99-based language dialect adapted to fit the device model in OpenCL. Memory buffers reside in specific levels of the memory hierarchy, and pointers are annotated with the region qualifiers __global, __local, __constant, and __private, reflecting this. Instead of a device program having a main function, OpenCL C functions are marked __kernel to signal that they are entry points into the program to be called from the host program. Function pointers, bit fields and variable-length arrays are omitted, and recursion is forbidden. The C standard library is replaced by a custom set of standard functions, geared toward math programming.
OpenCL C is extended to facilitate use of parallelism with vector types and operations, synchronization, and functions to work with work-items and work-groups. In particular, besides scalar types such as float and int, which behave similarly to the corresponding types in C, OpenCL provides fixed-length vector types such as float4 (a 4-vector of single-precision floats); such vector types are available in lengths two, three, four, eight and sixteen for various base types. Vectorized operations on these types are intended to map onto SIMD instructions sets, e.g., SSE or VMX, when running OpenCL programs on CPUs. Other specialized types include 2-d and 3-d image types.
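As a brief illustration (not part of the original article), the following OpenCL C kernel uses the float4 vector type for a SAXPY-style update, so each work-item operates on four floats at once; it assumes the buffers hold a whole number of float4 elements.
__kernel void saxpy4(float a, __global const float4 *x, __global float4 *y)
{
    size_t i = get_global_id(0);   // one work-item per float4 element
    y[i] = a * x[i] + y[i];        // component-wise multiply-add on four floats
}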
Example: matrix-vector multiplication
The following is a matrix-vector multiplication algorithm in OpenCL C.
// Multiplies A*x, leaving the result in y.
// A is a row-major matrix, meaning the (i,j) element is at A[i*ncols+j].
__kernel void matvec(__global const float *A, __global const float *x,
uint ncols, __global float *y)
{
size_t i = get_global_id(0); // Global id, used as the row index
__global float const *a = &A[i*ncols]; // Pointer to the i'th row
float sum = 0.f; // Accumulator for dot product
for (size_t j = 0; j < ncols; j++) {
sum += a[j] * x[j];
}
y[i] = sum;
}
The kernel function matvec computes, in each invocation, the dot product of a single row of a matrix A and a vector x:
y_i = \sum_{j=0}^{ncols-1} A_{i,j} x_j .
To extend this into a full matrix-vector multiplication, the OpenCL runtime maps the kernel over the rows of the matrix. On the host side, the clEnqueueNDRangeKernel function does this; it takes as arguments the kernel to execute, its arguments, and a number of work-items, corresponding to the number of rows in the matrix A.
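A minimal host-side sketch of this launch (not from the original article) could look as follows; it assumes the matvec kernel has already been built into kernel, that the buffers a_buf, x_buf and y_buf and the variables ncols (cl_uint) and nrows (size_t) are already set up, and it omits error handling.
// Bind the kernel arguments (A, x, ncols, y).
clSetKernelArg(kernel, 0, sizeof(cl_mem), &a_buf);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &x_buf);
clSetKernelArg(kernel, 2, sizeof(cl_uint), &ncols);
clSetKernelArg(kernel, 3, sizeof(cl_mem), &y_buf);

// Launch one work-item per matrix row; the implementation picks the work-group size.
size_t global_work_size = nrows;
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_work_size, NULL, 0, NULL, NULL);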
Example: computing the FFT
This example will load a fast Fourier transform (FFT) implementation and execute it. The implementation is shown below. The code asks the OpenCL library for the first available graphics card, creates memory buffers for reading and writing (from the perspective of the graphics card), JIT-compiles the FFT-kernel and then finally asynchronously runs the kernel. The result from the transform is not read in this example.
#include <stdio.h>
#include <time.h>
#include "CL/opencl.h"
#define NUM_ENTRIES 1024
int main() // (int argc, const char* argv[])
{
// CONSTANTS
// The source code of the kernel is represented as a string
// located inside file: "fft1D_1024_kernel_src.cl". For the details see the next listing.
const char *KernelSource =
#include "fft1D_1024_kernel_src.cl"
;
// Looking up the available GPUs
cl_uint num = 0;
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 0, NULL, &num);
cl_device_id devices[1];
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, devices, NULL);
// create a compute context with GPU device
cl_context context = clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU, NULL, NULL, NULL);
// create a command queue
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_DEFAULT, 1, devices, NULL);
cl_command_queue queue = clCreateCommandQueue(context, devices[0], 0, NULL);
// allocate the buffer memory objects
cl_mem memobjs[] = { clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(float) * 2 * NUM_ENTRIES, NULL, NULL),
clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(float) * 2 * NUM_ENTRIES, NULL, NULL) };
// create the compute program
// const char* fft1D_1024_kernel_src[1] = { };
cl_program program = clCreateProgramWithSource(context, 1, (const char **)& KernelSource, NULL, NULL);
// build the compute program executable
clBuildProgram(program, 0, NULL, NULL, NULL, NULL);
// create the compute kernel
cl_kernel kernel = clCreateKernel(program, "fft1D_1024", NULL);
// set the args values
size_t local_work_size[1] = { 256 };
clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobjs[0]);
clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&memobjs[1]);
clSetKernelArg(kernel, 2, sizeof(float)*(local_work_size[0] + 1) * 16, NULL);
clSetKernelArg(kernel, 3, sizeof(float)*(local_work_size[0] + 1) * 16, NULL);
// create N-D range object with work-item dimensions and execute kernel
size_t global_work_size[1] = { 256 };
global_work_size[0] = NUM_ENTRIES;
local_work_size[0] = 64; //Nvidia: 192 or 256
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, global_work_size, local_work_size, 0, NULL, NULL);
}
The actual calculation inside file "fft1D_1024_kernel_src.cl" (based on Fitting FFT onto the G80 Architecture):
R"(
// This kernel computes FFT of length 1024. The 1024 length FFT is decomposed into
// calls to a radix 16 function, another radix 16 function and then a radix 4 function
__kernel void fft1D_1024 (__global float2 *in, __global float2 *out,
__local float *sMemx, __local float *sMemy) {
int tid = get_local_id(0);
int blockIdx = get_group_id(0) * 1024 + tid;
float2 data[16];
// starting index of data to/from global memory
in = in + blockIdx; out = out + blockIdx;
globalLoads(data, in, 64); // coalesced global reads
fftRadix16Pass(data); // in-place radix-16 pass
twiddleFactorMul(data, tid, 1024, 0);
// local shuffle using local memory
localShuffle(data, sMemx, sMemy, tid, (((tid & 15) * 65) + (tid >> 4)));
fftRadix16Pass(data); // in-place radix-16 pass
twiddleFactorMul(data, tid, 64, 4); // twiddle factor multiplication
localShuffle(data, sMemx, sMemy, tid, (((tid >> 4) * 64) + (tid & 15)));
// four radix-4 function calls
fftRadix4Pass(data); // radix-4 function number 1
fftRadix4Pass(data + 4); // radix-4 function number 2
fftRadix4Pass(data + 8); // radix-4 function number 3
fftRadix4Pass(data + 12); // radix-4 function number 4
// coalesced global writes
globalStores(data, out, 64);
}
)"
A full, open source implementation of an OpenCL FFT can be found on Apple's website.
C++ for OpenCL language
In 2020, Khronos announced the transition to the community-driven C++ for OpenCL programming language, which provides features from C++17 in combination with the traditional OpenCL C features. This language allows developers to leverage a rich variety of language features from standard C++ while preserving backward compatibility with OpenCL C. It opens up a smooth transition path to C++ functionality for OpenCL kernel code developers, as they can continue using a familiar programming flow and even tools, as well as leverage existing extensions and libraries available for OpenCL C.
The language's semantics are described in the documentation published in the releases of the OpenCL-Docs repository hosted by the Khronos Group, but it is currently not ratified by the Khronos Group. The C++ for OpenCL language is not documented in a stand-alone document; it is based on the specifications of C++ and OpenCL C. The open source Clang compiler has supported C++ for OpenCL since release 9.
C++ for OpenCL was originally developed as a Clang compiler extension and appeared in release 9. As it was tightly coupled with OpenCL C and did not contain any Clang-specific functionality, its documentation has been re-hosted to the OpenCL-Docs repository of the Khronos Group, along with the sources of other specifications and reference cards. The first official release of this document, describing C++ for OpenCL version 1.0, was published in December 2020. C++ for OpenCL 1.0 contains features from C++17 and is backward compatible with OpenCL C 2.0. A work-in-progress draft of its documentation can be found on the Khronos website.
Features
C++ for OpenCL supports most of the features (syntactically and semantically) from OpenCL C, except for nested parallelism and blocks. However, there are minor differences in some supported features, mainly related to differences in semantics between C++ and C. For example, C++ is stricter with implicit type conversions and does not support the restrict type qualifier. The following C++ features are not supported by C++ for OpenCL: virtual functions, the dynamic_cast operator, non-placement new/delete operators, exceptions, pointers to member functions, references to functions, and the C++ standard libraries. C++ for OpenCL extends the concept of separate memory regions (address spaces) from OpenCL C to C++ features such as functional casts, templates, class members, references, lambda functions and operators. Most of these C++ features are not available for the kernel functions themselves, e.g. overloading or templating, or an arbitrary class layout in a parameter type.
Example: complex number arithmetic
The following code snippet illustrates how kernels with complex number arithmetic can be implemented in C++ for OpenCL language with convenient use of C++ features.
// Define a class Complex, that can perform complex number computations with
// various precision when different types for T are used - double, float, half.
template<typename T>
class complex_t {
T m_re; // Real component.
T m_im; // Imaginary component.
public:
complex_t(T re, T im): m_re{re}, m_im{im} {};
// Define operator for complex number multiplication.
complex_t operator*(const complex_t &other) const
{
return {m_re * other.m_re - m_im * other.m_im,
m_re * other.m_im + m_im * other.m_re};
}
T get_re() const { return m_re; }
T get_im() const { return m_im; }
};
// A helper function to compute multiplication over complex numbers read from
// the input buffer and to store the computed result into the output buffer.
template<typename T>
void compute_helper(__global T *in, __global T *out) {
auto idx = get_global_id(0);
// Every work-item uses 4 consecutive items from the input buffer
// - two for each complex number.
auto offset = idx * 4;
auto num1 = complex_t{in[offset], in[offset + 1]};
auto num2 = complex_t{in[offset + 2], in[offset + 3]};
// Perform complex number multiplication.
auto res = num1 * num2;
// Every work-item writes 2 consecutive items to the output buffer.
out[idx * 2] = res.get_re();
out[idx * 2 + 1] = res.get_im();
}
// This kernel is used for complex number multiplication in single precision.
__kernel void compute_sp(__global float *in, __global float *out) {
compute_helper(in, out);
}
#ifdef cl_khr_fp16
// This kernel is used for complex number multiplication in half precision when
// it is supported by the device.
#pragma OPENCL EXTENSION cl_khr_fp16: enable
__kernel void compute_hp(__global half *in, __global half *out) {
compute_helper(in, out);
}
#endif
Tooling and Execution Environment
The C++ for OpenCL language can be used for the same applications or libraries, and in the same way, as the OpenCL C language. Due to the rich variety of C++ language features, applications written in C++ for OpenCL can express complex functionality more conveniently than applications written in OpenCL C; in particular, the generic programming paradigm of C++ is very attractive to library developers.
C++ for OpenCL sources can be compiled by OpenCL drivers that support the cl_ext_cxx_for_opencl extension; Arm announced support for this extension in December 2020. However, due to the increasing complexity of the algorithms accelerated on OpenCL devices, it is expected that more applications will compile C++ for OpenCL kernels offline, using stand-alone compilers such as Clang, into an executable binary format or a portable binary format such as SPIR-V. Such an executable can be loaded during OpenCL application execution using a dedicated OpenCL API.
Binaries compiled from sources in C++ for OpenCL 1.0 can be executed on OpenCL 2.0 conformant devices. Depending on the language features used in such kernel sources, they can also be executed on devices supporting earlier OpenCL versions or OpenCL 3.0.
Aside from OpenCL drivers, kernels written in C++ for OpenCL can be compiled for execution on Vulkan devices using the clspv compiler and the clvk runtime layer, in the same way as OpenCL C kernels.
Contributions
C++ for OpenCL is an open language developed by the community of contributors listed in its documentation. New contributions to the language semantic definition or open source tooling support are accepted from anyone interested, as long as they are aligned with the main design philosophy, and they are reviewed and approved by the experienced contributors.
History
OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008. This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.
OpenCL 1.0
OpenCL 1.0 released with Mac OS X Snow Leopard on August 28, 2009. According to an Apple press release:
Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.
AMD decided to support OpenCL instead of the now deprecated Close to Metal in its Stream framework. RapidMind announced their adoption of OpenCL underneath their development platform to support GPUs from multiple vendors with one interface. On December 9, 2008, Nvidia announced its intention to add full support for the OpenCL 1.0 specification to its GPU Computing Toolkit. On October 30, 2009, IBM released its first OpenCL implementation as a part of the XL compilers.
Accelerations of calculations by factors of up to 1000 are possible with OpenCL on graphics cards compared with a normal CPU.
Some important features of later OpenCL versions, such as double-precision or half-precision operations, are optional in 1.0.
OpenCL 1.1
OpenCL 1.1 was ratified by the Khronos Group on June 14, 2010 and adds significant functionality for enhanced parallel programming flexibility, functionality, and performance including:
New data types including 3-component vectors and additional image formats;
Handling commands from multiple host threads and processing buffers across multiple devices;
Operations on regions of a buffer including read, write and copy of 1D, 2D, or 3D rectangular regions;
Enhanced use of events to drive and control command execution;
Additional OpenCL built-in C functions such as integer clamp, shuffle, and asynchronous strided copies;
Improved OpenGL interoperability through efficient sharing of images and buffers by linking OpenCL and OpenGL events.
OpenCL 1.2
On November 15, 2011, the Khronos Group announced the OpenCL 1.2 specification, which added significant functionality over the previous versions in terms of performance and features for parallel programming. Most notable features include:
Device partitioning: the ability to partition a device into sub-devices so that work assignments can be allocated to individual compute units. This is useful for reserving areas of the device to reduce latency for time-critical tasks.
Separate compilation and linking of objects: the functionality to compile OpenCL into external libraries for inclusion into other programs.
Enhanced image support (optional): 1.2 adds support for 1D images and 1D/2D image arrays. Furthermore, the OpenGL sharing extensions now allow for OpenGL 1D textures and 1D/2D texture arrays to be used to create OpenCL images.
Built-in kernels: custom devices that contain specific unique functionality are now integrated more closely into the OpenCL framework. Kernels can be called to use specialised or non-programmable aspects of underlying hardware. Examples include video encoding/decoding and digital signal processors.
DirectX functionality: DX9 media surface sharing allows for efficient sharing between OpenCL and DX9 or DXVA media surfaces. Equally, for DX11, seamless sharing between OpenCL and DX11 surfaces is enabled.
The ability to force IEEE 754 compliance for single precision floating point math: OpenCL by default allows the single precision versions of the division, reciprocal, and square root operation to be less accurate than the correctly rounded values that IEEE 754 requires. If the programmer passes the "-cl-fp32-correctly-rounded-divide-sqrt" command line argument to the compiler, these three operations will be computed to IEEE 754 requirements if the OpenCL implementation supports this, and will fail to compile if the OpenCL implementation does not support computing these operations to their correctly-rounded values as defined by the IEEE 754 specification. This ability is supplemented by the ability to query the OpenCL implementation to determine if it can perform these operations to IEEE 754 accuracy.
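For example (an illustrative sketch, not part of the original article), a host program can pass this option when building its kernels; a non-successful return value indicates that the build failed, for instance because the implementation does not support the correctly rounded operations.
// Request IEEE 754 correctly rounded single-precision divide and sqrt.
cl_int err = clBuildProgram(program, 0, NULL,
                            "-cl-fp32-correctly-rounded-divide-sqrt", NULL, NULL);
// err != CL_SUCCESS if the option is unsupported or the build fails.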
OpenCL 2.0
On November 18, 2013, the Khronos Group announced the ratification and public release of the finalized OpenCL 2.0 specification. Updates and additions to OpenCL 2.0 include:
Shared virtual memory
Nested parallelism
Generic address space
Images (optional, include 3D-Image)
C11 atomics
Pipes
Android installable client driver extension
half precision extended with optional cl_khr_fp16 extension
cl_double: double precision IEEE 754 (optional)
OpenCL 2.1
The ratification and release of the OpenCL 2.1 provisional specification was announced on March 3, 2015 at the Game Developer Conference in San Francisco. It was released on November 16, 2015. It introduced the OpenCL C++ kernel language, based on a subset of C++14, while maintaining support for the preexisting OpenCL C kernel language. Vulkan and OpenCL 2.1 share SPIR-V as an intermediate representation allowing high-level language front-ends to share a common compilation target. Updates to the OpenCL API include:
Additional subgroup functionality
Copying of kernel objects and states
Low-latency device timer queries
Ingestion of SPIR-V code by runtime
Execution priority hints for queues
Zero-sized dispatches from host
AMD, ARM, Intel, HPC, and YetiWare have declared support for OpenCL 2.1.
OpenCL 2.2
OpenCL 2.2 brings the OpenCL C++ kernel language into the core specification for significantly enhanced parallel programming productivity. It was released on May 16, 2017. Maintenance Update released in May 2018 with bugfixes.
The OpenCL C++ kernel language is a static subset of the C++14 standard and includes classes, templates, lambda expressions, function overloads and many other constructs for generic and meta-programming.
Uses the new Khronos SPIR-V 1.1 intermediate language which fully supports the OpenCL C++ kernel language.
OpenCL library functions can now use the C++ language to provide increased safety and reduced undefined behavior while accessing features such as atomics, iterators, images, samplers, pipes, and device queue built-in types and address spaces.
Pipe storage is a new device-side type in OpenCL 2.2 that is useful for FPGA implementations by making connectivity size and type known at compile time, enabling efficient device-scope communication between kernels.
OpenCL 2.2 also includes features for enhanced optimization of generated code: applications can provide the value of specialization constant at SPIR-V compilation time, a new query can detect non-trivial constructors and destructors of program scope global objects, and user callbacks can be set at program release time.
Runs on any OpenCL 2.0-capable hardware (only a driver update is required).
OpenCL 3.0
The OpenCL 3.0 specification was released on September 30, 2020 after being in preview since April 2020. OpenCL 1.2 functionality has become a mandatory baseline, while all OpenCL 2.x and OpenCL 3.0 features were made optional. The specification retains the OpenCL C language and deprecates the OpenCL C++ Kernel Language, replacing it with the C++ for OpenCL language based on a Clang/LLVM compiler which implements a subset of C++17 and SPIR-V intermediate code.
Version 3.0.7 of C++ for OpenCL with some Khronos OpenCL extensions was presented at IWOCL 21.
Nvidia improved Khronos Vulkan interop with semaphores and memory sharing.
Roadmap
When releasing OpenCL 2.2, the Khronos Group announced that OpenCL would converge where possible with Vulkan to enable OpenCL software deployment flexibility over both APIs. This has now been demonstrated by Adobe's Premiere Rush using the clspv open source compiler to compile significant amounts of OpenCL C kernel code to run on a Vulkan runtime for deployment on Android. OpenCL has a forward-looking roadmap independent of Vulkan, with 'OpenCL Next' under development and targeting release in 2020. OpenCL Next may integrate extensions such as Vulkan / OpenCL Interop, Scratch-Pad Memory Management, Extended Subgroups, SPIR-V 1.4 ingestion and SPIR-V Extended debug info. OpenCL is also considering Vulkan-like loader and layers and a 'Flexible Profile' for deployment flexibility on multiple accelerator types.
Open source implementations
OpenCL consists of a set of headers and a shared object that is loaded at runtime. An installable client driver (ICD) must be installed on the platform for every class of vendor that the runtime needs to support. That is, for example, in order to support Nvidia devices on a Linux platform, the Nvidia ICD would need to be installed such that the OpenCL runtime (the ICD loader) would be able to locate the ICD for the vendor and redirect the calls appropriately. The standard OpenCL header is used by the consumer application; calls to each function are then proxied by the OpenCL runtime to the appropriate driver using the ICD. Each vendor must implement each OpenCL call in their driver.
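The following stand-alone C sketch (not part of the original article) shows how an application typically asks the ICD loader which platforms and devices are installed, similar to what the clinfo tools mentioned below do; error handling is omitted.
#include <stdio.h>
#include "CL/opencl.h"

int main(void)
{
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, NULL, &num_platforms);   // ask the ICD loader how many vendor platforms exist
    cl_platform_id platforms[8];
    clGetPlatformIDs(8, platforms, NULL);        // retrieve up to 8 platform handles

    for (cl_uint p = 0; p < num_platforms && p < 8; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
        printf("Platform %u: %s, %u device(s)\n", p, name, num_devices);
    }
    return 0;
}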
The Apple, Nvidia, ROCm, RapidMind and Gallium3D implementations of OpenCL are all based on the LLVM Compiler technology and use the Clang compiler as their frontend.
MESA Gallium Compute An implementation of OpenCL (currently incomplete OpenCL 1.1, mostly done for AMD Radeon GCN) for a number of platforms is maintained as part of the Gallium Compute Project, which builds on the work of the Mesa project to support multiple platforms. Formerly this was known as CLOVER. Current development consists mostly of support for running the incomplete framework with current LLVM and Clang, plus some new features such as fp16 in Mesa 17.3; the target is complete OpenCL 1.0, 1.1 and 1.2 for AMD and Nvidia. New basic development is done by Red Hat with SPIR-V, also for Clover. The new target is a modular OpenCL 3.0 with full support of OpenCL 1.2. The current state is available in Mesamatrix. Image support is the current focus of development.
BEIGNET An implementation by Intel for its Ivy Bridge and later hardware was released in 2013. This software from Intel's China team has attracted criticism from developers at AMD and Red Hat, as well as Michael Larabel of Phoronix. Version 1.3.2 supports OpenCL 1.2 completely (Ivy Bridge and higher) and OpenCL 2.0 optionally for Skylake and newer. Support for Android has been added to Beignet. Current development targets only OpenCL 1.2 and 2.0; the road to OpenCL 2.1, 2.2 and 3.0 has passed to NEO.
NEO An implementation by Intel for Gen 8 Broadwell and later (Gen 9) hardware was released in 2018. This driver replaces the Beignet implementation for supported platforms (not the older 6th gen to Haswell). NEO provides OpenCL 2.1 support on Core platforms and OpenCL 1.2 on Atom platforms. As of 2020, Gen 11 Ice Lake and Gen 12 Tiger Lake graphics are also supported. The new OpenCL 3.0 is available for Alder Lake, and for Tiger Lake to Broadwell, with version 20.41 and later. It now includes the optional OpenCL 2.0 and 2.1 features complete and some of 2.2.
ROCm
Created as part of AMD's GPUOpen, ROCm (Radeon Open Compute) is an open source Linux project built on OpenCL 1.2 with language support for 2.0. The system is compatible with all modern AMD CPUs and APUs (currently partly GFX 7, GFX 8 and 9), as well as Intel Gen 7.5+ CPUs (only with PCIe 3.0). With version 1.9, support was in some points extended experimentally to hardware with PCIe 2.0 and without atomics. An overview of the work was presented at XDC2018. ROCm version 2.0 supports full OpenCL 2.0, but some errors and limitations remain on the to-do list. Version 3.3 improved details. Version 3.5 supports OpenCL 2.2. Version 3.10 came with improvements and new APIs. ROCm 4.0, announced at SC20, supports the AMD compute card Instinct MI100. Current documentation for 4.3.1 is available at GitHub. OpenCL 3.0 is work in progress.
POCL A portable implementation supporting CPUs and some GPUs (via CUDA and HSA), building on Clang and LLVM. With version 1.0, OpenCL 1.2 was nearly fully implemented along with some 2.x features. Version 1.2 works with LLVM/Clang 6.0 and 7.0 and has full OpenCL 1.2 support, with all tickets in milestone 1.2 closed; OpenCL 2.0 is nearly fully implemented. Version 1.3 supports Mac OS X. Version 1.4 includes support for LLVM 8.0 and 9.0. Version 1.5 implements LLVM/Clang 10 support. Version 1.6 implements LLVM/Clang 11 support and CUDA acceleration; with manual optimization it is at the same level as the Intel compute runtime. Current targets are complete OpenCL 2.x, OpenCL 3.0 and improved performance. Version 1.7 implements LLVM/Clang 12 support and some new OpenCL 3.0 features.
Shamrock A port of Mesa Clover for ARM with full support of OpenCL 1.2; no current development toward 2.0.
FreeOCL A CPU-focused implementation of OpenCL 1.2 that uses an external compiler to create a more reliable platform; no longer under active development.
MOCL An OpenCL implementation based on POCL, developed by NUDT researchers for the Matrix-2000 and released in 2018. The Matrix-2000 architecture is designed to replace the Intel Xeon Phi accelerators of the TianHe-2 supercomputer. This programming framework is built on top of LLVM v5.0 and reuses some code from POCL. To unlock the hardware potential, the device runtime uses a push-based task dispatching strategy, and the performance of kernel atomics is improved significantly. This framework has been deployed on the TH-2A system and is readily available to the public. Some of the software will next be ported back to improve POCL.
VC4CL An OpenCL 1.2 implementation for the VideoCore IV (BCM2763) processor used in the Raspberry Pi before its model 4.
Vendor implementations
Timeline of vendor implementations
June 2008: During Apple’s WWDC conference an early beta of Mac OS X Snow Leopard was made available to the participants; it included the first beta implementation of OpenCL, about 6 months before the final version 1.0 specification was ratified in late 2008. They also showed two demos. One was a grid of 8x8 screens rendered, each displaying the screen of an emulated Apple II machine (64 independent instances in total, each running a famous karate game). This showed task parallelism, on the CPU. The other demo was an N-body simulation running on the GPU of a Mac Pro, a data parallel task.
December 10, 2008: AMD and Nvidia held the first public OpenCL demonstration, a 75-minute presentation at SIGGRAPH Asia 2008. AMD showed a CPU-accelerated OpenCL demo explaining the scalability of OpenCL on one or more cores while Nvidia showed a GPU-accelerated demo.
March 16, 2009: at the 4th Multicore Expo, Imagination Technologies announced the PowerVR SGX543MP, the first GPU of this company to feature OpenCL support.
March 26, 2009: at GDC 2009, AMD and Havok demonstrated the first working implementation for OpenCL accelerating Havok Cloth on AMD Radeon HD 4000 series GPU.
April 20, 2009: Nvidia announced the release of its OpenCL driver and SDK to developers participating in its OpenCL Early Access Program.
August 5, 2009: AMD unveiled the first development tools for its OpenCL platform as part of its ATI Stream SDK v2.0 Beta Program.
August 28, 2009: Apple released Mac OS X Snow Leopard, which contains a full implementation of OpenCL.
September 28, 2009: Nvidia released its own OpenCL drivers and SDK implementation.
October 13, 2009: AMD released the fourth beta of the ATI Stream SDK 2.0, which provides a complete OpenCL implementation on both R700/R800 GPUs and SSE3 capable CPUs. The SDK is available for both Linux and Windows.
November 26, 2009: Nvidia released drivers for OpenCL 1.0 (rev 48).
October 27, 2009: S3 released their first product supporting native OpenCL 1.0 – the Chrome 5400E embedded graphics processor.
December 10, 2009: VIA released their first product supporting OpenCL 1.0 – ChromotionHD 2.0 video processor included in VN1000 chipset.
December 21, 2009: AMD released the production version of the ATI Stream SDK 2.0, which provides OpenCL 1.0 support for R800 GPUs and beta support for R700 GPUs.
June 1, 2010: ZiiLABS released details of their first OpenCL implementation for the ZMS processor for handheld, embedded and digital home products.
June 30, 2010: IBM released a fully conformant version of OpenCL 1.0.
September 13, 2010: Intel released details of their first OpenCL implementation for the Sandy Bridge chip architecture. Sandy Bridge will integrate Intel's newest graphics chip technology directly onto the central processing unit.
November 15, 2010: Wolfram Research released Mathematica 8 with OpenCLLink package.
March 3, 2011: Khronos Group announces the formation of the WebCL working group to explore defining a JavaScript binding to OpenCL. This creates the potential to harness GPU and multi-core CPU parallel processing from a Web browser.
March 31, 2011: IBM released a fully conformant version of OpenCL 1.1.
April 25, 2011: IBM released OpenCL Common Runtime v0.1 for Linux on x86 Architecture.
May 4, 2011: Nokia Research releases an open source WebCL extension for the Firefox web browser, providing a JavaScript binding to OpenCL.
July 1, 2011: Samsung Electronics releases an open source prototype implementation of WebCL for WebKit, providing a JavaScript binding to OpenCL.
August 8, 2011: AMD released the OpenCL-driven AMD Accelerated Parallel Processing (APP) Software Development Kit (SDK) v2.5, replacing the ATI Stream SDK as technology and concept.
December 12, 2011: AMD released AMD APP SDK v2.6 which contains a preview of OpenCL 1.2.
February 27, 2012: The Portland Group released the PGI OpenCL compiler for multi-core ARM CPUs.
April 17, 2012: Khronos released a WebCL working draft.
May 6, 2013: Altera released the Altera SDK for OpenCL, version 13.0. It is conformant to OpenCL 1.0.
November 18, 2013: Khronos announced that the specification for OpenCL 2.0 had been finalized.
March 19, 2014: Khronos releases the WebCL 1.0 specification
August 29, 2014: Intel releases HD Graphics 5300 driver that supports OpenCL 2.0.
September 25, 2014: AMD releases Catalyst 14.41 RC1, which includes an OpenCL 2.0 driver.
January 14, 2015: Xilinx Inc. announces SDAccel development environment for OpenCL, C, and C++, achieves Khronos Conformance
April 13, 2015: Nvidia releases WHQL driver v350.12, which includes OpenCL 1.2 support for GPUs based on Kepler or later architectures. Driver 340+ support OpenCL 1.1 for Tesla and Fermi.
August 26, 2015: AMD released AMD APP SDK v3.0 which contains full support of OpenCL 2.0 and sample coding.
November 16, 2015: Khronos announced that the specification for OpenCL 2.1 had been finalized.
April 18, 2016: Khronos announced that the specification for OpenCL 2.2 had been provisionally finalized.
November 3, 2016 Intel support for Gen7+ of OpenCL 2.1 in SDK 2016 r3
February 17, 2017: Nvidia begins evaluation support of OpenCL 2.0 with driver 378.66.
May 16, 2017: Khronos announced that the specification for OpenCL 2.2 had been finalized with SPIR-V 1.2.
May 14, 2018: Khronos announced Maintenance Update for OpenCL 2.2 with Bugfix and unified headers.
April 27, 2020: Khronos announced provisional Version of OpenCL 3.0
June 1, 2020: Intel Neo Runtime with OpenCL 3.0 for new Tiger Lake
June 3, 2020: AMD announced RocM 3.5 with OpenCL 2.2 Support
September 30, 2020: Khronos announced that the specifications for OpenCL 3.0 had been finalized (CTS also available).
October 16, 2020: Intel announced with Neo 20.41 support for OpenCL 3.0 (includes mostly of optional OpenCL 2.x)
April 6, 2021: Nvidia supports OpenCL 3.0 for Ampere. Maxwell and later GPUs also supports OpenCL 3.0 with Nvidia driver 465+.
Devices
As of 2016, OpenCL runs on Graphics processing units, CPUs with SIMD instructions, FPGAs, Movidius Myriad 2, Adapteva epiphany and DSPs.
Khronos Conformance Test Suite
To be officially conformant, an implementation must pass the Khronos Conformance Test Suite (CTS), with results being submitted to the Khronos Adopters Program. The Khronos CTS code for all OpenCL versions has been available in open source since 2017.
Conformant products
The Khronos Group maintains an extended list of OpenCL-conformant products.
All standard-conformant implementations can be queried using one of the clinfo tools (there are multiple tools with the same name and similar feature set).
Version support
Products and their version of OpenCL support include:
OpenCL 3.0 support
All hardware with OpenCL 1.2+ is possible, OpenCL 2.x only optional, Khronos Test Suite available since 2020-10
(2020) Intel NEO Compute: 20.41+ for Gen 12 Tiger Lake to Broadwell (include full 2.0 and 2.1 support and parts of 2.2)
(2020) Intel 6th, 7th, 8th, 9th, 10th, 11th gen processors (Skylake, Kaby Lake, Coffee Lake, Comet Lake, Ice Lake, Tiger Lake) with latest Intel Windows graphics driver
(2021) Intel 11th, 12th gen processors (Rocket Lake, Alder Lake) with latest Intel Windows graphics driver
(2022) Intel 13th gen processors (Raptor Lake) with latest Intel Windows graphics driver
(2021) Nvidia Maxwell, Pascal, Volta, Turing and Ampere with Nvidia graphics driver 465+
OpenCL 2.2 support
None yet: Khronos Test Suite ready, with Driver Update all Hardware with 2.0 and 2.1 support possible
Intel NEO Compute: Work in Progress for actual products
ROCm: Version 3.5+ mostly
OpenCL 2.1 support
(2018+) Support backported to Intel 5th and 6th gen processors (Broadwell, Skylake)
(2017+) Intel 7th, 8th, 9th, 10th gen processors (Kaby Lake, Coffee Lake, Comet Lake, Ice Lake)
Khronos: with Driver Update all Hardware with 2.0 support possible
OpenCL 2.0 support
(2011+) AMD GCN GPU's (HD 7700+/HD 8000/Rx 200/Rx 300/Rx 400/Rx 500/Rx 5000-Series), some GCN 1st Gen only 1.2 with some Extensions
(2013+) AMD GCN APU's (Jaguar, Steamroller, Puma, Excavator & Zen-based)
(2014+) Intel 5th & 6th gen processors (Broadwell, Skylake)
(2015+) Qualcomm Adreno 5xx series
(2018+) Qualcomm Adreno 6xx series
(2017+) ARM Mali (Bifrost) G51 and G71 in Android 7.1 and Linux
(2018+) ARM Mali (Bifrost) G31, G52, G72 and G76
(2017+) incomplete Evaluation support: Nvidia Kepler, Maxwell, Pascal, Volta and Turing GPU's (GeForce 600, 700, 800, 900 & 10-series, Quadro K-, M- & P-series, Tesla K-, M- & P-series) with Driver Version 378.66+
OpenCL 1.2 support
(2011+) for some AMD GCN 1st Gen some OpenCL 2.0 Features not possible today, but many more Extensions than Terascale
(2009+) AMD TeraScale 2 & 3 GPU's (RV8xx, RV9xx in HD 5000, 6000 & 7000 Series)
(2011+) AMD TeraScale APU's (K10, Bobcat & Piledriver-based)
(2012+) Nvidia Kepler, Maxwell, Pascal, Volta and Turing GPU's (GeForce 600, 700, 800, 900, 10, 16, 20 series, Quadro K-, M- & P-series, Tesla K-, M- & P-series)
(2012+) Intel 3rd & 4th gen processors (Ivy Bridge, Haswell)
(2013+) Qualcomm Adreno 4xx series
(2013+) ARM Mali Midgard 3rd gen (T760)
(2015+) ARM Mali Midgard 4th gen (T8xx)
OpenCL 1.1 support
(2008+) some AMD TeraScale 1 GPU's (RV7xx in HD4000-series)
(2008+) Nvidia Tesla, Fermi GPU's (GeForce 8, 9, 100, 200, 300, 400, 500-series, Quadro-series or Tesla-series with Tesla or Fermi GPU)
(2011+) Qualcomm Adreno 3xx series
(2012+) ARM Mali Midgard 1st and 2nd gen (T-6xx, T720)
OpenCL 1.0 support
Hardware whose first driver supported only OpenCL 1.0 was mostly updated to 1.1 and 1.2 later.
Portability, performance and alternatives
A key feature of OpenCL is portability, via its abstracted memory and execution model, and the programmer is not able to directly use hardware-specific technologies such as inline Parallel Thread Execution (PTX) for Nvidia GPUs unless they are willing to give up direct portability on other platforms. It is possible to run any OpenCL kernel on any conformant implementation.
However, performance of the kernel is not necessarily portable across platforms. Existing implementations have been shown to be competitive when kernel code is properly tuned, though, and auto-tuning has been suggested as a solution to the performance portability problem, yielding "acceptable levels of performance" in experimental linear algebra kernels. Portability of an entire application containing multiple kernels with differing behaviors was also studied, and shows that portability only required limited tradeoffs.
A study at Delft University from 2011 that compared CUDA programs and their straightforward translation into OpenCL C found CUDA to outperform OpenCL by at most 30% on the Nvidia implementation. The researchers noted that their comparison could be made fairer by applying manual optimizations to the OpenCL programs, in which case there was "no reason for OpenCL to obtain worse performance than CUDA". The performance differences could mostly be attributed to differences in the programming model (especially the memory model) and to NVIDIA's compiler optimizations for CUDA compared to those for OpenCL.
Another study at D-Wave Systems Inc. found that "The OpenCL kernel’s performance is between about 13% and 63% slower, and the end-to-end time is between about 16% and 67% slower" than CUDA's performance.
The fact that OpenCL allows workloads to be shared by CPU and GPU, executing the same programs, means that programmers can exploit both by dividing work among the devices. This leads to the problem of deciding how to partition the work, because the relative speeds of operations differ among the devices. Machine learning has been suggested to solve this problem: Grewe and O'Boyle describe a system of support-vector machines trained on compile-time features of program that can decide the device partitioning problem statically, without actually running the programs to measure their performance.
In a comparison of current graphics cards of the AMD RDNA 2 and Nvidia RTX series, OpenCL tests gave no clear winner. Possible performance increases from the use of Nvidia CUDA or OptiX were not tested.
See also
Advanced Simulation Library
AMD FireStream
BrookGPU
C++ AMP
Close to Metal
CUDA
DirectCompute
GPGPU
HIP
Larrabee
Lib Sh
List of OpenCL applications
OpenACC
OpenGL
OpenHMPP
OpenMP
Metal
RenderScript
SequenceL
SIMD
SYCL
Vulkan
WebCL
References
External links
for WebCL
International Workshop on OpenCL (IWOCL) sponsored by The Khronos Group
2009 software
Application programming interfaces
Cross-platform software
GPGPU
OpenCL
Parallel computing |
52135759 | https://en.wikipedia.org/wiki/The%20Lorax%20%28musical%29 | The Lorax (musical) | The Lorax is a stage adaptation of the children's novel of the same name by Dr. Seuss, adapted by David Greig and featuring songs by Charlie Fink.
The play made its world premiere on 4 December 2015 at The Old Vic in London.
Productions
The Old Vic, London (2015 & 17)
In April 2015 it was announced that a stage adaptation of Dr. Seuss' The Lorax would be performed for the following Christmas as part of Matthew Warchus' first season as artistic director at The Old Vic. It was announced to be adapted by David Greig and directed by Max Webster.
The production began on 4 December 2015 and finished on 16 January 2016. The production's creative team also consisted of Noah and the Whale frontman Charlie Fink writing music and lyrics, Drew McOnie as choreographer, Rob Howell as designer, John Clark as lighting designer, Tom Gibbons as sound designer, Phil Bateman as musical director/arranger and Nick Barnes and Finn Caldwell as puppetry designers. The cast included Simon Lipkin as the title role of 'The Lorax' (assisted by Laura Cubitt and Ben Thompson as puppeteers) and Simon Paisley Day as 'The Once-ler'.
The production will return to The Old Vic for three weeks only from 15 October to 7 November 2017. Casting is to be announced.
North America (2017–18)
The production will transfer to the Royal Alexandra Theatre in Toronto, Canada for a Christmas run from December 9 to January 21, 2018.
Following the Toronto run, the show will be produced in partnership with The Old Vic and Children's Theatre Company in Minneapolis where it will be performed from April 17 to June 10, 2018, before transferring to the Old Globe Theatre in San Diego from July 3 to August 12, 2018.
Old Vic, Virtual (2021)
From 14 to 17 April 2021 an "inventively transformed ... semi-staged pint-size version" was streamed live from the Old Vic stage, during the UK's Covid-19 lockdown.
Synopsis
Act I
On a gray street at the end of town, there is a house where the Once-ler lives. Nearby there’s a broken statue of the Lorax with the word ‘unless’ engraved into it ("Life is Tough"). A kid wants to find out more about the Lorax. After being paid, the Once-ler, who is now very old, tells her a story, starting with his own childhood…
His family used to run a mill, but the Once-ler used to daydream about things he could invent instead. His family are poor and have to rent out the Once-ler’s room to a lodger, so he decides to go and find his fortune elsewhere. While travelling, the Once-ler dreams that he’ll become rich as long as he has just one good idea ("I Could Be a Great Man").
He arrives in Paradise Valley and is delighted by all the new things he sees there. He sets to work, but when he cuts down a truffula tree the Lorax appears. The Lorax speaks for the trees and is angry one has been cut down. The Once-ler explains it was to make a "thneed", which the Lorax thinks is useless ("It's a Thneed"). He shows the Once-ler the beauty of the valley, and how he doesn’t need to create anything new; everything he needs to live is in abundance in the valley ("Everything You Need’s Right Here").
The Lorax leaves on his summer break after planting a new truffula tree seed. The Once-ler begins to see how silly his thneed idea is, until a businessman buys one. He throws himself into creating a new thneed business, and invites his family to join him and set up a thneed knitting factory. They are all very excited about becoming rich ("When We Get Rich").
The Lorax returns to find half the forest has been chopped down and confronts the Once-ler, arguing that the wildlife needs the trees to live as well. They agree that only trees in the area called "Once-ler Nook" will be cut down.
Soon after, the factory runs out of trees. At first, the Once-ler says they must stop making thneeds, but his family and the people of the town pressure him into starting work again and cutting down other trees in the valley ("Great Man"). They say he also made a promise to them that they’d be rich, and persuade the Once-ler that if he continues he’ll become a powerful man. As a compromise, the Once-ler creates a nature reserve. The Lorax hates the idea and is upset about the pollution which is killing the animals; he just wants the forest to return to how it was before. Instead of stopping, the Once-ler creates a super axe hacker which cuts down trees even faster ("Super Axe Hacker").
Act II
Two factory workers are opening the factory; meanwhile, the Lorax sneaks inside with the animals of the forest. They start a protest to save the trees, and a TV news crew turns up to report it. The Once-ler tries to impress Samelore the reporter with his machinery, but the Lorax exposes all the pollution the factory is creating and how it’s affecting the wildlife ("We Are One").
The Once-ler starts to apologize to the viewers at home, but just as he agrees to shut the factory, he announces a new version of the thneed, which makes it even more popular ("Thneed 2.0"). The Lorax sits alone on a stump and watches all the animals leave the area. The Once-ler treks up to visit him, to ask if they can still be friends. The Lorax says he’s leaving because the forest has gone. He leaves the Once-ler with one word that he doesn’t understand – "unless". At that moment, the last truffula tree is cut down.
The Once-ler’s family pack up their things and leave along with the rest of the town ("When We Get Rich" (reprise)). The Once-ler is left on his own, and the story ends. The kid who has been listening to the story says that it can’t be the end – she wants to know what "unless" means. The old Once-ler has been thinking about it for years, but doesn’t understand and can’t think of a way to bring the Lorax back when there’s no forest. The kid has an idea: they need to plant a new truffula tree. The Once-ler says it won’t work without a Lorax, but the kid persuades him to let her try. She plants the seed, waters it and waits, and finally it begins to grow ("Take It Wherever You Go"). The Once-ler is thrilled, and the kid asks if the Lorax will come back now. They finally realise that a Lorax is just someone, anyone, who looks after trees. His last word meant that nothing will get better unless someone like the kid cares enough to protect them and keep planting ("Take It Wherever You Go" (reprise)).
Musical numbers
Act I
"Life is Tough" – Ensemble
"I Could Be a Great Man" – The Once-ler and Ensemble
"It’s a Thneed" – The Once-ler
"Everything You Need’s Right Here" – The Lorax, The Once-ler and Ensemble
"When We Get Rich" – The Once-ler and Once-ler Family
"Great Man" – Von Goo, McGee, and McGann
"Super Axe Hacker" – The Once-ler and Ensemble
Act II
"We Are One" – The Lorax and Ensemble
"Thneed 2.0" – The Once-ler and Ensemble
"When We Get Rich" (reprise) – Once-ler Family
"Take It Wherever You Go" – The Lorax
"Take It Wherever You Go" (reprise) – Ensemble
Critical reception
The production received rave reviews and was nominated for Best Entertainment and Family at the 2016 Laurence Olivier Awards.
References
External links
Page for premiere on The Old Vic site
Dr. Seuss
Adaptations of works by Dr. Seuss
2015 plays
Children's theatre
Plays based on books |
1108276 | https://en.wikipedia.org/wiki/FLEX%20%28operating%20system%29 | FLEX (operating system) | The FLEX single-tasking operating system was developed by Technical Systems Consultants (TSC) of West Lafayette, Indiana, for the Motorola 6800 in 1976.
Overview
The original version was for 8" floppy disks and the (smaller) version for 5.25" floppies was called mini-Flex. It was also later ported to the Motorola 6809; that version was called Flex09. All versions were text-based and intended for use on display devices ranging from printing terminals like the Teletype Model 33 ASR to smart terminals. While no graphic displays were supported by TSC software, some hardware manufacturers supported elementary graphics and pointing devices.
It was a disk-based operating system, using 256-byte sectors on soft-sectored floppies; the disk structure used linkage bytes in each sector to indicate the next sector in a file or free list. The directory structure was much simplified as a result. TSC (and others) provided several programming languages, including BASIC in two flavors (standard and extended), a tokenizing version of extended BASIC called Pre-compiled BASIC, FORTH, C, FORTRAN, and PASCAL.
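The linked-sector layout can be illustrated with a short sketch. The following Python fragment walks one file's sector chain in a raw disk image; it assumes, for illustration only, that each 256-byte sector begins with two linkage bytes giving the track and sector number of the next sector, that sectors are numbered from 1 within a track, and that a link of (0, 0) ends the chain. The exact header size and geometry vary between FLEX disk formats, so the constants here are placeholders rather than a definitive description of the on-disk format.

SECTOR_SIZE = 256

def read_chain(image, start_track, start_sector, sectors_per_track, header_bytes=2):
    """Collect the data bytes of one file by following its sector linkage bytes.

    header_bytes counts the bytes preceding file data in each sector; real FLEX
    data sectors may also carry a record-sequence number, making this larger.
    """
    data = bytearray()
    track, sector = start_track, start_sector
    seen = set()  # guard against a corrupted, looping chain
    while (track, sector) != (0, 0):
        if (track, sector) in seen:
            raise ValueError("cycle detected in sector chain")
        seen.add((track, sector))
        offset = (track * sectors_per_track + (sector - 1)) * SECTOR_SIZE
        raw = image[offset:offset + SECTOR_SIZE]
        track, sector = raw[0], raw[1]   # linkage bytes point to the next sector
        data += raw[header_bytes:]       # remaining bytes hold file data
    return bytes(data)

# Hypothetical use on an image with 10 sectors per track:
# with open("flex_disk.img", "rb") as f:
#     contents = read_chain(f.read(), start_track=1, start_sector=1, sectors_per_track=10)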
TSC also wrote a version of FLEX, Smoke Signal DOS, for the California hardware manufacturer Smoke Signal Broadcasting; this version used forward and back linkage bytes in each sector, which increased disk reliability at the expense of compatibility and speed.
Later, TSC introduced the multitasking, multi-user, Unix-like UniFLEX operating system, which required DMA disk controllers and 8" disks, and consequently sold only in small numbers. Several of the TSC computer languages were ported to UniFLEX.
During the early 1980s, FLEX was offered by Compusense Ltd as an operating system for the 6809-based Dragon 64 home computer.
Commands
The following commands are supported by different versions of the FLEX operating system.
APPEND
ASN
BACKUP
BUILD
CAT
COPY
COPYNEW
C4MAT
CLEAN
DATE
DELETE
ECHO
EXEC
FIX
GET
I
JUMP
LINK
LIST
MEMTEST1
MON
N
NEWDISK
O
P
P.COR
PO
PRINT
PROT
PSP
Q
QCHECK
READPROM
RENAME
RM
S
SAVE
SAVE.LOW
SBOX
SP
STARTUP
TOUCH
TTYSET
UCAL
USEMF
VER
VERIFY
VERSION
WRITPROM
XOUT
Y
See also
Microsoft BASIC-68 for FLEX
Microsoft BASIC-69 for FLEX
References
External links
FLEX User Group
SWTPC 6800 FLEX 2 and 6809 FLEX 9 / UniFLEX / OS9 Level 1 emulator
Windows-based 6809 Emulator + Flex09 and 6809 applications
AmigaDOS-based 6809 Emulator + Flex09 and 6809 applications
The Missing 6809 UniFLEX Archive
DragonWiki
SWTPC documentation collection
FLEX Software Archive
Discontinued operating systems
Disk operating systems
TRS-80 Color Computer
1976 software |
15587993 | https://en.wikipedia.org/wiki/Online%20focus%20group | Online focus group | An online focus group is one type of focus group, and is a sub-set of online research methods. They are typically an appropriate research method for consumer research, business-to-business research and political research.
Typical operation
A moderator invites pre-screened, qualified respondents who represent the target of interest to log on to conferencing software at a pre-arranged time and to take part in an online focus group. It is common for respondents to receive an incentive for participating. Discussions generally last from 60 to 90 minutes. The moderator guides the discussion using a combination of predetermined questions and unscripted probes. In the best discussions, as with face-to-face groups, respondents interact with each other as well as the moderator in real time to generate deeper insights about the topic.
Appropriateness as a research method and advantages
Online focus groups are appropriate for consumer research, business-to-business research and political research. Interacting over the web avoids a significant amount of travel expense and allows respondents from all over the world to gather electronically, producing a more representative sample. Respondents often open up more online than they would in person, which is valuable for sensitive subjects. Like in-person focus groups, online groups are usually limited to 8-10 participants. 'Whiteboard' exercises and the ability to mark up concepts or other visual stimuli simulate many of the characteristics of in-person groups.
In addition to the savings on travel, online focus groups often can be accomplished faster than traditional groups because respondents are recruited from online panel members who are often qualified to match research criteria.
Software options
There are a variety of software options, most of which offer similar features but vary significantly in price. Software should be chosen carefully, ensuring that it is easy for both the researcher and the participants to use and that it meets the research needs effectively. Software is only one aspect of online groups, just as facilities are only one aspect of face-to-face groups. As with in-person groups, the skill of the moderator, the quality of the recruiting and the ability to tie the results to research objectives and business decisions are critical to the value of the research to the client.
A newer type of online focus group involves single participants with no moderator (unmoderated online focus groups). A system invites prescreened, qualified respondents to participate on a "first come, first served" basis, and to conduct a task or series of tasks such as interacting with a website or website prototype, reacting to an online ad or concept, or viewing videos or commercials (whether for TV or online production), while at their home or workplace. While the participant is conducting the assigned task, the participant's webcam records their face and, at the same time, every action taking place on the screen is recorded. After the task is completed, the participant is asked to answer a series of post-task survey questions such as "What was the message being conveyed by that ad? Why did you stop viewing that video? Why were you unable to complete the goal?", and so on.
The results are composited into a 360° video-in-video (ViV) recording, in which a capture of the desktop, covering both browser and non-browser activity (what the participant did), plays in synchronization with a webcam recording of the participant in their home or workplace (what the participant said, who they are, and their context).
The first recorded online focus group was led by Bruce Hall (President, Eureka! Inventing) and Doug Brownstone (Rutgers University), both marketers at Novartis Consumer Health at the time based in Summit, New Jersey. While at a conference in Scottsdale, Arizona in June, 1995 they led an online focus group on the Perdiem laxative brand which included ten women recruited from the brand's customer database. While the online tools were primitive at the time it was deemed to be valuable in collecting consumer insights.
This service was first brought to market by www.userlytics.com, and initially focused on the website usability and user experience field. However, its uses have since expanded to hosted prototype testing, ad and campaign optimization prior to multivariate testing, understanding analytics results, desktop and enterprise user interface (UI) testing, and software as a service (SaaS) testing.
Patent information
U.S. Patent No. 6,256,663 is summarized as 'System and Method For Conducting Focus Groups Using Remotely Located Participants Over A Computer Network' and was filed on January 22, 1999, by Greenfield Online, Inc.
The market research technology provider Itracks (Interactive Tracking Systems Inc.) later acquired the patent in 2001 from Greenfield Online.
See also
Online interviews
References
Qualitative research
Marketing techniques |
405835 | https://en.wikipedia.org/wiki/John%20C.%20Slater | John C. Slater | John Clarke Slater (December 22, 1900 – July 25, 1976) was a noted American physicist who made major contributions to the theory of the electronic structure of atoms, molecules and solids. He also made major contributions to microwave electronics. He received a B.S. in Physics from the University of Rochester in 1920 and a Ph.D. in Physics from Harvard in 1923, then did post-doctoral work at the universities of Cambridge (briefly) and Copenhagen. On his return to the U.S. he joined the Physics Department at Harvard.
In 1930, Karl Compton, the President of MIT, appointed Slater as Chairman of the MIT Department of Physics. He recast the undergraduate physics curriculum, wrote 14 books between 1933 and 1968, and built a department of major international prestige. During World War II, his work on microwave transmission, done partly at the Bell Laboratories and in association with the MIT Radiation Laboratory, was of major importance in the development of radar.
In 1950, Slater founded the Solid State and Molecular Theory Group (SSMTG) within the Physics Department. The following year, he resigned the chairmanship of the department and spent a year at the Brookhaven National Laboratory of the Atomic Energy Commission. He was appointed Institute Professor of Physics and continued to direct work in the SSMTG until he retired from MIT in 1965, at the mandatory retirement age of 65.
He then joined the Quantum Theory Project of the University of Florida as Research Professor, where the retirement age allowed him to work for another five years. The SSMTG has been regarded as the precursor of the MIT Center for Materials Science and Engineering (CMSE). His scientific autobiography and three interviews present his views on research, education and the role of science in society.
In 1964, Slater and his then-92 year-old father, who had headed the Department of English at the University of Rochester many years earlier, were awarded honorary degrees by that university. Slater's name is part of the terms Bohr-Kramers-Slater theory, Slater determinant and Slater orbital.
Early life and education
Slater's father, a Virginian who had been an undergraduate at Harvard, became head of the English Department at the University of Rochester, which would also be Slater's undergraduate alma mater. Slater's youthful interests lay in things mechanical, chemical, and electrical. A family helper, a college girl, finally put a name (then little-known as a subject) to his set of interests: physics. When Slater entered the University of Rochester in 1917, he took physics courses and, as a senior, assisted in the physics laboratory and did his first independent research for a special honors thesis, a measurement of the dependence on pressure of the intensities of the Balmer lines of hydrogen.
He was accepted into Harvard graduate school, with the choice of a fellowship or assistantship. He chose the assistantship, during which he worked for Percy W. Bridgman. He followed Bridgman's courses in fundamental physics and was introduced into the then-new quantum physics with the courses of E. C. Kemble. He completed the work for the Ph.D. in three years by publishing his (1924) paper Compressibility of the Alkali Halides, which embodied the thesis work he had done under Bridgman. His heart was in theory, and his first publication was not his doctor's thesis, but a note (1924) to Nature on Radiation and Atoms.
After receiving his Ph.D., Slater held a Harvard Sheldon Fellowship for study in Europe. He spent a period in Cambridge, England, before going to Copenhagen. On returning to America, Slater joined the Harvard Physics Department.
Professional career
Chairing the Department of Physics at MIT
When he became President of MIT, Karl Compton "courted" Slater to chair the Physics Department. "Administration (of the Department) took up a good deal of time, more time than he (Slater) would have preferred. John was a good chairman." The following items from the successive issues of the annual MIT President's Report trace the growth and visibility of the Department under Slater's leadership, before World War II, and the ability of the Department to contribute to defense during the war. The first two quotations are from chapters written by Compton in the successive Reports. The other quotations come from the sections about the department, that Slater wrote. These include statements affecting policies in physics education and research at large, and show his deep commitment to both.
1930: "The selection of Dr. John C. Slater as head of the (Physics) Department will strengthen ... undergraduate and graduate work ... the limitation of space has retarded the development of graduate work ... the total number of undergraduates being 53 and ... graduate students 16." (p. 21)
1931: "This has been the first year of the Department in charge of its new Head, Professor John C. Slater ... the subjects actively (researched include) Spectroscopy, Applied Optics, Discharge of Electricity in Gases, Magneto-Optical Phenomena, Studies of Dielectrics, and various aspects of modern and classical theoretical physics." (p. 42)
1932: In the list of papers published by MIT faculty, items 293 to 340 are listed under Department of Physics. (p. 206-208)
1933: "The George Eastman Research Laboratory, into which the Department moved at the beginning of the year, provides for the first time a suitable home for research in Physics at the Institute". Slater states that outside recognition is shown by holders of six National, an International, and a Rockefeller Research Fellowship choosing to come to the Department. Slater describes the dedication of the Laboratory, the hosting of meetings of the International Astronomical Union, the American Physical Society, and a Spectroscopic conference, and ends: "In general the year has been one of settling down to work under satisfactory conditions, after the more difficult transition of the preceding year." (p. 96-98)
1934: "A number of advances in undergraduate teaching have been made or planned." Among the "most conspicuous events" in the department, "we acted as host" to meetings of the National Academy of Sciences, the American Association for the Advancement of Science, the American Physical Society, and a national Spectroscopic Conference, where "the main topic was relation to biology and related fields." Advances in research have been "taking advantage of the unusual facilities" in the Department, and include the work of Warren on structure of liquids, Mueller on dielectric properties, Stockbarger on crystal physics, Harrison on automating spectroscopic measurement, Wulff on hyperfine structure, Boyce on spectra of nebulae, Van de Graaff on high voltage and nuclear research, and Stratton and Morse on ellipsoidal wave functions. (p. 104-106)
1935: Considerable attention is given to major improvements in undergraduate teaching. The extensive comments on research mention the arrival of Robley Evans and his work on a field new to the department—radioactivity, with special attention to nuclear medicine. (p. 102-103)
1936: "The most important development of the year in the Department has been the growing recognition of the significance of applied physics. There has been a tendency in the past among physicists to take interest only in the direct line of development of their science, and to neglect its applications." Slater develops this theme at length, and describes actions within the undergraduate, graduate and faculty work of the Department and at the national level to develop Applied Physics. The description of the flourishing basic research refers to ten different areas, including the upsurge in work on radioactivity. (p. 131-134).
1937 to 1941: These continue in the same vein. But world affairs begin to impact. The 1941 report ends: "The X-ray branch had as a guest Professor Rose C. L. Mooney of Newcomb College, who was prevented by the war from carrying on research in Europe under a Guggenheim Fellowship ... As the year ends, the National Defense effort is beginning to claim the services of a number of staff members. Presumably the coming year will see a large intensification of the effort, though it is hoped that the interference with the regular research and teaching will not be too severe." (p. 129)
1942: This told a very different story. The defense effort had begun to "involve a considerable number of personnel, as well as a good deal of administrative work. With the opening of the Radiation Laboratory of the National Defense Research Committee at the Institute, a number of members of the Department's staff have become associated with that laboratory" followed by a list of over 10 senior faculty who had, and several more gone to other defense projects. (p. 110-111)
1943 to 1945: Slater took leave of absence as Chair, to work on topics of importance in radar. The American Mathematical Society selected him as the Josiah Willard Gibbs lecturer for 1945.
1946: Slater had returned as Chair. He starts his report: "The year of reconversion from war to peace has been one of the very greatest activity. ... Physics during the war achieved an importance which has probably never before been attained by any other science. The Institute, as the leading technical institution of the country and probably the world, should properly have a physics department unequaled anywhere." He lists plans to meet this objective, that proliferate his administrative responsibilities. (p. 133-143)
Setting up interdepartmental laboratories, by restructuring existing laboratories using, as a model, the conversion of the Radiation Laboratory into the Research Laboratory of Electronics (RLE) by Julius Stratton and Albert Hill.
Financing student assistantships and helping shape the role of government financing on an unprecedented scale.
Overseeing Robley Evans' Radioactivity Center (containing a cyclotron) and Van de Graaff's High Voltage Laboratory.
Recruiting physicists familiar with the Manhattan project to build the Laboratory for Nuclear Science and Engineering. This was directed by Jerrold Zacharias. Its first members included Bruno Rossi and Victor Weisskopf.
Setting up the Acoustics Laboratory, directed by Richard Bolt, and the Spectroscopy Laboratory directed by the chemist Richard Lord.
1947: With the hiring of staff and building of laboratories well in hand, Slater begins: "The year in the Physics Department, as in the rest of the Institute, was one of starting the large-scale teaching of returned veterans and other students whose academic careers had been interrupted by the war." He goes on to discuss the needs of students, in the entire Institute, for Physics courses and laboratories, with particular mention of the upsurge in electronics and nuclear science, and he reports briefly on the developments following from his previous report. (p. 139-141)
1948: Slater begins "The current year is the first since the war in which the department has approached normal operation. No new major projects or changes of policy have been introduced." But the department that he has built is vastly different from what it was when he started. Sixteen master's degrees and 47 doctor's degrees were granted. Twenty-five Ph.D. recipients got academic appointments in MIT and other universities. Research flourished, and many scientists visited from European universities and elsewhere in the U.S. (p. 141-143)
1949: The new-styled 'normalcy' continued. "The approach to a steady postwar state continued with few unusual occurrences." The graduate curriculum has been revised and cryogenics enhanced. The continued growth of staff, research grants, industrial contacts and volume of publication are treated as matters of continuity, recognizing at the end, that: "The administrative load of the department has grown so much (it became) wise to appoint an executive officer". Nathaniel Frank, who had worked with John Slater for nearly two decades accepted the post. (p. 149-153)
1950: The future of the Department had been set. There were "few unexpected changes". And with the continued growth, "almost every research project in the Department has concerned itself with undergraduate research". (p. 189-191)
1951: Jay Stratton writes "Professor John C. Slater resigned as Head of the Department of Physics and has been appointed Harry B. Higgins Professor of the Solid State, the first appointment which will carry the title Institute Professor. Professor Slater has been granted a leave of absence for the coming year to carry on research at Brookhaven National Laboratory." (p. 30)
Throughout his Chairmanship, Slater taught, wrote books, produced ideas of major scientific importance, and interacted with colleagues throughout the local, national and international scientific communities. At the personal level, Morse states: "Through most of (the 1930s) he looked more like an undergraduate than a department head ... he could render his guests weak with laughter simply by counting ... in Danish." Much later, S.B. Trickey wrote "While I got to know him reasonably well, I was never able to call J.C. Slater by his given name. His seeming aloofness turned out more to be shyness."
Research
Atoms, molecules and solids: research preceding World War II
Returning in time to 1920, Slater had gone to Harvard to work for a Ph.D. with Percy Bridgman, who studied the behaviour of substances under very high pressures. Slater measured the compressibility of common salt and ten other alkali halides—compounds of lithium, sodium, potassium and rubidium, with fluorine, chlorine and bromine. He described the results as "exactly in accord with Bohr's recent views of the relation between electron structure and the periodic table". This brought Slater's observation concerning the mechanical properties of ionic crystals into line with the theory that Bohr had based on the spectroscopy of gaseous elements. He wrote the alkali halide paper in 1923, having "by the summer of 1922" been "thoroughly indoctrinated ... with quantum theory", in part by the courses of Edwin Kemble following a fascination with Bohr's work during his undergraduate days. In 1924, Slater went to Europe on a Harvard Sheldon Fellowship. After a brief stay at the University of Cambridge, he went on to the University of Copenhagen, where "he explained to Bohr and Kramers his idea (that was) a sort of forerunner of the duality principle, (hence) the celebrated paper" on the work that others dubbed the Bohr-Kramers-Slater (BKS) theory. "Slater suddenly became an internationally known name." Interest in this "old-quantum-theory" paper subsided with the arrival of full quantum mechanics, but Philip M. Morse's biography states that "in recent years it has been recognized that the correct ideas in the article are those of Slater." Slater discusses his early life through the trip to Europe in a transcribed interview.
Slater joined the Harvard faculty on his return from Europe in 1925, then moved to MIT in 1930. His research papers covered many topics. A year-by-year selection, up to his switch to work relating to radar, includes:
1924: the theoretical part of his Ph.D. work, the Bohr-Kramers-Slater (BKS) theory,
1925: widths of spectral lines; ideas that came very close to the principle of electron spin,
1926 and 1927: explicit attention to electron spin, and to the Schrödinger equation;
1928: the Hartree self-consistent field, the Rydberg formula,
1929: the determinantal expression for an antisymmetric wave function,
1930: Slater type orbitals (STOs) and atomic shielding constants,
1931: linear combination of atomic orbitals, van der Waals forces (with Jack Kirkwood, as a Chemistry Research Associate).
1932 to 1935: atomic orbitals, metallic conduction, application of the Thomas–Fermi method to metals,
1936: ferromagnetism, inelastic scattering (with Erik Rudberg, later Chairman of the Nobel Prize committee for Physics), and optical properties of alkali halides (with his Ph.D. student William Shockley, on a topic close to his own Ph.D. work),
1937 and 1938: augmented plane waves, superconductivity, ferromagnetism, electrodynamics,
1939: he published "only" a book, the definitive Introduction to Chemical Physics,
1940: the Grüneisen constant, and the Curie point,
1941: phase transition analogous to ferromagnetism in potassium dihydrogen phosphate.
In his memoir, Morse wrote "In addition to other notable papers ... on ... Hartree's self-consistent field, the quantum mechanical derivation of the Rydberg constant, and the best values of atomic shielding constants, he wrote a seminal paper on directing valency " (what became known, later, as linear combination of atomic orbitals).
In further comments, John Van Vleck pays particular attention to (1) the 1925 study of the spectra of hydrogen and ionized helium, that J.V.V. considers one sentence short of proposing electron spin (which would have led to sharing a Nobel prize), and (2) what J.V.V. regards as Slater's greatest paper, that introduced the mathematical object now called the Slater determinant. "These were some of the achievements (that led to his) election to the National Academy ... at ... thirty-one. He played a key role in lifting American theoretical physics to high international standing." Slater's doctoral students, during this time, included Nathan Rosen Ph.D. in 1932 for a theoretical study of the hydrogen molecule, and William Shockley Ph.D. 1936 for an energy band structure of sodium chloride, who later received a Nobel Prize for the discovery of the transistor.
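The mathematical object referred to here can be stated compactly. As an illustration in modern notation (supplied for the reader and not drawn from the sources quoted above), for N electrons occupying one-electron spin-orbitals \phi_1, \ldots, \phi_N the Slater determinant is

\Psi(\mathbf{x}_1, \ldots, \mathbf{x}_N) = \frac{1}{\sqrt{N!}} \det \begin{pmatrix} \phi_1(\mathbf{x}_1) & \cdots & \phi_N(\mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ \phi_1(\mathbf{x}_N) & \cdots & \phi_N(\mathbf{x}_N) \end{pmatrix},

and because a determinant changes sign when two of its rows are interchanged, the wave function is automatically antisymmetric under exchange of any two electrons, which is how the construction builds the Pauli exclusion principle into many-electron calculations.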
Research during the war and the return to peacetime activities
Slater, in his experimental and theoretical work on the magnetron (key elements paralleled his prior work with self-consistent fields for atoms) and on other topics at the Radiation Laboratory and at the Bell Laboratories, did "more than any other person to provide the understanding requisite to progress in the microwave field", in the words of Mervin Kelly, then head of Bell Labs, quoted by Morse.
Slater's publications during the war and the post-war recovery include a book and papers on microwave transmission and microwave electronics, linear accelerators, cryogenics, and, with Francis Bitter and several other colleagues, superconductors. These publications credit the many other scientists, mathematicians and engineers who participated. Among these,
George H. Vineyard received his Ph.D. with Slater in 1943 for a study of space charge in the cavity magnetron. Later, he became Director of the Brookhaven National Laboratory and President of the American Physical Society. The work of the Radiation Laboratory paralleled research at the Telecommunications Research Establishment in England and the groups maintained a productive liaison.
The Solid State and Molecular Theory Group
Activities
In the words of Robert Nesbet: "Slater founded the SSMTG with the idea of bringing together a younger generation of students and PostDocs with a common interest in the electronic structure and properties of atoms, molecules and solids. This was in part to serve as a balance for electronic physics to survive the overwhelming growth of nuclear physics following the war".
George Koster soon completed his Ph.D., joined the faculty, and became the senior member of the group. He wrote "During the fifteen-year life of the group some sixty persons were members and thirty-four took doctoral degrees with theses connected with its work. In my report I have been unable to separate the work of Slater from that of the group as a whole. He was part of every aspect of the group's research efforts."
Nesbet continued "Every morning in SSMTG began with a coffee session, chaired by Professor Slater, with the junior members seated around a long table ... Every member of the group was expected to contribute a summary of his own work and ideas to the Quarterly Progress Report". The SSMTG QPRs had a wide distribution to university and industrial research libraries, and to individual laboratories. They were quoted widely for scientific and biographical content in journal articles and government reports, and libraries are starting to put them online.
To begin the work of the group, Slater "distilled his experience with the Hartree self-consistent field method" into (1) a simplification that became known as the Xα method, and (2) a relationship between a feature of this method and a magnetic property of the system. These required computations that were excessive for "pencil and paper" work. Slater was quick to give the SSMTG access to the electronic computers that were being developed. An early paper on augmented plane waves used an IBM card programmed calculator. The Whirlwind was used heavily, then the IBM 704 in the MIT Computation Center and then the IBM 709 in the Cooperative Computing Laboratory (see below).
Solid state work progressed more rapidly at first in the SSMTG, with contributions over the first few years by George Koster, John Wood, Arthur Freeman and Leonard Mattheis. Molecular and atomic calculations also flourished in the hands of Fernando J. Corbató, Lee Allen and Alvin Meckler. This initial work followed lines largely set by Slater. Michael Barnett came in 1958. He and John Wood were given faculty appointments. Robert Nesbet, Brian Sutcliffe, Malcolm Harrison and Levente Szasz brought in a variety of further approaches to molecular and atomic problems. Jens Dahl, Alfred Switendick, Jules Moskowitz, Donald Merrifield and Russell Pitzer did further work on molecules, and Fred Quelle on solids.
Slater rarely included his name on the papers of SSMTG members who worked with him. Major pieces of work which he did coauthor dealt with applications of (1) group theory in band structure calculations and (2) equivalent features of linear combination of atomic orbital (LCAO), tight binding and Bloch electron approximations, to interpolate results for the energy levels of solids, obtained by more accurate methods.
People
A partial list of members of the SSMTG (Ph.D. students, post-doctoral members, research staff and faculty, in some cases successively, labeled †, ‡, §, ¶), together with references that report their SSMTG and later activities, follows.
Leland C. Allen †‡, ab initio molecular calculations, electronegativity, Professor of Chemistry Emeritus, Princeton University (2011).
Michael P. Barnett §¶, molecular integrals, software, phototypesetting, cognition, later in industry, Columbia U. and CUNY.
Louis Burnelle‡, molecular calculations, later Professor of Chemistry, New York University.
Earl Callen †
Fernando J. Corbató †, began the molecular calculations in the SSMTG; later a pioneer of time-sharing and recipient of the Turing Award.
George Coulouris §, worked with MPB, later Professor of Computer Science at Queen Mary College of the University of London.
Imre Csizmadia ‡, molecular calculations (LiH), later Professor of Chemistry, U. Toronto, ab initio calculations, drug design.
Jens Dahl ‡, molecular calculations, later Professor of Chemistry, Technical University of Denmark, wrote quantum chemistry text.
Donald E. Ellis §†, molecular calculations, later Professor of Physics and Astronomy at Northwestern University, "real" materials.
Arthur Freeman †‡, orthogonalized plane wave calculations, later Professor of Physics and Astronomy at Northwestern University
Robert P. Futrelle §, programming methods, later Professor of Computer and Information Science at Northeastern University.
Leon Gunther †‡ lattice vibrations in alkali halides, later Professor of Physics at Tufts University, focus on condensed matter theory in many areas, including superconductivity and seminal papers on nanoscopic physics & quantum tunneling of magnetization.
Malcolm Harrison ‡, (died 2007) co-developer of POLYATOM, later Professor of Computer Science, New York University.
Frank Herman, band structure calculations, went into RCA then IBM Research Laboratories, wrote and edited major surveys.
David Howarth ‡, solid state, later Professor of Computer Science, Imperial College, University of London.
John Iliffe §, computer scientist.
San-Ichiro Ishigura ‡, later Professor, Ochanomizu University
Arnold Karo ‡, electronic structure of small molecules, later at Lawrence Livermore Laboratory.
C.W. Kern ‡, molecular calculations, later Professor of Chemistry, Ohio State U., published extensively.
Ryoichi Kikuchi ‡
Walter H. Kleiner, solid state physics, continued at Lincoln Laboratory.
George F. Koster †¶, became Chairman of the Physics Graduate Committee at MIT and wrote two books on solid state physics.
Leonard F. Mattheiss †, augmented plane wave calculations, later at Bell Labs, published about 100 papers.
Roy McWeeny ‡, valence theory, later held chairs at several British Universities and, since 1982, at the University of Pisa, Italy.
Alvin Meckler, first major molecular calculation on Whirlwind (oxygen), later National Security Agency,
Donald Merrifield †, molecular calculations (methane), later President of Loyola University, Los Angeles.
Jules Moskowitz ‡, molecular calculations (benzene), later Chairman, Department of Chemistry, NYU, published 100 papers.
Robert K. Nesbet ‡, molecular calculations, later at IBM Almaden Research Laboratories, published over 200 papers.
Robert H. Parmenter, later Professor of Physics, U. Arizona, crystal properties and superconductivity.
Russell M. Pitzer ‡, molecular calculations (ethane), later Chairman of Chemistry Department, Ohio State U, over 100 papers.
George W. Pratt, Jr. †‡later Professor of Electrical Engineering and CMSE, MIT, solid state electronics.
F.W. Quelle, Jr. augmented plane waves, later laser optics.
Melvin M. Saffren †
Robert Schrieffer wrote his bachelor's thesis on multiplets in heavy atoms, later shared the Nobel Prize for the BCS theory of superconductivity.
Edward Schultz
Harold Schweinler
Hermann Statz ‡, ferromagnetism, later director of research at Raytheon and recipient of 2004 IEEE Microwave Pioneer Award,
Levente Szasz, atomic structure, became Professor of Physics at Fordham University, published two books,
Brian T. Sutcliffe ‡, co-developer of POLYATOM, later Professor of Chemistry, University of York.
Richard E. Watson ‡§, electronic properties of metal atoms, later at Brookhaven, published over 200 papers.
E.B. White †
John Wood †¶, augmented plane waves using Hartree–Fock methods, at Los Alamos National Laboratory (died 1986), published extensively.
Distinguished visitors included Frank Boys, Alex Dalgarno, V. Fano, Anders Fröman, Inge Fischer-Hjalmars, Douglas Hartree, Werner Heisenberg, Per-Olov Löwdin, Chaim Pekeris, Ivar Waller and Peter Wohlfarth.
Slater's further activities at MIT during this time
In the 1962 President's Report, Jay Stratton wrote (on p. 17) "A faculty committee under the chairmanship of Professor John C. Slater has taken primary responsibility for planning the facilities in the new Center for Materials. These include a new Cooperative Computing Laboratory completed this year and equipped with an I.B.M. 709 Computer".
The name Center for Materials Science and Engineering (CMSE) was adopted soon afterward. It embodied the ethos of interdepartmental research and teaching that Slater had espoused throughout his career. The first Director was R.A. Smith, previously Head of the Physics Division of the Royal Radar Establishment in England. He, Slater and Charles Townes, the Provost, had been in close acquaintance since the early years of World War II, working on overlapping topics.
The Center was set up, in accordance with Slater's plans. It "supported research and teaching in Metallurgy and Materials Science, Electrical Engineering, Physics, Chemistry and Chemical Engineering", and preserved MIT as a focus for work in solid state physics. By 1967, two years after Slater left, the MIT Physics Department "had a very, very small commitment to condensed matter physics" because it was so "heavily into high energy physics." But in the same year, the CMSE staff included 55 professors and 179 graduate students. The Center continues to flourish in the 21st century.
The Cooperative Computing Laboratory (CCL) was used, in its first year by some 400 faculty, students and staff. These included (1) members of the SSMTG and the CCL running quantum mechanical calculations and non-numeric applications directed by Slater, Koster, Wood and Barnett, (2) the computer-aided design team of Ross, Coons and Mann, (3) members of the Laboratory for Nuclear Science, (4) Charney and Phillips in theoretical meteorology, and (5) Simpson and Madden in geophysics (from 1964 President's report, p. 336-337).
Personal life and death
In 1926, Slater married Helen Frankenfeld. Their three children (Louise Chapin, John Frederick, and Clarke Rothwell) all followed academic careers. Slater was divorced, and in 1954 he married Rose Mooney, a physicist and crystallographer, who moved to Florida with him in 1965.
At the University of Florida (Gainesville), where the retirement age was 70, Slater was able to enjoy another five years of active research and publication as a Research Professor in the Quantum Theory Project (QTP). In 1975, in his scientific autobiography, he wrote: "The Florida Physics Department was a congenial one, with main emphasis on solid state physics, statistical physics and related fields. It reminded me of the MIT department in the days when I had been department head there. It was a far cry from the MIT Physics Department which I was leaving; by then it had been literally captured by the nuclear theorists." Slater published to the end of his life: his final journal paper, published with John Connolly in 1976, was on a novel approach to molecular orbital theory.
Slater also served on many doctoral committees, including that of Ravi Sharma (Ph.D. 1966, University of Florida, Gainesville). He and Rose told Sharma that he had lost his books and research papers when the truck carrying his belongings overturned during the move from MIT to Gainesville.
Slater died in Sanibel Island, Florida in 1976.
As an educator and advisor
Slater's concern for the well being of others is well illustrated by the following dialog that Richard Feynman relates. It took place at the end of his undergraduate days at MIT, when he wanted to stay on to do a Ph.D. "When I went to Professor Slater and told him of my intentions he said: 'We will not have you here'. I said 'What?' Slater said 'Why do you think you should go to graduate school at MIT?' 'Because it is the best school for science in the country' ... 'That is why you should go to some other school. You should find out how the rest of the world is.' So I went to Princeton. ... Slater was right. And I often advise my students the same way. Learn what the rest of the world is like. The variety is worthwhile."
Summary
From the memoir by Philip Morse: "He contributed significantly to the start of the quantum revolution in physics; he was one of the very few American-trained physicists to do so. He was exceptional in that he persisted in exploring atomic, molecular and solid state physics, while many of his peers were coerced by war, or tempted by novelty, to divert to nuclear mysteries. Not least, his texts and his lectures contributed materially to the rise of the illustrious American generation of physicists of the 1940s and 1950s."
The new generation that Slater launched from the SSMTG and the QTP took knowledge and skills into departments of Physics and Chemistry and Computer Science, into industrial and government laboratories and academe, into research and administration. They have continued and evolved his methodologies, applying them to an increasing variety of topics from atomic energy levels to drug design, and to a host of solids and their properties. Slater imparted knowledge and advice, and he recognized new trends, provided financial support from his grants, and motivational support by sharing the enthusiasms of the protagonists.
In a slight paraphrase of a recent and forward-looking comment by John Connolly, it can be said that the contributions of John C. Slater and his students in the SSMTG and the Quantum Theory Project laid the foundations of density functional theory, which has become one of the premier approximations in quantum theory today.
Slater's papers were bequeathed to the American Philosophical Society by his widow, Rose Mooney Slater, in 1980 and 1982. In August 2003, Alfred Switendick donated a collection of Quarterly Reports of the MIT Solid State and Molecular Theory Group (SSMTG), dating from 1951 to 1965. These are available in several major research libraries.
Awards and honors
Irving Langmuir Award (1967)
Golden Plate Award of the American Academy of Achievement (1969)
National Medal of Science (1970)
Books
Slater, J. C. (1950). Microwave Electronics. New York: Van Nostrand.
References
External links
Biographical Memoir by Philip M. Morse, includes a photo
1900 births
1976 deaths
20th-century American physicists
University of Florida faculty
Members of the International Academy of Quantum Molecular Science
National Medal of Science laureates
American physical chemists
Theoretical chemists
University of Rochester alumni
Harvard University alumni
Scientists at Bell Labs
Fellows of the American Physical Society |
33432052 | https://en.wikipedia.org/wiki/Arista%20Networks | Arista Networks | Arista Networks (formerly Arastra) is an American computer networking company headquartered in Santa Clara, California. The company designs and sells multilayer network switches to deliver software-defined networking (SDN) for large datacenter, cloud computing, high-performance computing, and high-frequency trading environments. These products include 10/25/40/50/100 Gigabit Ethernet low-latency cut-through switches, such as the 7124SX, which remained the fastest switch using SFP+ optics through September 2012, with its sub-500 nanosecond (ns) latency, and the 7500 series, Arista's modular 10G/40G/100Gbit/s switch. Arista's own Linux-based network operating system, Extensible Operating System (EOS), runs on all Arista products.
Corporate history
In 1982, Andy Bechtolsheim cofounded Sun Microsystems, and was its chief hardware designer. In 1995, David Cheriton cofounded Granite Systems with Bechtolsheim, a company that developed Gigabit Ethernet products, which was acquired by Cisco Systems in 1996. In 2001, Cheriton and Bechtolsheim founded another start up, Kealia, which was acquired by Sun in 2004. From 1996 to 2003, Bechtolsheim and Cheriton occupied executive positions at Cisco, leading the development of the Catalyst product line, along with Kenneth Duda who had been Granite Systems' first employee.
In 2004, the three went on to found Arastra (later renamed Arista). Bechtolsheim and Cheriton were able to fund the company themselves. In May 2008, Jayshree Ullal left Cisco after 15 years at the firm, and was appointed CEO of Arista in October 2008.
In June 2014, Arista Networks had its initial public offering on the New York Stock Exchange under the symbol ANET.
In December 2014, Cisco filed two lawsuits against Arista alleging intellectual property infringement, and the United States International Trade Commission issued limited exclusion and cease-and-desist orders concerning two of the features patented by Cisco and upheld an import ban on infringing products. In 2016, on appeal, the ban was reversed following product changes and two overturned Cisco patents, and Cisco's claim of damages was ruled against. In August 2018, Arista agreed to pay Cisco as part of a settlement that included a release for all claims of infringement by Cisco, dismissal of Arista's antitrust claims against Cisco, and a 5-year stand-down between the companies.
In August 2018, Arista Networks acquired Mojo Networks. In September 2018, Arista Networks acquired Metamako and integrated their low latency product line as the 7130 series.
Arista's CEO, Jayshree Ullal, was named to Barron's list of World's Best CEOs in 2018 and 2019.
Products
Extensible Operating System
EOS is Arista's network operating system, and comes as one image that runs across all Arista devices or in a virtual machine (VM). EOS runs on an unmodified Linux kernel with a Fedora-based userland. There are more than 100 independent regular processes, called agents, responsible for different aspects and features of the switch, including drivers that manage the switching application-specific integrated circuits (ASICs), the command-line interface (CLI), Simple Network Management Protocol (SNMP), Spanning Tree Protocol, and various routing protocols. All the state of the switch and its various protocols is centralized in another process, called Sysdb. Separating processing (carried by the agents) from the state (in Sysdb) gives EOS two important properties. The first is software fault containment, which means that if a software fault occurs, any damage is limited to one agent. The second is stateful restarts: since the state is stored in Sysdb, when an agent restarts it picks up where it left off. Since agents are independent processes, they can also be upgraded while the switch is running (a feature called ISSU – In-Service Software Upgrade).
The fact that EOS runs on Linux allows the usage of common Linux tools on the switch itself, such as tcpdump or configuration management systems. EOS provides extensive application programming interfaces (APIs) to communicate with and control all aspects of the switch. To showcase EOS' extensibility, Arista developed a module named CloudVision that extends the CLI to use Extensible Messaging and Presence Protocol (XMPP) as a shared message bus to manage and configure switches. This was implemented simply by integrating an existing open-source XMPP Python library with the CLI.
Programmability
In addition to all the standard programming and scripting abilities traditionally available in a Linux environment, EOS can be programmed using different mechanisms:
Advanced Event Management can be used to react to various events and automatically trigger CLI commands, execute arbitrary scripts or send alerts when state changes occur in the switch, such as an interface going down or a virtual machine migrating to another host.
Event Monitor tracks changes made to the medium access control (MAC), Address Resolution Protocol (ARP), and routing table in a local SQLite database for later querying using standard Structured Query Language (SQL) queries.
eAPI (External API) offers a versioned JSON-RPC interface to execute CLI commands and retrieve their output in structured JSON objects.
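As an illustration of the eAPI mentioned above, the following minimal sketch issues CLI commands to a switch over JSON-RPC from a management host. The endpoint path (/command-api), the method name (runCmds) and the parameter layout follow Arista's published eAPI conventions, but the hostname and credentials are placeholders, and the details should be verified against the eAPI documentation for the EOS release in use.

import json
import ssl
import urllib.request

def run_cmds(host, user, password, cmds):
    """Send a list of CLI commands to an EOS switch via eAPI and return the results."""
    payload = {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": "1",
    }
    url = f"https://{host}/command-api"
    # eAPI authenticates with HTTP basic authentication on the /command-api endpoint.
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, user, password)
    context = ssl.create_default_context()
    context.check_hostname = False          # placeholder setup only; use proper
    context.verify_mode = ssl.CERT_NONE     # certificate verification in practice
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr),
        urllib.request.HTTPSHandler(context=context),
    )
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with opener.open(request) as response:
        return json.loads(response.read())["result"]

# One result object is returned per command sent, e.g.:
# result = run_cmds("switch1.example.com", "admin", "admin", ["show version"])
# print(result[0]["version"])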
Ethernet switches
Arista's product line can be separated into different product families:
7500R series: Modular chassis with a virtual output queueing (VOQ) fabric supporting from 4 to 16 store and forward line cards delivering line-rate non-blocking 10GbE, 40GbE, and 100GbE performance in a 150 Tbit/s fabric supporting a maximum of 576 100GbE ports with 384 GB of packet buffer. Each 100GbE port can also operate as 40GbE or 4x10GbE ports, thus effectively providing 2304 line-rate 10GbE ports with large routing tables.
7300X, 7300X3 and 7320X series: Modular chassis with 4 or 8 line cards in a choice of 10G, 40G and 100G options with 6.4Tbit/s of capacity per line card, for a fabric totaling up to 50Tbit/s of capacity for up to 1024 10GbE ports. Unlike the 7500 series, 10GBASE-T is available on 7300 series line cards.
7280R series: 1U and 2U systems with a common architecture to the 7500R Series, deep buffer VOQ and large routing tables. Many different speed and port combinations from 10GbE to 100GbE.
7200X series: 2U low-latency high-density line-rate 100GbE and 40GbE switches, with up to 12.8Tbit/s of forwarding capacity.
7170 Series: High Performance Multi-function Programmable Platforms, a set of fixed 100G platforms based on Barefoot Tofino packet processor enabling the data plane to be customized using EOS and P4 profiles.
7160 series: 1U programmable, high-performance range of 10GbE, 25GbE and 100GbE switches, with support for AlgoMatch technology and a software-upgradeable packet processor.
7150S series: 1U ultra-low latency cut-through line-rate 10 Gb switches. Port-to-port latency is sub-380ns, regardless of the frame size. Unlike the earlier 7100 series, the switch silicon can be re-programmed to add new features that work at wire-speed, such as Virtual Extensible LAN (VXLAN) or network address translation (NAT/PAT).
7130 series (7130, 7130L, 7130E): 1U and 2U ultra-low latency Layer 1 switch and programmable switches. Layer 1 switching enables mirroring and software-defined port routing with port-to-port latency starting from 4ns, depending on physical distance. The E and L variants allow running custom FPGA applications directly on the switch with a port-to-FPGA latency as low as 3ns. This series comes from the original Metamako product line acquired by Arista Networks in 2018 and runs a combination of MOS and Arista EOS operating systems.
7050X and 7060X series: 1U and 2U low-latency cut-through line-rate 10GbE/25GbE, 40GbE and 100GbE switches. This product line offers higher port density than the 7150 series, in a wider choice of port options and interface speeds at the expense of slightly increased latency (1µs or less). The 7050X and 7060X Series are based on Broadcom Trident and Tomahawk merchant silicon.
7020R series: 1U store and forward line-rate with a choice of either a 1Gb top-of-rack switch, with 6x10Gb uplinks or a 10G with 100G uplinks. These switches use a Deep Buffer architecture, with 3GB of packet memory.
7010 series: 1U low power (52W) line-rate 1Gb top-of-rack switch, with 4x10Gb uplinks.
The low-latency of Arista switches has made the platform prevalent in high-frequency trading environments, such as the Chicago Board Options Exchange (largest U.S. options exchange), Lehman Brothers or RBC Capital Markets. As of October 2009, one third of its customers were big Wall Street firms.
Arista's devices are multilayer switches, which support a range of layer 3 protocols, including IGMP, Virtual Router Redundancy Protocol (VRRP), Routing Information Protocol (RIP), Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), IS-IS, and OpenFlow. The switches are also capable of layer 3 or layer 4 equal-cost multi-path routing (ECMP), and applying per-port L3/L4 access-control lists (ACLs) entirely in hardware.
All of Arista's switches are built using merchant silicon instead of custom switching application-specific integrated circuits (ASICs). This strategy enables Arista to leverage the latest advances in processor manufacturing technology at a lower price, due to the prohibitive costs associated with developing and producing custom chips. Other major competitors such as Cisco Systems and Juniper Networks have also started using the same strategy, which led to multiple competing products built on top of the same chips. For instance, Broadcom's Trident chip is used in some Cisco Nexus switches, Juniper QFX switches, Force10, IBM and HP switches. The integration of the chips with the rest of the system (including integration with the medium access control (MAC), physical layer (PHY), and device drivers on the control plane) and software are what differentiate the competing products.
In November 2013, Arista Networks introduced the Spline network, combining leaf and spine architectures into a single-tier network, which aims to cut operating costs.
In September 2015, Arista introduced the 7060X, 7260X, and 7320X series, refreshing the then-extant 7050X, 7250X, and 7300X series with new, higher-performance 100GbE options.
Major competitors
Extreme Networks
Juniper Networks
Cisco Systems
Hewlett Packard Enterprise (Aruba Networks division)
References
External links
Networking hardware companies
Companies based in Santa Clara, California
American companies established in 2004
Networking companies of the United States
Electronics companies established in 2004
Companies listed on the New York Stock Exchange
2014 initial public offerings |
32512605 | https://en.wikipedia.org/wiki/Eduardo%20Suger | Eduardo Suger | Eduardo Suger Cofiño (born November 29, 1938) is a Swiss-born Guatemalan physicist, scholar, educator, and politician. He is one of the founders of Galileo University in Guatemala City and of the Suger Montano Institute. Suger was the first Central American to receive a PhD in physics.
Early life
Suger was born in Zürich, Switzerland on 29 November 1938 to Emilio Suger, a Swiss national, and Estela Cofiño Valladares of Acatenango, Chimaltenango, Guatemala. When World War II broke out, Suger's father was called up to complete his mandatory military service. Suger's mother spoke no German despite living in Switzerland and traveled to the Guatemalan Consulate in Germany for help returning to Guatemala. Shortly after she and Suger returned, she married Enrique Castañeda Rubio, an engineer and official in the Army, and had four more children. Suger lived with his maternal grandmother nearby from the time his mother remarried until his grandmother passed away in 1949/1950.
Education
Suger graduated from La Preparatoria and earned extra money tutoring his classmates in math. He briefly studied chemistry at Universidad de San Carlos de Guatemala (USAC) before deciding to study at the Federal Institute of Technology (ETH) in Zürich, the same school his role model Albert Einstein attended. He lived with his strict father and stepmother while earning his BS in physics and mathematics and MS in theoretical physics. At 20, he began teaching geometry and physics, and later worked for a quantum mechanics and molecular physics lab at IBM. Suger served in the military before returning to Guatemala in 1964. He subsequently earned his PhD in molecular physics at the University of Texas at Austin (UTA) in 1971. While at UTA, he was inducted into the Sigma Xi science society; did labwork in a molecular physics group; and was an academic assistant for a postgraduate Classical Mechanics course.
Academic career
Suger has been teaching mathematical physics for more than 50 years in universities in four different countries. He taught at the Minerva and Freundenberg Institutes in Zürich; UTA as a PhD candidate in Texas; as a visiting professor in the Informatics and Computer Science departments at Alberto Masferrer Salvadorean University and the Technological University of El Salvador; and USAC, Universidad Francisco Marroquín, Universidad del Valle de Guatemala, Universidad Mariano Gálvez, Rafael Landívar University, and his own Galileo University in Guatemala City. During the 17 years he worked at USAC, he was affiliated with the medical sciences, engineering, chemistry, pharmacy, economic science, and architecture departments and established the secondary school teaching program. In 1977, Suger joined the faculty at UFM and taught technology, accounting, and economic science classes. Later that year, he proposed the creation of a computer sciences department, which would allow for more in-depth study of these fields; the program proved so popular that it was quickly converted into the Computer Science and Information Technology Institute (IICC). In 1978, enrollment for the related Systems Engineering, Informatics, and Computer Sciences department opened; in 1982, this too became an institute (FISICC) and Suger was named FISICC's first dean. FISICC eventually became part of Galileo University. He also founded the School of Economics and Business Administration department. He founded and directed the Institute of Open Education (IDEA), which challenges the structure of traditional university learning, in 1994.
In 2000, he established Galileo University, one of the first science-technology universities in Guatemala. It was authorized by the Council of Private Higher Education in Guatemala the same year. In 2019, the university, located on Dr. Eduardo Suger Cofiño Street, named in his honor, enrolled 40,000 students. Since its foundation, Suger has served as Galileo University's rector.
Military
Suger has a long history of working with the military as both an educator and an engineer. He has been commended for his work in increasing military access to university through his Guatemalan Army Program at Galileo University.
During the Guatemalan Civil War, he was approached by Chief of Defense Staff General Marco Antonio Espinoza to engineer a computerized system that would help the government monitor revolutionaries and other dissidents. He was made an honorary colonel for his intelligence work. In 2021, USAC announced it intended to award Suger a Doctor Honoris Causa, also for his military service. A USAC student organization dedicated to honoring the memory of more than 700 students who were murdered or disappeared during the war pushed back, claiming that Suger's technological modernization within the military and his tracking system worsened the human rights violations that characterized the war. This sentiment echoes Albedrío magazine, the Pro-Human Rights Action Foundation, and Rafael Landívar University's student newspaper Plaza Pública, which suggest directly that he should be held accountable for his indirect influence on the violence. Other publications, such as InSight Crime and El Observador GT, as well as academics like Jennifer Schirmer (Historical Clarification Commission) and Hal Brands (Johns Hopkins), refer only to the intelligence systems he helped develop rather than to Suger by name.
Political career
Suger ran for President of Guatemala in 2003 (Authentic Integral Development), 2007 (Social Action Centre), and 2011 (CREO). In 2003, he received 2.23% of the vote; in 2007, 7.45%; and in 2011, he took third place with 16.4% of the vote. He has been criticized for engaging in politics only during campaign season, preferring instead to return to his academic career, and for not aligning himself with a political party unless it approaches him to be its presidential candidate.
If elected, one of Suger's main priorities would be to improve Guatemala's education system. He is a critic of the complexity of public higher education and claims it is often what delays institutional change. He encourages strengthening "academic productivity" and believes everyone deserves an equal right to education, even guerrilla soldiers. He planned to eliminate poverty by expanding the middle class and by heavily investing in and empowering those in the war-torn northwestern part of the country. He was also interested in turning Guatemala's centralized governmental structure into a federal republic and in strengthening the relationship between his home countries of Guatemala and Switzerland.
Personal life
In 1960, Suger met Regina Margarita Castillo Rodríguez during a visit to Guatemala while on holiday from ETH Zürich. They corresponded by mail until he returned in 1964, and they married on 11 January 1964 in Guatemala. The couple has five sons: José Eduardo, Carlos Enrique, Emilio Alejandro, Christian Andree, and Jean Paul. Jean Paul (Administrative Vice President), José Eduardo (Dean of FISICC), Christian (Director of Materials Distribution), and Carlos (Director of Operations) all work at Galileo. Regina died on 16 March 2021.
Suger's uncle was businessman José Cofiño Ubico, who cofounded Industrias Licoreras de Guatemala. His wife is descended from the founders of Cervecería Centro Americana.
Awards and honors
Publications
Suger published four editions (1971, 1974, 1978, 1981) of the textbook Introducción a la matemática moderna, which he wrote with Bernardo Morales Figueroa and Leonel Pinot Leiva.
Selected academic publications
References
1938 births
Living people
Guatemalan physicists
Galileo University faculty
Rafael Landívar University faculty
Universidad Francisco Marroquín faculty
Universidad de San Carlos de Guatemala faculty
Mathematical physicists
ETH Zurich alumni
People from Zürich
Scientists from Zürich
People from Guatemala City
University of Texas at Austin alumni
University of Texas at Austin College of Natural Sciences alumni
Guatemalan military personnel
Swiss military personnel
Military personnel from Zürich
20th-century Swiss military personnel
Recipients of the Olympic Order |
4888949 | https://en.wikipedia.org/wiki/History%20of%20podcasting | History of podcasting | Podcasting, previously known as "audioblogging", has roots dating back to the 1980s. With the advent of broadband Internet access and portable digital audio playback devices such as the iPod, podcasting began to catch hold in late 2004. Today there are more than 115,000 English-language podcasts available on the Internet, and dozens of websites available for distribution at little or no cost to the producer or listener.
Precursors
The Illusion of Independent Radio is a Russian samizdat "radio program" created in 1989 in Rostov-on-Don and distributed on magnetic tape and cassettes. It was the first Soviet Russian prototype of the media phenomenon that was widely developed in the 2000s as podcasting.
Before the advent of the internet, in the 1980s, RCS (Radio Computing Services) provided music and talk-related software to radio stations in a digital format. Before online digital music distribution, the MIDI format as well as the MBone (Multicast Backbone) network were used to distribute audio and video files. The MBone was a multicast network over the Internet used primarily by educational and research institutes, but it also carried audio talk programs.
Many other jukeboxes and websites in the mid-1990s provided systems for sorting and selecting music and audio files, talk, and segue announcements in various digital formats. A few websites provided audio subscription services. In 1993, in the early days of Internet radio, Carl Malamud launched Internet Talk Radio, which was the "first computer-radio talk show, each week interviewing a computer expert". It was distributed "as audio files that computer users fetch one by one". A 1993 episode of The Computer Chronicles described the concept as "asynchronous radio". Malamud said listeners could pause and restart the audio files at will, as well as skip content they did not like.
Some websites allowed downloadable audio shows, such as the comedy show The Dan & Scott Show, available on AOL.com from 1996. Additionally, in 1998, Radio Usach, the radio station of the University of Santiago, Chile, explored broadcasting talk shows online and on demand. However, downloaded music did not reach a critical mass until the launch of Napster, another system for aggregating music, though one without the subscription services provided by podcasting or video blogging aggregation software. Independently of the development of podcasting via RSS, a portable player and music download system had been developed at Compaq Research as early as 1999 or 2000. Called PocketDJ, it would have been launched as a service for the Personal Jukebox or a successor, the first hard-disk based MP3 player.
In 2001, Applian Technologies of San Francisco introduced Replay Radio (later renamed Replay AV), a TiVo-like recorder for Internet radio shows. Besides scheduling and recording audio, one of its features was a Direct Download link, which would scan a radio publisher's site for new files and copy them directly to a PC's hard disk. The first radio show to publish in this format was the WebTalkGuys World Radio Show, produced by Rob and Dana Greenlee.
Timeline
In September 2000, the first system that enabled the selection, automatic downloading and storage of serial episodic audio content on PCs and portable devices was launched from early MP3 player manufacturer, i2Go. To supply content for its portable MP3 players, i2Go introduced a digital audio news and entertainment service called MyAudio2Go.com that enabled users to download episodic news, sports, entertainment, weather, and music in audio format for listening on a PC, the eGo portable audio player, or other MP3 players. The i2GoMediaManager and the eGo file transfer application could be programmed to automatically download the latest episodic content available from user selected content types to a PC or portable device as desired. The service lasted over a year, but succumbed when the i2Go company ran out of capital during the dot-com crash and folded.
The RSS connection
In October 2000, the concept of attaching sound and video files in RSS feeds was proposed in a draft by Tristan Louis. The idea was implemented by Dave Winer, a software developer and an author of the RSS format. Winer had received other customer requests for "audioblogging" features and had discussed the enclosure concept (also in October 2000) with Adam Curry, a user of Userland's Manila and Radio blogging and RSS aggregator software.
Winer included the new functionality in RSS 0.92 by defining a new element called "enclosure", which would simply pass the address to a media aggregator. On January 11, 2001, Winer demonstrated the RSS enclosure feature by enclosing a Grateful Dead song in his Scripting News weblog.
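In concrete terms, the enclosure Winer defined is just an extra child element of an RSS item carrying the media file's address, size in bytes, and MIME type. The following minimal sketch (illustrative only, not Winer's actual feed; the URL and byte count are made-up placeholders) builds such an item with Python's standard library:

import xml.etree.ElementTree as ET

item = ET.Element("item")
ET.SubElement(item, "title").text = "Example audioblog post"
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/audio/episode1.mp3",  # address of the media file
    "length": "4859040",                             # file size in bytes
    "type": "audio/mpeg",                            # MIME type of the payload
})
print(ET.tostring(item, encoding="unicode"))

An aggregator reading the feed simply passes the url attribute to whatever downloads or plays the file, which is all the enclosure mechanism was designed to do.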
For its first two years, the enclosure element had relatively few users and many developers simply avoided using it. Winer's company incorporated both RSS-enclosure and feed-aggregator features in its weblogging product, Radio Userland, the program favored by Curry, audioblogger Harold Gilchrist and others. Since Radio Userland had a built-in aggregator, it provided both the "send" and "receive" components of what was then called "audioblogging". All that was needed for "podcasting" was a way to automatically move audio files from Radio Userland's download folder to an audio player (either software or hardware)—along with enough compelling audio to make such automation worth the trouble.
In June 2003, Stephen Downes demonstrated aggregation and syndication of audio files in his Ed Radio application. Ed Radio scanned RSS feeds for MP3 files, collected them into a single feed, and made the result available as SMIL or Webjay audio feeds.
The first on-demand radio show and the first podcast
In August 2000, the New England Patriots launched the Internet radio show PFW in Progress. It was a live show that was recorded and made available for on-demand download to visitors of Patriots.com, although this was not technically a podcast at the time, since the technology to automatically download new episodes, a key differentiator that sets podcasts apart from simple audio files downloaded manually, had not yet been invented. In 2005, after Apple added podcast support to iTunes, the show was also offered there as a bona fide podcast. Today, it is still in existence, under the name Patriots Unfiltered, and is available on all podcast platforms. However, it was not the first podcast; that honour goes to IT Conversations by Doug Kaye, which ran from 2003 to 2012.
In September 2003, the aforementioned Dave Winer created a special RSS-with-enclosures feed for his Harvard Berkman Center colleague Christopher Lydon's weblog, which previously had a text-only RSS feed. Lydon, a former New York Times reporter, Boston TV news anchor and NPR talkshow host, had developed a portable recording studio, conducted in-depth interviews with bloggers, futurists and political figures, and posted MP3 files as part of his Harvard blog. When Lydon had accumulated about 25 audio interviews, Winer gradually released them as a new RSS feed. Announcing the feed in his weblog, Winer challenged other aggregator developers to support this new form of content and provide enclosure support.
Not long after, Pete Prodoehl released a skin for the Amphetadesk aggregator that displayed enclosure links. Doug Kaye, who had been publishing MP3 recordings of his interviews at IT Conversations since June, created an RSS feed with enclosures, thus creating the first true podcast. Lydon's blog eventually became Radio Open Source; its accompanying podcast, titled Open Source (not to be confused with Adam Curry's Daily Source Code, which was also one of the first podcasts), is now the oldest still-running podcast.
BloggerCon
In October 2003, Winer and friends organized the first BloggerCon weblogger conference at Berkman Center. CDs of Lydon's interviews were distributed as an example of the high-quality MP3 content enclosures could deliver; Bob Doyle demonstrated the portable studio he helped Lydon develop; Harold Gilchrist presented a history of audioblogging, including Curry's early role, and Kevin Marks demonstrated a script to download RSS enclosures and pass them to iTunes for transfer to an iPod. Curry and Marks discussed collaborating.
Pushing audio to a device
After the conference, Curry offered his blog readers an RSS-to-iPod script (iPodder) that moved MP3 files from Userland Radio to iTunes, and encouraged other developers to build on the idea.
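The essential behaviour of such a script can be sketched in a few lines. The example below is a hypothetical reconstruction in Python, not Curry's original script; the feed URL, folder name, and the use of the third-party feedparser library are assumptions made for illustration. It checks a feed for enclosures and downloads any episode it has not already fetched, leaving the files in a folder that a media player can sync to a portable device.

import os
import urllib.request
import feedparser  # third-party library: pip install feedparser

FEED_URL = "http://example.com/rss"  # placeholder feed containing enclosure elements
DOWNLOAD_DIR = "podcasts"            # folder a media player could watch and sync

os.makedirs(DOWNLOAD_DIR, exist_ok=True)
feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    for enclosure in entry.get("enclosures", []):
        filename = os.path.join(DOWNLOAD_DIR, os.path.basename(enclosure.href))
        if not os.path.exists(filename):  # fetch only episodes not seen before
            urllib.request.urlretrieve(enclosure.href, filename)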
In November 2003, the company AudioFeast (later renamed PodBridge, then VoloMedia) filed a patent application for “Method for Providing Episodic Media” with the USPTO based on its work in developing the AudioFeast service launched in September 2004. Although AudioFeast did not refer to itself as a podcasting service and was not built on RSS, it provided a way of downloading episodic audio content through desktop software and portable devices, with a system similar to the MyAudio2Go.com service four years before it. (AudioFeast shut down its service in July 2005 due to the unwillingness of its free customers to pay for its $49.95 paid annual subscription service, and a lack of a strong competitive differentiation in the market with the emergence of free RSS podcatchers.)
In May 2004, Eric Rice, then of SlackStreet.com, along with Randy Dryburgh of VocalSpace.com launched Audioblog.com as the first commercial podcasting hosting service. Audioblog.com became Hipcast.com in June 2006 and has hosted hundreds of thousands of podcasts since.
In September 2004, the media-in-newsfeed idea was picked up by multiple developer groups. While many of the early efforts remained command-line based, the very first podcasting client with a graphic user interface was iPodderX (later called Transistr after a trademark dispute with Apple), developed by August Trometer and Ray Slakinski. It was released first for the Mac, then for the PC. Shortly thereafter, another group (iSpider) rebranded their software as iPodder and released it under that name as Free Software (under GPL). The project was terminated after a cease and desist letter from Apple (over iPodder trademark issues). It was reincarnated as Juice and CastPodder.
The name
Writing for The Guardian in February 2004, journalist Ben Hammersley suggested the term "podcasting" as a name for the nascent technology. Seven months later, Dannie Gregoire used the term "podcasting" to describe the automatic download and synchronization of audio content (as opposed to the broadcasting of digital audio, which is how the word is usually used today); he also registered several "podcast"-related domains (e.g. podcast.net). The first documented use of "podcasting" in the sense known today (i.e., broadcasting rather than downloading) came in an episode of the Evil Genius Chronicles on September 18, 2004, in which Dave Slusher also mentioned the emerging technology of torrenting and pondered whether he should monetise the podcast (and, if so, whether through sponsorship or voluntary donations, a dilemma many professional podcasters face today). As of March 2021, the recording is still available to be streamed or downloaded.
The use of "podcast" by Gregoire was picked up by podcasting evangelists such as Slusher, Winer and Curry, and entered common usage. Also in September, Adam Curry launched a mailing list; then Slashdot had a 100+ message discussion, bringing even more attention to the podcasting developer projects in progress.
On September 28, 2004, blogger and technology columnist Doc Searls began keeping track of how many "hits" Google found for the word "podcasts". His first query reportedly returned 24 results, and within days the count had risen to 526. Google Trends marks the beginning of searches for "podcast" at the end of September. On October 1, 2004, there were 2,750 hits on Google's search engine for the word "podcasts", and the number continued to double every few days.
By October 11, 2004, capturing the early distribution and variety of podcasts was more difficult than counting Google hits. However, by the end of October, The New York Times had reported on podcasts across the United States and in Canada, Australia and Sweden, mentioning podcast topics from technology to veganism to movie reviews.
Wider notice
USA Today told its readers about the "free amateur chatfests" the following February, profiling several podcasters, giving instructions for sending and receiving podcasts, and including a "Top Ten" list from one of the many podcast directories that had sprung up. Those Top Ten programs gave further indication of podcast topics: four were about technology (including Curry's Daily Source Code, which also included music and personal chat), three were about music, one about movies, one about politics, and—at the time number one on the list—The Dawn and Drew Show, described as "married-couple banter", a program format that (as USA Today noted) was popular on American broadcast radio in the 1940s (e.g. Breakfast with Dorothy and Dick). After Dawn and Drew, such "couplecasts" became quite popular among independent podcasts, the most notable being the London couple Sowerby and Luff (consisting of comedy writers Brian West (Luff) and Georgina Sowerby), whose talk show The Big Squeeze quickly achieved a global audience via the podcast Comedy 365. On October 18, 2004, the number of hits on Google's search engine for the word "podcasts" ballooned to more than 100,000 after being just 24 results three weeks prior.
By October 2004, detailed how-to-podcast articles had begun to appear online, and a month later, Liberated Syndication launched the first Podcast Service Provider, offering storage, bandwidth, and RSS creation tools. This was the same month that Podtrac started providing its free download tracking service and audience demographics survey to the podcasting industry. "Podcasting" was first defined in Wikipedia. In November 2004, podcasting networks started to appear on the scene, with podcasters affiliating with one another. One of the earliest mainstream-media adopters of on-demand audio (although not strictly a podcast) was the BBC, with the BBC World Service show Go Digital, in August 2001. The first domestic BBC show to be podcast was In Our Time, made available in November 2004.
Apple adds podcasts to iTunes
In June 2005, Apple added podcasting to its iTunes 4.9 music software and built a directory of podcasts at its iTunes Music Store. The new iTunes could subscribe to, download and organize podcasts, which made a separate aggregator application unnecessary for many users. Apple also promoted the creation of podcasts using its GarageBand and QuickTime Pro software and the MP4 format instead of MP3. Prior to iTunes' integration, acquiring and organising podcasts required dedicated "podcatching" software that was often clunky and intimidating for the average user.
In July 2005, U.S. President George W. Bush became a podcaster of sorts when the White House website added an RSS 2.0 feed to the previously downloadable files of the president's weekly radio addresses. Also in July, the first People's Choice Podcast Awards were held during the Podcast Expo, with awards given in 20 categories. On September 28, 2005, exactly a year after Doc Searls began tracking hits for the word "podcasts", Google found more than 100 million hits for the word. In November 2005, the first Portable Media Expo and Podcasting Conference was held at the Ontario Convention Center in Ontario, California. The annual conference later changed its name to the Podcast and New Media Expo and stopped being held in 2015. On December 3, 2005, "podcast" was named the 2005 word of the year by the New Oxford American Dictionary and was added to the dictionary in 2006.
Expansion
In February 2006, following London radio station LBC's successful launch of the first premium-podcasting platform, LBC Plus, there was widespread acceptance that podcasting had considerable commercial potential. UK comedian Ricky Gervais, whose first season of The Ricky Gervais Show became a big hit, launched a new series of the popular podcast. The second series of the podcast was distributed through audible.co.uk and was the first major podcast to charge consumers to download the show (at a rate of 95 pence per half-hour episode). The first series of The Ricky Gervais Show podcast had been freely distributed by the Positive Internet Company and marketed through The Guardian newspaper's website, and it was the world's most successful podcast for several years, eventually gaining more than 300 million unique downloads by March 2011. Even in its new subscription format, The Ricky Gervais Show was regularly the most-downloaded podcast on iTunes. The Adam Carolla Show claimed a new Guinness world record, with total downloads approaching 60 million, but Guinness failed to acknowledge that Gervais's podcast had more than 5 times as many downloads as Carolla's show at the time that this new record was supposedly set.
In February 2006, LA podcaster Lance Anderson became one of the first to take a podcast on a live venue tour. The Lance Anderson Podcast Experment (sic) included a sold-out extravaganza in The Pilgrim, a central Liverpool (UK) venue (February 23, 2006), followed by a theatrical event at The Rose Theatre, Edge Hill University (February 24, 2006), which included appearances by Mark Hunter from The Tartan Podcast, Jon and Rob from Top of the Pods, Dan Klass from The Bitterest Pill via video link from Los Angeles, and live music from The Hotrod Cadets. Anderson was also invited to take part in the first-ever Podcast Forum at CARET, the Centre for Applied Research in Educational Technologies at the University of Cambridge (February 21, 2006). At this event, organised and supported by Josh Newman, the university's Apple Campus Rep, Anderson was joined by Dr. Chris Smith from the Naked Scientists podcast; Debbie McGowan, an Open University lecturer and advocate for podcasting in education; and Nigel Paice, a professional music producer and podcasting tutor. In March 2006, Canadian Prime Minister Stephen Harper became the second head of government to issue a podcast, the Prime Minister of Canada's Podcast (George W. Bush technically being the first, back in July 2005). In July 2009, the company VoloMedia was awarded the "Podcast patent" by the USPTO as patent number 7,568,213. Dave Winer, the co-inventor of podcasting (with Adam Curry), has pointed out that his invention predated this patent by two years.
On February 2, 2006, Virginia Tech (Virginia Polytechnic Institute and State University) launched the first regular schedule of podcast programming at the university. The programming, launched as part of Virginia Tech's "Invent the Future" campaign, made it the first major American university with four regularly scheduled podcasts.
In April 2006, comedy podcast Never Not Funny began when Matt Belknap of ASpecialThing Records interviewed comedian Jimmy Pardo on the podcast for his popular alternative comedy forum A Special Thing. The two had previously discussed producing a podcast version of Jimmy's Los Angeles show "Running Your Trap", which he hosted at the Upright Citizens Brigade Theatre, but they hit it off so well on AST Radio that Pardo said "This is the show." Shortly after, Never Not Funny started simulcasting both a podcast stream and a paid video version. The podcast still uses this format, releasing two shows a week, one free and one paid, along with a paid video feed.
In October 2006, the This American Life radio program began to offer a podcast version to listeners. Since debuting, This American Life has consistently been one of the most-listened-to podcasts, averaging around 2.5 million downloads per episode.
In March 2007, after working as on-air talent at KYSR (STAR) in Los Angeles, California, and then being fired, Jack and Stench started their own subscription-based podcast. At $5.00 per subscription, subscribers had access to a one-hour podcast, free of any commercials. They held free local events at bars, ice cream parlors and restaurants all around Southern California. With a successful run of 12 years and over 2,700 episodes, the Jack and Stench Show is among the longest-running monetized podcasts.
In March 2007, the Cambridge CARET Centre also helped to launch Women's Parliamentary Radio, the first as-live podcast channel for women politicians in the UK and globally. A former BBC correspondent and political editor in the East, Boni Sones OBE, worked with three other broadcast journalists—Jackie Ashley, Deborah McGurran, and Linda Fairbrother—to create an online radio station where women MPs of all parties could be interviewed impartially. The MP3 files could be streamed or downloaded. Their resulting 550 interviews over 15 years can now be found in one of four audio archives nationally at the British Library, the London School of Economics, The History of Parliament Trust and the Churchill Archives at the University of Cambridge. Sones has also written four books about these podcast interviews and archives, which are in all the major libraries in the UK.
The Adam Carolla Show started as a regular weekday podcast in March 2009; by March 2011, 59.6 million episodes had been downloaded in total, claiming a record; however, as previously mentioned, Gervais's podcast had already received five times Carolla's downloads by the time the record was supposedly set. The BBC noted in 2011 that more people (eight million in the UK or about 16% of the population, with half listening at least once a week—a similar proportion to the USA) had downloaded podcasts than had used Twitter.
Besides the aforementioned Adam Carolla Show, 2009 saw a huge influx of other popular new comedy podcasts, including massively successful talk-style shows with a comedic bent such as WTF with Marc Maron, The Joe Rogan Experience, and the David Feldman Show. 2009 also saw the launch of the surrealist comedy show Comedy Bang! Bang! (which was known as Comedy Death-Ray Radio at the time), which was later turned into a TV show with the same name.
With a run of eight years (as of October 2013), the various podcasts provided by Wrestling Observer/Figure Four Online, including Figure Four Daily and the Bryan and Vinny Show with host Bryan Alvarez, and Wrestling Observer Radio with hosts Alvarez and Dave Meltzer, have produced over 6,000 monetized episodes at a subscription rate of $10.99 per month. Their subscription podcast model launched in June 2005. Alvarez and Meltzer were co-hosts in the late 1990s at Eyada.com, the first Internet-exclusive live streaming radio station, broadcasting out of New York City.
In 2014, This American Life launched the first season of their Serial podcast. The podcast was a surprise success, achieving 68 million downloads by the end of Season 1 and becoming the first podcast to win a Peabody Award. The program was referred to as a "phenomenon" by media outlets and popularized true crime podcasts. True crime programs such as My Favorite Murder, Crimetown, and Casefile were produced after the release of Serial and each of these titles became successful in their own right. From 2012 to 2013, surveys showed that the number of podcast listeners had dropped for the first time since 2008. However, after Serial debuted, audience numbers rose by 3%.
Podcasting reached a new stage of growth in 2017 when The New York Times debuted The Daily news podcast. The Daily is designed to match the fast pace of modern news, and the show features original reporting and recordings of the newspaper's top stories. As of May 2019, it has the highest unique monthly US audience of any podcast.
Download records
Due to fragmented delivery mechanisms and various other factors, it is difficult for outside observers to determine a precise listenership figure for any one podcast (although podcasters themselves can generally obtain fairly accurate data, which is especially useful for securing advertising contracts). As of December 2018, Serial was believed by some sources to be the most downloaded podcast of all time, with 420 million total downloads, surpassing Gervais's 300 million figure from 2011. However, Stuff You Should Know has accrued more than a billion downloads, and others have also hit this figure. According to Podtrac, NPR is the most popular podcast publisher, with over 175 million downloads and streams every month; however, Joe Rogan claimed in 2019 that his podcast alone was receiving 190 million downloads a month, which, if true, would make his show the most downloaded podcast of all time in terms of both average listenership and total downloads. Rogan later signed a $100 million licensing deal with Spotify on the strength of his success with the medium.
Nielsen and Edison Research reported in April 2019 that they had logged 700,000 active podcasts worldwide. Their research also revealed that, per capita, South Korea leads the world in podcast listeners, with 58% of South Koreans listening to podcasts every month. For comparison, in 2019, 32% of Americans had listened to podcasts in the last month. In 2020, 24% of Americans had listened to podcasts weekly. Comedy is the most popular podcast genre in the United States. There are more than 1,700,000 shows and nearly 44 million episodes as of January 19, 2021. Podtrac reports iHeartRadio's shows had more than 243 million downloads. IAB and PWC project that U.S. podcast advertising revenues will surpass $1 billion by 2021.
Video podcasting
A video podcast or vodcast is a podcast that contains video content. Web television series are often distributed as video podcasts. Dead End Days, a serialized dark comedy about zombies released from 31 October 2003 through 2004, is commonly believed to be the first video podcast. Never Not Funny was a pioneer in providing video content in the form of a podcast. H3H3's H3 Podcast and The Joe Rogan Experience are two examples among many video podcasts, many of which are now hosted on YouTube rather than distributed as part of a feed (which was much more common when video podcasting was a brand-new medium). The key difference between a vlog and a video podcast is length: while a vlog could technically be distributed as a video podcast, the term generally refers to long-form, conversational-style videos.
Popularization
Business model studies
Classes of MBA students have been commissioned to research podcasting and compare possible business models, while venture capital has flowed to influential content providers.
Podnography
As is often the case with new technologies, pornography has become a part of the scene, producing what is sometimes called podnography.
Podsafe music
The growing popularity of podcasting introduced a demand for music available for use on the shows without significant cost or licensing difficulty. Out of this demand, a growing number of tracks, by independent as well as signed acts, are now being designated "podsafe".
Use by conventional media
Podcasting has been given a major push by conventional media. (See Podcasting by traditional broadcasters.)
Broadcast media
Podcasting has presented both opportunities and challenges for mainstream radio outlets, which on one hand see it as an alternative medium for their programs while on the other hand struggle to identify its unique affordances and subtle differences. In a famous example of the way online statistics can be misused by those unused to the nuances of the online world, marketing executives from the ABC in Australia were unsure of how to make sense of why Digital Living, at that stage a little-known podcast from one of their local stations, outrated all of their expensively produced shows. It turned out that a single segment on Blu-ray had been downloaded a massive 150,000 times in one day from a single location in China.
Print media
Podcasting has been picked up by some print media outlets, which supply their readers with spoken versions of their content. One of the first examples of a print publication to produce an audio podcast to supplement its printed content was the international scientific journal Nature. The Nature Podcast was set up in October 2005 by Cambridge University's award-winning "Naked Scientist", Chris Smith, who produces and presents the weekly show.
Although firm business models have yet to be established, podcasting represents a chance to bring additional revenue to a newspaper through advertising, subscription fees and licensing.
Podcamps
Chris Brogan and Christopher S. Penn launched the PodCamp unconference series, aimed at bringing together people interested in blogging, social media, social networking, podcasting, and video on the net; for this work, Brogan won the Mass High Tech All Stars award for 2008.
Podcast Movement
Veteran podcaster Gary Leland joined forces with Dan Franks and Jared Easley to form a new international conference for podcasters in early 2014 called Podcast Movement. Unlike other new media events, Podcast Movement was the first conference of its size in over a decade that was focused specifically on podcasting, and has tracks for both new and experienced podcast creators, as well as industry professionals. The sixth annual conference is expected to be attended by over 3,000 podcasters, and is scheduled for August 2019 in Orlando, FL.
Adaptations
Some popular podcasts, such as Lore, Homecoming, My Brother, My Brother, and Me and Serial, have been adapted as films or television series.
Coping with growth
While podcasting's innovators took advantage of the sound-file synchronization feature of Apple Inc.'s iPod and iTunes software—and included "pod" in the name—the technology was always compatible with other players and programs. Apple was not actively involved until mid-2005, when it joined the market on three fronts: as a source of "podcatcher" software, as publisher of a podcast directory, and as provider of tutorials on how to create podcasts with Apple products GarageBand and QuickTime Pro. Apple CEO Steve Jobs demonstrated creating a podcast during his January 10, 2006 keynote address to the Macworld Conference & Expo using new "podcast studio" features in GarageBand 3.
When it added a podcast-subscription feature to its June 28, 2005, release of iTunes 4.9, Apple also launched a directory of podcasts at the iTunes Music Store, starting with 3,000 entries. Apple's software enabled AAC-encoded podcasts to use chapters, bookmarks, external links, and synchronized images displayed on iPod screens or in the iTunes artwork viewer. Two days after release of the program, Apple reported one million podcast subscriptions.
Some podcasters found that exposure to iTunes' huge number of downloaders threatened to make great demands on their bandwidth and related expenses. Possible solutions were proposed, including the addition of a content delivery system, such as Liberated Syndication; Podcast Servers; Akamai; a peer-to-peer solution, BitTorrent; or use of free hosting services, such as those offered by the Internet Archive or Buzzsprout.
Since September 2005, a number of services began featuring video-based podcasting, including Apple (via its iTunes Music Store), the Participatory Culture Foundation, and Loomia. These services handle both audio and video feeds.
See also
List of podcast clients
Uses of podcasting
Enhanced podcast
References
Podcasting |
26078570 | https://en.wikipedia.org/wiki/WinPenPack | WinPenPack | winPenPack (often shortened to wPP) is an open-source software application suite for Windows. It is a collection of open source applications that have been modified to be executed directly from a USB flash drive (or any other removable storage device) without prior installation. WinPenPack programs are distributed as free software, and can be downloaded individually or grouped into suites.
History
The creator, Danilo Leggieri, put the site winPenPack.com online on 23 November 2005. The project and the associated community then grew quickly. Since that date, 15 new versions and hundreds of open-source portable applications were released. The project is well known in Italy and abroad. It is hosted on SourceForge. The collections are regularly distributed bundled with popular PC magazines in Italy and worldwide. A thriving community of users is actively contributing to the growth of the project. The site currently hosts various projects created and suggested by forum members, and is also used for bug reporting and suggestions.
Press coverage
Since May 2006, winPenPack has been covered by most major Italian PC publications including: PC Professionale, Win Magazine, Computer Magazine, Total Computer, Internet Genius, Quale Computer, Computer Week, and many others.
Features
Portable software
All the applications available in the winPenPack suites are portable applications.
Portable applications:
do not require installation
can be executed from any USB flash drive, and from any PC hard disk drive (internal or external)
leave no traces of their use in the Windows applications registry or any other user folder in the host PC hard drive
do not conflict with the programs installed in the host PC hard drive (for example, X-Firefox executed from a USB flash drive does not modify (or conflict with) the counterpart Firefox program installed on the host PC)
X-Software
X-Software is software that has been modified with X-Launcher to be executed as if it were a portable application. X-Launcher is a specific application which executes other applications in "portable mode" by means of recreating their original operating environment. A few examples of X-Software include X-Firefox (counterpart to Mozilla Firefox), X-Thunderbird (Mozilla Thunderbird), X-Gimp (GIMP), and others.
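The general idea behind such a launcher can be illustrated with a short sketch. The Python example below is not winPenPack's actual X-Launcher implementation; it only shows the underlying trick of pointing an application's per-user configuration at folders on the removable drive before starting it, so that nothing is written to the host PC's user profile. The executable and folder names are hypothetical.

import os
import subprocess

drive_root = os.path.dirname(os.path.abspath(__file__))  # folder on the USB stick
portable_env = dict(
    os.environ,
    APPDATA=os.path.join(drive_root, "User", "AppData"),            # per-user settings
    LOCALAPPDATA=os.path.join(drive_root, "User", "LocalAppData"),  # local caches
    HOME=os.path.join(drive_root, "User", "Home"),                  # home directory
)
subprocess.run([os.path.join(drive_root, "Bin", "app.exe")], env=portable_env)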
Main menu functions
The winPenPack main menu can be executed from any removable storage device (including, and especially, from USB flash drives). In each winPenPack suite, the main menu is pre-configured to list all available programs (including programs belonging to other suites), and can be edited at any time. New programs can be added to the menu either manually (by means of the "Add" options or by drag-and-dropping them onto the menu) or automatically (automatic addition is available only for X-Software, not for generic portable applications).
Notes
External links
Application launchers
Computing websites
Free software distributions
Portable software suites
Portable software |
50871 | https://en.wikipedia.org/wiki/Router%20%28woodworking%29 | Router (woodworking) | The router is a power tool with a flat base and a rotating blade extending past the base. The spindle may be driven by an electric motor or by a pneumatic motor. It routs (hollows out) an area in hard material, such as wood or plastic. Routers are used most often in woodworking, especially cabinetry. They may be handheld or affixed to router tables. Some woodworkers consider the router one of the most versatile power tools.
There is also a traditional hand tool known as a router plane, a form of hand plane with a broad base and a narrow blade projecting well beyond the base plate.
CNC wood routers add the advantages of computer numerical control (CNC).
The laminate trimmer is a smaller, lighter version of the router. Although it is designed for trimming laminates, it can also be used for smaller general routing work.
Rotary tools can also be used similarly to routers with the right bits and accessories (such as plastic router bases).
History
Before power routers existed, the router plane was often used for the same purpose.
An incremental step toward modern power routers was the foot-pedal operated router, such as the Barnes Former/Shaper, available in 1877. Barnes patented a reversible rotary cutting head in 1889.
The first portable power router was patented in 1906 by George Kelley and marketed by the Kelley Electric Machine Company.
The early electric routers were quite heavy, and only nominally "portable."
In 1915 Oscar and Rudy Onsrud produced an air-powered router, which they referred to as a Jet Motor Hand Router.
In the 1930s, Stanley Works acquired a line of portable routers from Roy L. Carter, and marketed an 18000 RPM electric hand router similar to modern routers.
Further refinement produced the plunge router, invented by ELU (now part of DeWalt) in Germany around 1949.
Modern routers are often used in place of traditional moulding planes or spindle moulder machines for edge decoration (moulding) of timber.
Process
Routing is a high speed process of cutting, trimming, and shaping wood, metal, plastic, and a variety of other materials.
Chip formation
Routing and milling are conceptually similar, and end mills can be used in routers, but routing wood is different from milling metal in terms of the mechanics. Chip formation is different, so the optimal tool geometry is different. Routing is properly applied to relatively weak and brittle materials, typically wood. As these materials are weak in small sections, routers can run at extremely high speeds, so even a small router may cut rapidly. Owing to inertia at these high speeds, the normal wood cutting mechanism of Type I chips cannot take place. The cutter edge angle is blunt, approaching 90°, and so a Type III chip forms, with waste material produced as fine dust. This dust is a respiratory hazard, even in benign materials. The forces against the cutter are light, so routers may be hand-held.
When milling metals, the material is relatively ductile, although remaining strong even at a small scale. A Type II chip forms, and waste may be produced as continuous swarf. Cutter forces are high, so milling machines must be robust and rigid, usually substantial constructions of cast iron.
Intermediate materials, such as plastics and sometimes soft aluminium, may be cut by either method, though routing aluminium is usually more of an improvised expedient than a production process, and is noisy and hard on tools.
Process characteristics
Routing is usually limited to soft metals (aluminium etc.) and rigid non-metals. Specially designed cutters are used for a variety of patterns, cuts, and edging. Both hand controlled and machine controlled/aided routers are common today.
Workpiece geometry
Routing is a shaping process that produces finished edges and shapes. Some materials that are difficult to shape with other processes, such as fiber-glass, Kevlar, and graphite, can be shaped and finished neatly via various routing techniques. Apart from finished edges and shaping, cutaways, holes, and contours can also be shaped using routers.
Tools and equipment
The setup includes an air- or electric-driven router, a cutting tool often referred to as a router bit, and a guide template. The router can also be fixed to a table or connected to radial arms, which allow it to be controlled more easily.
In general there are three types of cutting bits or tools.
Fluted cutters (used for edging and trimming)
Profile cutters (used for shaping and trimming)
Helical cutters (used on easily machined materials, for drilling, shaping, trimming)
Safety glasses and ear protection should be worn at all times when using a router.
Only trained adults, or trained adolescents with supervision, should use the router.
Moulding
The spindle router is positioned at the finer end of the scale of work done by a moulding spindle. That is to say it is able to cut grooves, edge moulding, and chamfer or radius the edge of a piece of wood. It is also possible to use it for cutting some joints. The shape of cut that is created is determined by the size and shape of the bit (cutter) held in the collet and the depth by the depth adjustment of the sole plate.
Variety of routers
There are a variety of router styles: some are plunge, some are D-handled, and some are double knob handled. Manufacturers produce routers for different kinds of woodwork, such as plunge routers, fixed-base wood routers, combo routers, variable-speed routers, laminate trimmers, and CNC wood routers. Most better-quality routers now have variable speed controls and plunge bases that can also be locked in place so the router can be used as a fixed-base router. Some have a soft-start feature, meaning they build up speed gradually. This feature is particularly desirable for routers with a large cutter. Holding a 3-horsepower router and turning it on without a soft-start is potentially dangerous, due to the torque of the motor; holding it with two hands is a must. For routers with a toggle-type on/off switch, it is important to verify the switch is in the off position before plugging the tool in. For safety, larger router cutters can usually only be used in a router that is mounted in a router table. This makes the tool even more versatile and stable.
The purpose of multiple handle arrangements depends on the bit. Control is easier with different configurations. For example, when shaping the edge of a fine table top, many users prefer a D handle, with variable speed, as it seems to permit better control and burning the wood can be minimized.
Routers have many uses. With the help of the multitude of jigs and various bits, they are capable of producing dovetails, mortises, and tenons, moldings of infinite varieties, dados, rabbets/rebates, raised-panel doors and frames, cutting circles, and so much more.
Features of the modern spindle router
The tool usually consists of a base housing a vertically mounted universal electric motor with a collet on the end of its shaft. The bit is height-adjustable to allow protrusion through an opening in a flat sole plate, usually via adjusting the motor-mounting height (the mechanism of adjustment is widely varied among manufacturers). Control of the router is derived from a handle or knob on each side of the device, or by the more recently developed "D-handle".
There are two standard types of router—plunge and fixed. When using a plunge-base router, the sole of the base is placed on the face of the work with the cutting bit raised above the work, then the motor is turned on and the cutter is lowered into the work. With a fixed-base router, the cut depth is set before the tool is turned on. The sole plate is then either rested flat on the workpiece overhanging the edge so that the cutting bit is not contacting the work (and then entering the work from the side once the motor is turned on), or the sole plate is placed at an angle with the bit above the work and the bit is "rocked" over into the work once the motor is turned on. In each case, the bit cuts its way in, but the plunge router does it in a more refined way, although the bit used must be shaped so it bores into the wood when lowered.
The baseplate (sole plate) is generally circular (though this, too, varies by individual models) and may be used in conjunction with a fence attached to the base, which then braces the router against the edge of the work, or via a straight-edge clamped across the work to obtain a straight cut. Other means of guiding the machine include the template guide bushing secured in the base around the router cutter, or router cutters with built-in guide bearings. Both of these run against a straight edge or shaped template. Without this, the varying reaction of the wood against the torque of the tool makes it impossible to control with the precision normally required.
Table mounted router
A router may be mounted upside down in a router table or bench. The router's base plate is mounted to the underside of the table, with a hole allowing the bit to protrude above the table top. This allows the work to be passed over the router, rather than passing the router over the work. This has benefits when working with smaller objects and makes some router operations safer to execute. A router table may be fitted with a fence, fingerboards and other work-guiding accessories to make the operation safer and more accurate.
A simple router table consists of a rigid top with the router bolted or screwed directly to the underside. More complex solutions can be developed to allow the router to be easily removed from the table as well as facilitate adjusting the router's bit height using a lift mechanism; there is a wide range of commercially available systems.
In this mode, the router can perform tasks similar to a spindle moulder. For smaller, lighter jobs, the router used in this way can be more convenient than the spindle moulder, with the task of set up being somewhat faster. There is also a much wider range of bit profiles available for the router, although the size is limited.
The router table is usually oriented so that the router bit is vertical and the table over which the work is passed is horizontal. Variations on this include the horizontal router table, in which the table remains horizontal but the router is mounted vertically above the table, so that the router bit cuts from the side. This alternative is for edge operations, such as panel raising and slot cutting.
Available cutters
Router bits come in a large variety of designs to create either decorative effects or joinery aids. Generally, they are classified as either high-speed steel (HSS) or carbide-tipped, however some recent innovations such as solid carbide bits provide even more variety for specialized tasks.
Aside from the materials they are made of, bits can be classified as edge bits or non-edge bits, and whether the bit is designed to be anti-kickback. Edge bits have a small wheel bearing to act as a fence against the work in making edge moldings. These bearings can be changed by using commercially available bearing kits. Changing the bearing, in effect, changes the diameter of the cutting edge. This is especially important with rabbeting/rebating bits. Non-edge bits require the use of a fence, either on a router table or attached to the work or router. Anti-kickback bits employ added non-cutting bit material around the circumference of the bit's shoulders which serves to limit feed-rate. This reduces the chance that the workpiece is pushed too deeply into the bit (which would result in significant kickback from the cutting edge being unable to compensate).
Bits also differ by the diameter of their shank, with 1/2-inch, 12 mm, 10 mm, 3/8-inch, 8 mm, 1/4-inch and 6 mm shanks (ordered from thickest to thinnest) being the most common. Half-inch bits cost more but, being stiffer, are less prone to vibration (giving smoother cuts) and are less likely to break than the smaller sizes. Care must be taken to ensure the bit shank and router collet sizes match exactly. Failure to do so can cause permanent damage to either or both and can lead to the dangerous situation of the bit coming out of the collet during operation. Many routers come with removable collets for the popular shank sizes (in the US, 1/4 in and 1/2 in; in Great Britain, 1/4 in, 8 mm and 1/2 in; and metric sizes in Europe), although in the United States the 8 mm and other less common sizes are often only available at extra cost.
Many modern routers allow the speed of the bit's rotation to be varied. A slower rotation allows bits of larger cutting diameter to be used safely. Typical speeds range from 8,000 to 30,000 rpm.
Router bits can be made to match almost any imaginable profile. Custom router bits can be ordered. They are especially beneficial for home restoration projects, where production of the original trim and molding has been discontinued.
Sometimes complementary bits come in sets designed to facilitate the joinery used in frame and panel construction. One bit is designed to cut the groove in the rail and stile pieces while the other shapes the edge of the panel to fit in the groove.
CNC router
A CNC wood router is a computer-controlled machine to which the router or spindle mounts. The CNC machine can be a moving-gantry style, where the table is fixed and the router spindle moves over it; a fixed-bridge design, where the table moves underneath the router spindle; or a hand-held style, where the operator moves the machine to the area to be cut and the machine controls the fine adjustments. CAD/CAM software is used to model the part to be created in the computer and then to create a tool path for the machine to follow to cut out the part. The CNC moves along three axes (X-Y-Z). Most CNC routers have a three-motor drive system utilizing either servo or stepper motors. More advanced routers use a four-motor system for added speed and accuracy.
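As a rough illustration of what a CAM-generated tool path reduces to, the sketch below emits simplified G-code for a single rectangular pass. It is not the output of any particular CAD/CAM package, and the feed rate, cutting depth, and safe height are arbitrary placeholder values; real tool paths also handle cutter diameter compensation, multiple depth passes, and ramped plunges.

def rectangle_gcode(width, height, depth, feed=1000):
    # Emit one rectangular profile pass in millimetres, starting from the origin.
    lines = [
        "G21",                    # set units to millimetres
        "G0 Z5",                  # rapid move to a safe height above the work
        "G0 X0 Y0",               # rapid move to the starting corner
        f"G1 Z{-depth} F{feed}",  # plunge to cutting depth at the given feed rate
        f"G1 X{width}",           # cut along the bottom edge
        f"G1 Y{height}",          # cut up the right edge
        "G1 X0",                  # cut back along the top edge
        "G1 Y0",                  # cut down the left edge, closing the loop
        "G0 Z5",                  # retract to the safe height
    ]
    return "\n".join(lines)

print(rectangle_gcode(width=100, height=60, depth=3))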
Similar tools
A tool similar to a router, but designed to hold smaller cutting bits—thereby making it easier to handle for small jobs—is a laminate trimmer.
A related tool, called a spindle moulder (UK) or shaper (North America), is used to hold larger cutter heads and can be used for deeper or larger-diameter cuts. Another related machine is the pin router, a larger static version of the hand electric router but normally with a much more powerful motor and other features such as automatic template copying.
Some profile cutters use a cutting head reminiscent of a spindle router. These should not be confused with profile cutters used for steel plate which use a flame as the cutting method.
See also
Laminate trimmer
Biscuit joiner
End mill
Drill bit
References
Notes
Bibliography
Todd, Robert H.; Allen, Dell K.; Alting, Leo (1994). Manufacturing Process Reference Guide. Industrial Press Inc., New York.
External links
Woodworking router demonstration
Woodworking hand-held power tools
Woodworking tools |
1590932 | https://en.wikipedia.org/wiki/United%20States%20Southern%20Command | United States Southern Command | The United States Southern Command (USSOUTHCOM), located in Doral, Florida in Greater Miami, is one of the eleven unified combatant commands in the United States Department of Defense. It is responsible for providing contingency planning, operations, and security cooperation for Central and South America, the Caribbean (except U.S. commonwealths, territories, and possessions), their territorial waters, and for the force protection of U.S. military resources at these locations. USSOUTHCOM is also responsible for ensuring the defense of the Panama Canal and the canal area.
Under the leadership of a four-star Commander, USSOUTHCOM is organized into a headquarters with six main directorates, component commands and military groups that represent SOUTHCOM in the region. USSOUTHCOM is a joint command of more than 1,201 military and civilian personnel representing the United States Army, Navy, Air Force, Marine Corps, Coast Guard, and several other federal agencies. Civilians working at USSOUTHCOM are, for the most part, civilian employees of the Army, as the Army is USSOUTHCOM's Combatant Command Support Agent. The Services provide USSOUTHCOM with component commands which, along with their Joint Special Operations component, two Joint Task Forces, one Joint Interagency Task Force, and Security Cooperation Offices, perform USSOUTHCOM missions and security cooperation activities. USSOUTHCOM exercises its authority through the commanders of its components, Joint Task Forces/Joint Interagency Task Force, and Security Cooperation Organizations.
Area of Responsibility
The USSOUTHCOM Area of Responsibility (AOR) encompasses 32 nations (19 in Central and South America and 13 in the Caribbean), of which 31 are democracies, and 14 U.S. and European territories. As of October 2002, the area of focus covered 14.5 million square miles (23.2 million square kilometers).
The United States Southern Command area of interest includes:
The land mass of Latin America south of Mexico
The waters adjacent to Central and South America
The Caribbean Sea, its 12 island nations and European territories
A portion of the Atlantic Ocean
Components
USSOUTHCOM accomplishes much of its mission through its components: four service component commands (one for each service), one command specializing in Special Operations missions, and three additional joint task forces:
U.S. Army South (Sixth Army)
United States Army South (ARSOUTH) forces include aviation, intelligence, communication, and logistics units. Located at Fort Sam Houston, Texas, it supports regional disaster relief and counterdrug efforts. ARSOUTH also exercises oversight, planning, and logistical support for humanitarian and civic assistance projects throughout the region in support of the USSOUTHCOM Theater Security Cooperation Strategy. ARSOUTH provides Title 10 and Executive Agent responsibilities throughout the Latin American and Caribbean region. In 2013, around four thousand troops were deployed in Latin America.
Air Forces Southern (Twelfth Air Force)
Located at Davis-Monthan Air Force Base, Arizona, AFSOUTH consists of a staff; a Falconer Combined Air and Space Operations Center for command and control of air activity in the USSOUTHCOM area; and an Air Force operations group responsible for Air Force forces in the area. AFSOUTH serves as the executive agent for forward operating locations; provides joint/combined radar surveillance architecture oversight; provides intra-theater airlift; and supports USSOUTHCOM's Theater Security Cooperation Strategy through regional disaster relief exercises and counter-drug operations. AFSOUTH also provides oversight, planning, execution, and logistical support for humanitarian and civic assistance projects and hosts a number of Airmen-to-Airmen conferences. Twelfth Air Force is also leading the effort to implement the Chief of Staff of the Air Force's Warfighting Headquarters (WFHQ) concept. The WFHQ is composed of a command and control element, an Air Force forces staff, and an Air Operations Center. Operating as a WFHQ since June 2004, Twelfth Air Force has served as the Air Force model for the future of Combined Air and Space Operations Centers and WFHQ Air Force forces.
U.S. Naval Forces Southern Command & U.S. Fourth Fleet
Located at Naval Station Mayport, Florida, USNAVSO exercises command and control over all U.S. naval operations in the USSOUTHCOM area including naval exercises, maritime operations, and port visits. USNAVSO is also the executive agent for the operation of the cooperative security location at Comalapa, El Salvador, which provides basing in support of aerial counter narco-terrorism operations.
On 24 April 2008, Admiral Gary Roughead, the Chief of Naval Operations, announced that the United States Fourth Fleet would be re-established, effective 1 July, responsible for U.S. Navy ships, aircraft and submarines operating in the Caribbean Sea, as well as Central and South America. Rear Admiral Joseph D. Kernan was named as the fleet commander and Commander, U.S. Naval Forces Southern Command. Up to four ships are deployed in the waters in and around Latin America at any given time.
U.S. Marine Corps Forces, South
Located in Doral, Florida, USMARFORSOUTH commands all United States Marine Corps Forces (MARFORs) assigned to USSOUTHCOM; advises USSOUTHCOM on the proper employment and support of MARFORs; conducts deployment/redeployment planning and execution of assigned/attached MARFORs; and accomplishes other operational missions as assigned.
Special Operations Command South
Located at Homestead Air Reserve Base near Miami, Florida, Special Operations Command South (SOCSOUTH) provides the primary theater contingency response force and plans, prepares for, and conducts special operations in support of USSOUTHCOM. USSOCSOUTH controls all Special Operations Forces in the region and also establishes and operates a Joint Special Operations Task Force when required. As a Theater Special Operations Command (TSOC), USSOCSOUTH is a sub-unified command of USSOUTHCOM.
SOCSOUTH has five assigned or attached subordinate commands including "Charlie" Company, 3rd Battalion, 7th Special Forces Group (Airborne) (7th SFG(A)); "Charlie" Company, 3rd Battalion, 160th Special Operations Aviation Regiment (Airborne); Naval Special Warfare Unit FOUR; 112th Signal Detachment SOCSOUTH; and Joint Special Operations Air Component-South.
There are also three task forces with specific missions in the region that report to U.S. Southern Command:
Joint Task Force Bravo
Located at Soto Cano Air Base, Honduras, Joint Task Force Bravo (JTF-Bravo) operates a forward, all-weather, day/night, C-5-capable airbase. JTF-Bravo organizes multilateral exercises and supports, in cooperation with partner nations, humanitarian and civic assistance, counterdrug, contingency and disaster relief operations in Central America.
Joint Task Force Guantanamo
Located at U.S. Naval Station Guantanamo Bay, Cuba, JTF-Guantanamo conducts detention and interrogation operations in support of the War on Terrorism, coordinates and implements detainee screening operations, and supports law enforcement and war crimes investigations as well as Military Commissions for Detained Enemy Combatants. JTF-Guantanamo is also prepared to support mass migration operations at Naval Station GTMO.
Joint Interagency Task Force South
Located in Key West, Florida, JIATF South is an interagency task force that serves as the catalyst for integrated and synchronized interagency counter-drug operations and is responsible for the detection and monitoring of suspect air and maritime drug activity in the Caribbean Sea, Gulf of Mexico, and the eastern Pacific. JIATF South also collects, processes, and disseminates counter-drug information for interagency operations. Manta Air Base in Ecuador was one of JIATF South's bases until 19 September 2009.
Humanitarian assistance and disaster relief
USSOUTHCOM's overseas humanitarian assistance and disaster relief programs build the capacity of host nations to respond to disasters and build their self-sufficiency while also empowering regional organizations.
These programs provide valuable training to U.S. military units in responding effectively to assist the victims of storms, earthquakes, and other natural disasters through the provision of medical, surgical, dental, and veterinary services, as well as civil construction projects.
The Humanitarian Assistance Program funds projects that enhance the capacity of host nations to respond when disasters strike and better prepare them to mitigate acts of terrorism. Humanitarian Assistance Program projects such as technical aid and the construction of disaster relief warehouses, emergency operation centers, shelters, and schools promote peace and stability, support the development of the civilian infrastructure necessary for economic and social reforms, and improve the living conditions of impoverished regions in the AOR.
Humanitarian assistance exercises such as Exercise Nuevos Horizontes (New Horizons) involve the construction of schools, clinics, and water wells in countries throughout the region. At the same time, medical readiness exercises involving teams consisting of doctors, nurses and dentists also provide general and specialized health services to host nation citizens requiring care. These humanitarian assistance exercises, which last several months each, provide much-needed services and infrastructure, while providing critical training for deployed U.S. military forces. These exercises generally take place in rural, underprivileged areas. USSOUTHCOM attempts to combine these efforts with those of host-nation doctors, either military or civilian, to make them even more beneficial.
In 2006, USSOUTHCOM sponsored 69 Medical Readiness Training Exercises in 15 nations, providing medical services to more than 270,000 citizens from the region. For 2007, USSOUTHCOM was scheduled to conduct 61 additional medical exercises in 14 partner nations.
USSOUTHCOM sponsors disaster preparedness exercises, seminars and conferences to improve the collective ability of the U.S. and its partner nations to respond effectively and expeditiously to disasters. USSOUTHCOM has also supported the construction or improvement of three Emergency Operations Centers, 13 Disaster Relief Warehouses and prepositioned relief supplies across the region. Construction of eight additional Emergency Operation Centers and seven additional warehouses is ongoing.
This type of multinational disaster preparedness has proven to increase the ability of USSOUTHCOM to work with America's partner nations. For example, following Hurricane Stan in Guatemala in 2005, USSOUTHCOM deployed 11 military helicopters and 125 personnel to assist with relief efforts. In conjunction with their Guatemalan counterparts, they evacuated 48 victims and delivered nearly 200 tons of food, medical supplies and communications equipment. Following Tropical Storm Gamma in Honduras, JTF-Bravo deployed nine helicopters and more than 40 personnel to assist with relief efforts. They airlifted more than 100,000 pounds of emergency food, water and medical supplies. USSOUTHCOM also deployed forces to Haiti following the 2010 Haiti earthquake to lead the humanitarian effort.
USSOUTHCOM also conducts counternarcotics and counternarcoterrorism programs.
History
The United States Southern Command (USSOUTHCOM) traces its origins to 1903 when the first U.S. Marines arrived in Panama to ensure U.S. control of the Panama Railroad connecting the Atlantic and Pacific Oceans across the narrow waist of the Panamanian Isthmus.
The Marines protected the Panamanian civilian uprising led by former Panama Canal Company general manager Philippe-Jean Bunau-Varilla, guaranteeing the creation of the Panamanian state. Following the signing of the Hay–Bunau-Varilla Treaty, which granted control of the Panama Canal Zone to the United States, the Marines remained to provide security during the early construction days of the Panama Canal.
In 1904, Army Colonel William C. Gorgas was sent to the Canal Zone (as it was then called) as Chief Sanitary Officer to fight yellow fever and malaria. In two years, yellow fever was eliminated from the Canal Zone. Soon after, malaria was also brought under control. With the appointment of Army Lieutenant Colonel George W. Goethals to the post of chief engineer of the Isthmian Canal Commission by then President Theodore Roosevelt in 1907, the construction changed from a civilian to a military project.
In 1911, the first troops of the U.S. Army's 10th Infantry Regiment arrived at Camp E. S. Otis, on the Pacific side of the Isthmus. They assumed primary responsibility for Canal defense. In 1914, the Marine Battalion left the Isthmus to participate in operations against Pancho Villa in Mexico. On 14 August 1914, seven years after Goethals' arrival, the Panama Canal opened to world commerce.
The first company of coast artillery troops arrived in 1914 and later established fortifications at each end (Atlantic and Pacific) of the Canal as the Harbor Defenses (HD) of Cristobal and HD Balboa, respectively, with mobile forces of infantry and light artillery centrally located to support either end. By 1915, a consolidated command was designated as Headquarters, U.S. Troops, Panama Canal Zone. The command reported directly to the Army's Eastern Department headquartered at Fort Jay, Governors Island, New York. The headquarters of this newly created command was first located in the Isthmian Canal Commission building in the town of Ancon, adjacent to Panama City. It relocated in 1916 to the nearby newly designated military post of Quarry Heights, which had begun construction in 1911.
On 1 July 1917, almost three months after the American entry into World War I, the Panama Canal Department was activated as a geographic command of the U.S. Army. It remained as the senior Army headquarters in the region until activation of the Caribbean Defense Command (CDC) on 10 February 1941. The CDC, co-located at Quarry Heights, was commanded by Lieutenant General Daniel Van Voorhis, who continued to command the Panama Canal Department.
The new command eventually assumed operational responsibility over air and naval forces assigned in its area of operations during World War II, which included all U.S. forces and bases in the Caribbean basin outside the continental United States. By early 1942, a Joint Operations Center had been established at Quarry Heights. Meanwhile, 960 jungle-trained officers and enlisted men from the CDC deployed to New Caledonia in the southwest Pacific to help form the 5307th Composite Unit (Provisional), codenamed 'Galahad' and later nicknamed Merrill's Marauders for its famous exploits in Burma. In the meantime, military strength in the area was gradually rising and reached its peak in January 1943, when 68,000 personnel were defending the Panama Canal. Military strength was sharply reduced with the termination of World War II. Between 1946 and 1974, total military strength in Panama fluctuated between 6,600 and 20,300 (with the lowest force strength in 1959).
In December 1946, President Harry S. Truman approved recommendations of the Joint Chiefs of Staff for a comprehensive system of military commands to put responsibility for conducting military operations of all military forces in various geographical areas, in the hands of a single commander. Although the Caribbean Command was designated by the Defense Department on 1 November 1947, it did not become fully operational until 10 March 1948, when the old Caribbean Defense Command was inactivated.
On 6 June 1963, reflecting the fact that the command had responsibility for U.S. military operations primarily in Central and South America rather than in the Caribbean, President John F. Kennedy and Secretary of Defense Robert McNamara formally redesignated it as the United States Southern Command. The command's mission began to shift with the expansion of the Cold War to Latin America. Kennedy and his successor Lyndon B. Johnson expanded the command in the aftermath of the Cuban Missile Crisis and reoriented it toward irregular warfare aimed at preventing the establishment of another Communist state in the Western Hemisphere. From 1975 until late 1994, total military strength in Panama remained at about 10,000 personnel.
In January 1996 and June 1997, two phases of changes to the Department of Defense Unified Command Plan (UCP) were completed. Each phase of the UCP change added territory to SOUTHCOM's area of responsibility. The impact of the changes was significant. The new AOR includes the Caribbean, its 13 island nations and several U.S. and European territories, the Gulf of Mexico, as well as significant portions of the Atlantic and Pacific Oceans. The 1999 update to the UCP also transferred responsibility for an additional portion of the Atlantic Ocean to SOUTHCOM. On 1 October 2000, Southern Command assumed responsibility for the adjacent waters in the upper quadrant above Brazil, which had previously been under the responsibility of U.S. Joint Forces Command.
The new AOR encompasses 32 nations (19 in Central and South America and 13 in the Caribbean), of which 31 are democracies, and 14 U.S. and European territories.
With the creation of the United States Department of Homeland Security in October 2002, the USSOUTHCOM Area of Responsibility underwent a minor redistribution of its upper boundary, decreasing its total area by 1.1 square miles to 14.5 million square miles (23.2 million square kilometers).
With the implementation of the Panama Canal Treaties (the Panama Canal Treaty of 1977 and the Treaty concerning the Permanent Neutrality and Operations of the Panama Canal), the U.S. Southern Command was relocated to Miami, Florida, on 26 September 1997.
A new headquarters building was constructed and opened in 2010 adjacent to the old rented building in the Doral area of Miami-Dade County. The complex features state-of-the-art planning and conference facilities. This capability is showcased in the 45,000-square-foot Conference Center of the Americas, which can support meetings of differing classification levels and multiple translations, information sources and video conferencing.
In 2012, as many as a dozen SouthCom service members, together with a number of Secret Service officers, were disciplined after they were found to have brought prostitutes to their rooms shortly before President Obama arrived for a summit in Cartagena, Colombia. According to the Associated Press seven Army soldiers and two Marines received administrative punishments for what an official report cited by the wire service said was misconduct consisting "almost exclusively of patronizing prostitutes and adultery." Hiring prostitutes, the report added, "is a violation of the U.S. military code of justice."
In 2014, SOUTHCOM commander John F. Kelly testified that, although he considered the border security threat 'existential' to the country, budget sequestration in 2013 had left his forces unable to respond to 75% of illicit trafficking events.
USSOUTHCOM's 2017-2027 Theater Strategy states that potential challenges in the future include transregional and transnational threat networks (T3Ns) which include traditional criminal organizations, as well as the expanding potential of extremist organizations such as ISIL and Hezbollah operating in the region by taking advantage of weak Caribbean and Latin American institutions. USSOUTHCOM also notes that the region is "extremely vulnerable to natural disasters and the outbreak of infectious diseases" due to issues with governance and inequality. Finally, the report recognizes the growing presence of China, Iran and Russia in the region, and that the intentions of these nations bring "a challenge to every nation that values nonaggression, rule of law, and respect for human rights". These challenges have been used to promote relationships between the United States and other governments in the region.
State Partnership Program
US SOUTHCOM currently has 22 state partnerships under the State Partnership Program (SPP). The SPP creates a partnership between a U.S. state and a foreign nation by linking the host nation's military or security forces with that state's National Guard. SOUTHCOM is equaled only by EUCOM in its number of partnerships.
Commanders
The U.S. Southern Command was activated in 1963, emerging from the U.S. Caribbean Command, established in 1947. The last commander of the U.S. Caribbean Command (January 1961 to June 1963) and the first commander of the U.S. Southern Command (from June 1963) was Lieutenant General, later General, Andrew P. O'Meara.
See also
Caribbean Regional Maritime Agreement
Manta Air Base
Operation Coronet Nighthawk
Operation Enduring Freedom - Caribbean and Central America
Partnership for Prosperity and Security in the Caribbean
Western Hemisphere Institute for Security Cooperation (formerly School of the Americas)
References
Further reading
Vasquez, Cesar A. "A History of the United States Caribbean Defense Command (1941-1947)." Florida International University, doctoral thesis (2016).
External links
Latin, Caribbean allies hail new U.S. Southern Command chief by John Yearwood, Miami Herald, 26 June 2009
Southern Command
Military units and formations in Florida
United States–Caribbean relations
United States–Central American relations
United States–South American relations
Military in the Caribbean
Military in Central America
Military in South America
1963 establishments in the United States
Military units and formations established in 1963 |
78969 | https://en.wikipedia.org/wiki/Pixar | Pixar | Pixar Animation Studios, commonly known as just Pixar, is an American computer animation studio known for its critically and commercially successful computer-animated feature films. It is based in Emeryville, California, and is a subsidiary of Walt Disney Studios, which is owned by The Walt Disney Company.
Pixar began in 1979 as part of the Lucasfilm computer division, known as the Graphics Group, before its spin-off as a corporation in 1986, with funding from Apple co-founder Steve Jobs, who became its majority shareholder. Disney purchased Pixar in 2006 at a valuation of approximately $7.4 billion by converting each share of Pixar stock to 2.3 shares of Disney stock. Pixar is best known for its feature films, technologically powered by RenderMan, the company's own implementation of the industry-standard RenderMan Interface Specification image-rendering application programming interface. Luxo Jr., a desk lamp from the studio's 1986 short film of the same name, is the studio's mascot.
Pixar has produced 24 feature films, beginning with Toy Story (1995), which is the first fully computer-animated feature film; its most recent film was Luca (2021). The studio has also produced many short films. Its feature films have earned approximately $14 billion at the worldwide box office, with an average worldwide gross of $680 million per film. Toy Story 3 (2010), Finding Dory (2016), Incredibles 2 (2018), and Toy Story 4 (2019) are all among the 50 highest-grossing films of all time, with Incredibles 2 being the fourth highest-grossing animated film of all time, with a gross of $1.2 billion; the other three also grossed over $1 billion. Moreover, 15 of Pixar's films are in the 50 highest-grossing animated films of all time.
The studio has earned 23 Academy Awards, 10 Golden Globe Awards, and 11 Grammy Awards, along with numerous other awards and acknowledgments. Many of Pixar's films have been nominated for the Academy Award for Best Animated Feature, since its inauguration in 2001, with eleven winners being Finding Nemo (2003), The Incredibles (2004), Ratatouille (2007), WALL-E (2008), Up (2009), Toy Story 3 (2010), Brave (2012), Inside Out (2015), Coco (2017), Toy Story 4 (2019), and Soul (2020); the four nominated without winning are Monsters, Inc. (2001), Cars (2006), Incredibles 2 (2018), and Onward (2020). Up and Toy Story 3 were also nominated for the more competitive and inclusive Academy Award for Best Picture.
On February 10, 2009, Pixar executives John Lasseter, Brad Bird, Pete Docter, Andrew Stanton, and Lee Unkrich were presented with the Golden Lion award for Lifetime Achievement by the Venice Film Festival. The physical award was ceremoniously handed to Lucasfilm's founder, George Lucas.
History
Early history
Pixar got its start in 1974 when New York Institute of Technology's (NYIT) founder, Alexander Schure, who was also the owner of a traditional animation studio, established the Computer Graphics Lab (CGL) and recruited computer scientists who shared his ambition of creating the world's first computer-animated film. Edwin Catmull and Malcolm Blanchard were the first to be hired and were joined some months later by Alvy Ray Smith and David DiFrancesco; these four were the original members of the Computer Graphics Lab, located in a converted two-story garage acquired from the former Vanderbilt-Whitney estate. Schure kept pouring money into the lab, an estimated $15 million, giving the group everything it desired but driving NYIT into serious financial trouble. Eventually, the group realized it needed to work in a real film studio in order to reach its goal. Francis Ford Coppola then invited Smith to his house for a three-day media conference, where Coppola and George Lucas shared their visions for the future of digital moviemaking.
When Lucas approached them and offered them a job at his studio, six employees moved to Lucasfilm. During the following months, they gradually resigned from CGL, found temporary jobs for about a year to avoid making Schure suspicious, and joined the Graphics Group at Lucasfilm.
The Graphics Group, which was one-third of the Computer Division of Lucasfilm, was launched in 1979 with the hiring of Catmull from NYIT, where he was in charge of the Computer Graphics Lab. He was then reunited with Smith, who also made the journey from NYIT to Lucasfilm, and was made the director of the Graphics Group. At NYIT, the researchers pioneered many of the CG foundation techniques—in particular, the invention of the alpha channel by Catmull and Smith. Over the next several years, the CGL would produce a few frames of an experimental film called The Works. After moving to Lucasfilm, the team worked on creating the precursor to RenderMan, called REYES (for "renders everything you ever saw") and developed several critical technologies for CG—including particle effects and various animation tools.
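For readers unfamiliar with the technique, the alpha channel mentioned above stores a per-pixel opacity/coverage value alongside color, which is what makes compositing separately rendered elements into one image possible. The sketch below is an illustration only, not Pixar's code: it implements the standard "over" operator on premultiplied RGBA pixels, with the sample pixel values chosen arbitrarily.

```python
# Illustrative sketch of alpha compositing: the "over" operator applied to
# premultiplied (r, g, b, a) pixels, with every component in the range [0, 1].

def over(fg, bg):
    """Composite a foreground pixel over a background pixel."""
    fg_alpha = fg[3]
    # With premultiplied color, every channel (including alpha) follows
    # the same rule: out = fg + bg * (1 - fg_alpha).
    return tuple(f + b * (1.0 - fg_alpha) for f, b in zip(fg, bg))

# A half-transparent red pixel layered over an opaque blue one.
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))  # -> (0.5, 0.0, 0.5, 1.0)
```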
John Lasseter was hired to the Lucasfilm team for a week in late 1983 with the title "interface designer"; he animated the short film The Adventures of André & Wally B. In the next few years, a designer suggested naming a new digital compositing computer the "Picture Maker". Smith suggested that the laser-based device have a catchier name, and came up with "Pixer", which after a meeting was changed to "Pixar".
In 1982, the Pixar team began working on special-effects film sequences with Industrial Light & Magic. After years of research, and key milestones such as the Genesis Effect in Star Trek II: The Wrath of Khan and the Stained Glass Knight in Young Sherlock Holmes, the group, which then numbered 40 individuals, was spun out as a corporation in February 1986 by Catmull and Smith. Among the 38 remaining employees, there were also Malcolm Blanchard, David DiFrancesco, Ralph Guggenheim, and Bill Reeves, who had been part of the team since the days of NYIT. Tom Duff, also an NYIT member, would later join Pixar after its formation. With Lucas's 1983 divorce, which coincided with the sudden dropoff in revenues from Star Wars licenses following the release of Return of the Jedi, they knew he would most likely sell the whole Graphics Group. Worried that the employees would be lost to them if that happened, which would prevent the creation of the first computer-animated movie, they concluded that the best way to keep the team together was to turn the group into an independent company. But Moore's Law also suggested that sufficient computing power for the first film was still some years away, and they needed to focus on a proper product until then. Eventually, they decided they should be a hardware company in the meantime, with their Pixar Image Computer as the core product, a system primarily sold to governmental, scientific, and medical markets. They also used SGI computers.
In 1983, Nolan Bushnell founded a new computer-guided animation studio called Kadabrascope as a subsidiary of his Chuck E. Cheese's Pizza Time Theatres company (PTT), which had been founded in 1977. Only one major project came out of the new studio, an animated Christmas special for NBC starring Chuck E. Cheese and other PTT mascots, known as "Chuck E. Cheese: The Christmas That Almost Wasn't". The animation was made using tweening instead of traditional cel animation. After the video game crash of 1983, Bushnell started selling some subsidiaries of PTT to keep the business afloat. Sente Technologies (another division, founded to distribute games in PTT stores) was sold to Bally Games, and Kadabrascope was sold to Lucasfilm. The Kadabrascope assets were combined with the Computer Division of Lucasfilm. Coincidentally, one of Steve Jobs's first jobs was under Bushnell in 1973 as a technician at his other company, Atari, which Bushnell sold to Warner Communications in 1976 to focus on PTT. PTT later went bankrupt in 1984 and was acquired by ShowBiz Pizza Place.
Independent company (1986–1999)
In 1986, the newly independent Pixar was headed by President Edwin Catmull and Executive Vice President Alvy Ray Smith. Lucas's search for investors led to an offer from Steve Jobs, which Lucas initially found too low. He eventually accepted after determining it impossible to find other investors; by that point, Smith and Catmull had been turned down 45 times, by 35 venture capitalists and ten large corporations. Jobs, who had been edged out of Apple in 1985, was now founder and CEO of the new computer company NeXT. On February 3, 1986, he paid $5 million of his own money to George Lucas for technology rights and invested $5 million cash as capital into the company, joining the board of directors as chairman.
In 1985, while still at Lucasfilm, they had made a deal with the Japanese publisher Shogakukan to make a computer-animated movie called Monkey, based on the Monkey King. The project continued sometime after they became a separate company in 1986, but it became clear that the technology was not sufficiently advanced. The computers were not powerful enough and the budget would be too high. So they focused on the computer hardware business for years until a computer-animated feature became feasible according to Moore's law.
At the time, Walt Disney Studios was interested and eventually bought and used the Pixar Image Computer and custom software written by Pixar as part of its Computer Animation Production System (CAPS) project, to migrate the laborious ink and paint part of the 2D animation process to a more automated method. The company's first feature film to be released using this new animation method was The Rescuers Down Under (1990).
In a bid to drive sales of the system and increase the company's capital, Jobs suggested releasing the product to the mainstream market. Pixar employee John Lasseter, who had long been working on not-for-profit short demonstration animations, such as Luxo Jr. (1986) to show off the device's capabilities, premiered his creations to great fanfare at SIGGRAPH, the computer graphics industry's largest convention.
However, the Image Computer had inadequate sales, which threatened to end the company as financial losses grew. Jobs increased investment in exchange for an increased stake, reducing the proportion of management and employee ownership until eventually his total investment of $50 million gave him control of the entire company. In 1989, Lasseter's growing animation department, originally composed of just four people (Lasseter, Bill Reeves, Eben Ostby, and Sam Leffler), was turned into a division that produced computer-animated commercials for outside companies. In April 1990, Pixar sold its hardware division, including all proprietary hardware technology and imaging software, to Vicom Systems, and transferred 18 of Pixar's approximately 100 employees. That year, Pixar moved from San Rafael to Richmond, California. Pixar released some of its software tools on the open market for Macintosh and Windows systems. RenderMan was one of the leading 3D packages of the early 1990s, and Typestry was a special-purpose 3D text renderer that competed with RayDream.
During this period, Pixar continued its successful relationship with Walt Disney Animation Studios, a studio whose corporate parent would ultimately become its most important partner. As 1991 began, however, the layoff of 30 employees in the company's computer hardware department, including the company's president, Chuck Kolstad, reduced the total number of employees to just 42, approximately its original number. Pixar made a historic $26 million deal with Disney to produce three computer-animated feature films, the first of which was Toy Story, a project shaped by the technological limitations of computer animation at the time. By then the software programmers, who were working on RenderMan and IceMan, and Lasseter's animation department, which made television commercials (and four Luxo Jr. shorts for Sesame Street the same year), were all that remained of Pixar.
Even with income from these projects, the company continued to lose money, and Steve Jobs, as chairman of the board and now the full owner, often considered selling it. Even as late as 1994, Jobs contemplated selling Pixar to other companies, among them Hallmark Cards, Microsoft co-founder Paul Allen, and Oracle CEO and co-founder Larry Ellison. Only after learning from New York critics that Toy Story would probably be a hit—and confirming that Disney would distribute it for the 1995 Christmas season—did he decide to give Pixar another chance. For the first time, he also took an active leadership role in the company and made himself CEO. Toy Story grossed more than $373 million worldwide and, when Pixar held its initial public offering on November 29, 1995, it exceeded Netscape's as the biggest IPO of the year. In its first half-hour of trading, Pixar stock shot from $22 to $45, delaying trading because of unmatched buy orders. Shares closed the day at $39.
During the 1990s and 2000s, Pixar gradually developed the "Pixar Braintrust", the studio's primary creative development process, in which all of its directors, writers, and lead storyboard artists regularly examine each other's projects and give very candid "notes", the industry term for constructive criticism. The Braintrust operates under a philosophy of a "filmmaker-driven studio", in which creatives help each other move their films forward through a process somewhat like peer review, as opposed to the traditional Hollywood approach of an "executive-driven studio" in which directors are micromanaged through "mandatory notes" from development executives outranking the producers. According to Catmull, it evolved out of the working relationship between Lasseter, Stanton, Docter, Unkrich, and Joe Ranft on Toy Story.
As a result of the success of Toy Story, Pixar built a new studio at the Emeryville campus which was designed by PWP Landscape Architecture and opened in November 2000.
Collaboration with Disney (1999–2006)
Pixar and Disney had disagreements over the production of Toy Story 2. Originally intended as a straight-to-video release (and thus not part of Pixar's three-picture deal), the film was eventually upgraded to a theatrical release during production. Pixar demanded that the film then be counted toward the three-picture agreement, but Disney refused. Though profitable for both, Pixar later complained that the arrangement was not equitable. Pixar was responsible for creation and production, while Disney handled marketing and distribution. Profits and production costs were split equally, but Disney exclusively owned all story, character, and sequel rights and also collected a 10- to 15-percent distribution fee. The lack of these rights was perhaps the most onerous aspect for Pixar and precipitated a contentious relationship.
The two companies attempted to reach a new agreement for ten months, with talks failing on January 26, 2001; July 26, 2002; April 22, 2003; January 16, 2004; July 22, 2004; and January 14, 2005. The new deal would be only for distribution, as Pixar intended to control production and own the resulting story, character, and sequel rights, while Disney would own the right of first refusal to distribute any sequels. Pixar also wanted to finance its own films and collect 100 percent of the profits, paying Disney only the 10- to 15-percent distribution fee. More importantly, as part of any distribution agreement with Disney, Pixar demanded control over films already in production under the old agreement, including The Incredibles (2004) and Cars (2006). Disney considered these conditions unacceptable, but Pixar would not concede.
Disagreements between Steve Jobs and Disney chairman and CEO Michael Eisner made the negotiations more difficult than they otherwise might have been. They broke down completely in mid-2004, with Disney forming Circle Seven Animation and Jobs declaring that Pixar was actively seeking partners other than Disney. Even with this announcement and several talks with Warner Bros., Sony Pictures, and 20th Century Fox, Pixar did not enter negotiations with other distributors, although a Warner Bros. spokesperson told CNN, "We would love to be in business with Pixar. They are a great company." After a lengthy hiatus, negotiations between the two companies resumed following the departure of Eisner from Disney in September 2005. In preparation for potential fallout between Pixar and Disney, Jobs announced in late 2004 that Pixar would no longer release movies at the Disney-dictated November time frame, but during the more lucrative early summer months. This would also allow Pixar to release DVDs for its major releases during the Christmas shopping season. An added benefit of delaying Cars from November 4, 2005, to June 9, 2006, was to extend the time frame remaining on the Pixar-Disney contract, to see how things would play out between the two companies.
Pending the Disney acquisition of Pixar, the two companies created a distribution deal for the intended 2007 release of Ratatouille, to ensure that if the acquisition failed, this one film would be released through Disney's distribution channels. In contrast to the earlier Pixar deal, Ratatouille was meant to remain a Pixar property and Disney would have received only a distribution fee. The completion of Disney's Pixar acquisition, however, nullified this distribution arrangement.
Disney subsidiary (2006–present)
In January 2006, Disney ultimately agreed to buy Pixar for approximately $7.4 billion in an all-stock deal. Following Pixar shareholder approval, the acquisition was completed May 5, 2006. The transaction catapulted Jobs, who owned 49.65% of total share interest in Pixar, to Disney's largest individual shareholder with 7%, valued at $3.9 billion, and a new seat on its board of directors. Jobs's new Disney holdings exceeded holdings belonging to ex-CEO Michael Eisner, the previous top shareholder, who still held 1.7%; and Disney Director Emeritus Roy E. Disney, who held almost 1% of the corporation's shares. Pixar shareholders received 2.3 shares of Disney common stock for each share of Pixar common stock redeemed.
As part of the deal, John Lasseter, by then Executive Vice President, became Chief Creative Officer (reporting directly to President and CEO Robert Iger and consulting with Disney Director Roy E. Disney) of both Pixar and Walt Disney Animation Studios (including its division DisneyToon Studios), as well as the Principal Creative Adviser at Walt Disney Imagineering, which designs and builds the company's theme parks. Catmull retained his position as President of Pixar, while also becoming President of Walt Disney Animation Studios, reporting to Iger and Dick Cook, chairman of the Walt Disney Studios. Jobs's position as Pixar's chairman and chief executive officer was abolished, and instead, he took a place on the Disney board of directors.
After the deal closed in May 2006, Lasseter revealed that Iger had realized Disney needed to buy Pixar while watching a parade at the opening of Hong Kong Disneyland in September 2005. Iger noticed that of all the Disney characters in the parade, not one was a character that Disney had created within the last ten years since all the newer ones had been created by Pixar. Upon returning to Burbank, Iger commissioned a financial analysis that confirmed that Disney had actually lost money on animation for the past decade, then presented that information to the board of directors at his first board meeting after being promoted from COO to CEO, and the board, in turn, authorized him to explore the possibility of a deal with Pixar. Lasseter and Catmull were wary when the topic of Disney buying Pixar first came up, but Jobs asked them to give Iger a chance (based on his own experience negotiating with Iger in summer 2005 for the rights to ABC shows for the fifth-generation iPod Classic), and in turn, Iger convinced them of the sincerity of his epiphany that Disney really needed to re-focus on animation.
Lasseter and Catmull's oversight of both the Disney Feature Animation and Pixar studios did not mean that the two studios were merging, however. In fact, additional conditions were laid out as part of the deal to ensure that Pixar remained a separate entity, a concern that analysts had expressed about the Disney deal. Some of those conditions were that Pixar HR policies would remain intact, including the lack of employment contracts. Also, the Pixar name was guaranteed to continue, and the studio would remain in its current Emeryville, California, location with the "Pixar" sign. Finally, branding of films made post-merger would be "Disney•Pixar" (beginning with Cars).
Jim Morris, producer of WALL-E (2008), became general manager of Pixar. In this new position, Morris took charge of the day-to-day running of the studio facilities and products.
After a few years, Lasseter and Catmull were able to successfully transfer the basic principles of the Pixar Braintrust to Disney Animation, although meetings of the Disney Story Trust are reportedly "more polite" than those of the Pixar Braintrust. Catmull later explained that after the merger, to maintain the studios' separate identities and cultures (notwithstanding the fact of common ownership and common senior management), he and Lasseter "drew a hard line" that each studio was solely responsible for its own projects and would not be allowed to borrow personnel from or lend tasks out to the other. That rule ensures that each studio maintains "local ownership" of projects and can be proud of its own work. Thus, for example, when Pixar had issues with Ratatouille and Disney Animation had issues with Bolt (2008), "nobody bailed them out" and each studio was required "to solve the problem on its own" even when they knew there were personnel at the other studio who theoretically could have helped.
In November 2014, Morris was promoted to president of Pixar, while his counterpart at Disney Animation, general manager Andrew Millstein, was also promoted to president of that studio. Both continued to report to Catmull, who retained the title of president of both Disney Animation and Pixar.
On November 21, 2017, Lasseter announced that he was taking a six-month leave of absence after acknowledging what he called "missteps" in his behavior with employees in a memo to staff. According to The Hollywood Reporter and The Washington Post, Lasseter had a history of alleged sexual misconduct towards employees. On June 8, 2018, it was announced that Lasseter would leave Disney Animation and Pixar at the end of the year, but would take on a consulting role until then. Pete Docter was announced as Lasseter's replacement as chief creative officer of Pixar on June 19, 2018.
On October 23, 2018, it was announced that Catmull would be retiring. He stayed in an adviser role until July 2019. On January 18, 2019, it was announced that Lee Unkrich would be leaving Pixar after 25 years.
Expansion
On April 20, 2010, Pixar opened Pixar Canada in downtown Vancouver, British Columbia, Canada. The roughly 2,000-square-meter studio produced seven short films based on Toy Story and Cars characters. In October 2013, the studio was closed down to refocus Pixar's efforts at its main headquarters.
Campus
When Steve Jobs, chief executive officer of Apple Inc. and Pixar, and John Lasseter, then-executive vice president of Pixar, decided to move their studios from a leased space in Point Richmond, California, to larger quarters of their own, they chose a 20-acre site in Emeryville, California, formerly occupied by Del Monte Foods, Inc. The first of several buildings, the high-tech structure designed by Bohlin Cywinski Jackson has special foundations and electricity generators to ensure continued film production, even through major earthquakes. The character of the building is intended to abstractly recall Emeryville's industrial past. The two-story steel-and-masonry building is a collaborative space with many pathways.
The digital revolution in filmmaking was driven by applied mathematics, including computational physics and geometry. In 2008, this led Pixar senior scientist Tony DeRose to offer to host the second Julia Robinson Mathematics Festival at the Emeryville campus.
Feature films and shorts
Traditions
Some of Pixar's first animators were former cel animators, including John Lasseter; others came from computer animation or were fresh college graduates. A large number of the animators who make up its animation department were hired around the releases of A Bug's Life (1998), Monsters, Inc. (2001), and Finding Nemo (2003). The success of Toy Story (1995) made Pixar the first major computer-animation studio to successfully produce theatrical feature films. The majority of the animation industry was (and still is) located in Los Angeles, while Pixar is located north in the San Francisco Bay Area. Traditional hand-drawn animation was still the dominant medium for feature animated films at the time.
With the scarcity of Los Angeles-based animators willing to move their families so far north to give up traditional animation and try computer animation, Pixar's new hires at this time either came directly from college or had worked outside feature animation. For those who had traditional animation skills, the Pixar animation software Marionette was designed so that traditional animators would require a minimum amount of training before becoming productive.
In an interview with PBS talk show host Tavis Smiley, Lasseter said that Pixar's films follow the same theme of self-improvement as the company itself has: with the help of friends or family, a character ventures out into the real world and learns to appreciate his friends and family. At the core, Lasseter said, "it's gotta be about the growth of the main character and how he changes."
Actor John Ratzenberger, who had previously starred in the television series Cheers, has voiced a character in every Pixar feature film from Toy Story through Onward. He does not have a role in either Soul or Luca; however, a non-speaking background character in the former film bears his likeness. Pixar paid tribute to Ratzenberger in the end credits of Cars (2006) by parodying scenes from three of its earlier films (Toy Story, Monsters, Inc., and A Bug's Life), replacing all of the characters with motor vehicle versions of them and giving each film an automotive-based title. After the third scene, Mack (his character in Cars) realizes that the same actor has been voicing characters in every film.
Due to recurring elements within the films and shorts, such as anthropomorphic creatures and objects, and easter egg crossovers between films and shorts that have been spotted by Pixar fans, a blog post titled The Pixar Theory was published in 2013 by Jon Negroni, and popularized by the YouTube channel Super Carlin Brothers, proposing that all of the characters within the Pixar universe are related, with the theory centering on Boo from Monsters, Inc. and the Witch from Brave (2012).
Sequels and prequels
Toy Story 2 was originally commissioned by Disney as a 60-minute direct-to-video film. Expressing doubts about the strength of the material, John Lasseter convinced the Pixar team to start from scratch and make the sequel their third full-length feature film.
Following the release of Toy Story 2 in 1999, Pixar and Disney had a gentlemen's agreement that Disney would not make any sequels without Pixar's involvement though retaining a right to do so. After the two companies were unable to agree on a new deal, Disney announced in 2004 they would plan to move forward on sequels with or without Pixar and put Toy Story 3 into pre-production at Disney's then-new CGI division Circle Seven Animation. However, when Lasseter was placed in charge of all Disney and Pixar animation following Disney's acquisition of Pixar in 2006, he put all sequels on hold and Toy Story 3 was canceled. In May 2006, it was announced that Toy Story 3 was back in pre-production with a new plot and under Pixar's control. The film was released on June 18, 2010, as Pixar's eleventh feature film.
Shortly after announcing the resurrection of Toy Story 3, Lasseter fueled speculation on further sequels by saying, "If we have a great story, we'll do a sequel." Cars 2, Pixar's first non-Toy Story sequel, was officially announced in April 2008 and released on June 24, 2011 as their twelfth. Monsters University, a prequel to Monsters, Inc. (2001), was announced in April 2010 and initially set for release in November 2012; the release date was pushed to June 21, 2013 due to Pixar's past success with summer releases, according to a Disney executive.
In June 2011, Tom Hanks, who voiced Woody in the Toy Story series, implied that Toy Story 4 was "in the works", although it had not yet been confirmed by the studio. In April 2013, Finding Dory, a sequel to Finding Nemo, was announced for a June 17, 2016 release. In March 2014, Incredibles 2 and Cars 3 were announced as films in development. In November 2014, Toy Story 4 was confirmed to be in development with Lasseter serving as director. However, in July 2017, Lasseter announced that he had stepped down, leaving Josh Cooley as sole director. Released in June 2019, Toy Story 4 ranks among the 40 top-grossing films in American cinema.
Adaptation to television
Toy Story was the first Pixar film to be adapted for television, as the Buzz Lightyear of Star Command film and TV series on the UPN television network, now The CW. Cars became the second with the help of Cars Toons, a series of 3-to-5-minute short films running between regular Disney Channel show intervals and featuring Mater from Cars. Between 2013 and 2014, Pixar released its first two television specials, Toy Story of Terror! and Toy Story That Time Forgot. Monsters at Work, a television series spin-off of Monsters, Inc., premiered in July 2021 on Disney+.
On December 10, 2020, it was announced that three series would be released on Disney+. The first is Dug Days (featuring Dug from Up), in which Dug explores suburbia; it premiered on September 1, 2021. Next, a Cars show titled Cars on the Road was announced for Disney+ in fall 2022, following Mater and Lightning McQueen as they go on a road trip. Lastly, an original show entitled Win or Lose is to be released on Disney+ in fall 2023. The series will follow a middle school softball team in the week leading up to the big championship game, with each episode told from a different character's perspective.
2D animation and live-action
All Pixar films and shorts to date have been computer-animated, but WALL-E (2008) has so far been the only Pixar film not to be completely animated, as it featured a small amount of live-action footage, including clips from Hello, Dolly!, while Day & Night (2010), Kitbull (2019), Burrow (2020), and Twenty Something (2021) are the only four shorts to feature 2D animation. 1906, the live-action film by Brad Bird based on a screenplay and novel by James Dalessandro about the 1906 earthquake, was in development but has since been abandoned by Bird and Pixar. Bird has stated that he was "interested in moving into the live-action realm with some projects" while "staying at Pixar [because] it's a very comfortable environment for me to work in". In June 2018, Bird mentioned the possibility of adapting the novel as a TV series, and the earthquake sequence as a live-action feature film.
The Toy Story Toons short Hawaiian Vacation (2011) also includes the fish and shark as live-action.
Jim Morris, president of Pixar, produced Disney's John Carter (2012) which Andrew Stanton co-wrote and directed.
Pixar's creative heads were consulted to fine-tune the script for the 2011 live-action film The Muppets. Similarly, Pixar assisted in the story development of Disney's The Jungle Book (2016), as well as providing suggestions for the film's end credits sequence. Both Pixar and Mark Andrews were given a "Special Thanks" credit in the film's credits. Additionally, many Pixar animators, both former and current, were recruited for a traditional hand-drawn animated sequence in the 2018 film Mary Poppins Returns.
Pixar representatives have also assisted in the English localization of several Studio Ghibli films, mainly those from Hayao Miyazaki.
In 2019, Pixar developed a live-action hidden camera reality show, titled Pixar in Real Life, for Disney+.
Upcoming films
Five upcoming films have been announced. The first, titled Turning Red, written and directed by Domee Shi, will be released on March 11, 2022, followed by Lightyear, directed by Angus MacLane, on June 17, 2022, and three untitled films on June 16, 2023, March 1, 2024, and June 14, 2024.
Co-op Program
The Pixar Co-op Program, a part of the Pixar University professional development program, allows their animators to use Pixar resources to produce independent films. The first 3D project accepted to the program was Borrowed Time (2016); all previously accepted films were live-action.
Franchises
This list does not include associated productions from other Pixar media.
Exhibitions
Since December 2005, Pixar has held a variety of exhibitions celebrating the art and artists of the organization and its contribution to the world of animation.
Pixar: 20 Years of Animation
Upon its 20th anniversary in 2006, Pixar celebrated with the release of its seventh feature film, Cars, and later held two exhibitions from April to June 2010, at the Science Centre Singapore in Jurong East, Singapore, and the London Science Museum in London. It was the studio's first time holding an exhibition in Singapore.
The exhibition highlights consist of work-in-progress sketches from various Pixar productions, clay sculptures of their characters, and an autostereoscopic short showcasing a 3D version of the exhibition pieces, projected through four projectors. Another highlight is a zoetrope, in which visitors see figurines of Toy Story characters "animated" in real life.
Pixar: 25 Years of Animation
Pixar celebrated its 25th anniversary in 2011 with the release of its twelfth feature film, Cars 2, and held an exhibition at the Oakland Museum of California from July 2010 until January 2011. The exhibition tour debuted in Hong Kong and was held at the Hong Kong Heritage Museum in Sha Tin from March 27 to July 11, 2011. For six months, from July 6, 2012 until January 6, 2013, the city of Bonn, Germany, hosted the public showing, and in 2013 the exhibition was held at the EXPO in Amsterdam, The Netherlands.
On November 16, 2013, the exhibition moved to the Art Ludique museum in Paris, France, with a scheduled run until March 2, 2014. The exhibition moved to three Spanish cities later in 2014 and 2015: Madrid (held in CaixaForum from March 21 until June 22), Barcelona (also held in CaixaForum from February until May), and Zaragoza.
Pixar: 25 Years of Animation includes all of the artwork from Pixar: 20 Years of Animation, plus art from Ratatouille, WALL-E, Up and Toy Story 3.
The Science Behind Pixar
The Science Behind Pixar is a travelling exhibition that first opened on June 28, 2015, at the Museum of Science in Boston, Massachusetts. It was developed by the Museum of Science in collaboration with Pixar. The exhibit features forty interactive elements that explain the production pipeline at Pixar. They are divided into eight sections, each demonstrating a step in the filmmaking process: Modeling, Rigging, Surfaces, Sets & Cameras, Animation, Simulation, Lighting, and Rendering. Before visitors enter the exhibit, they watch a short video at an introductory theater showing Mr. Ray from Finding Nemo and Roz from Monsters, Inc.
The exhibition closed on January 10, 2016 and was moved to the Franklin Institute in Philadelphia, Pennsylvania where it ran from March 12 to September 5. Afterwards, it moved to the California Science Center in Los Angeles, California and was open from October 15, 2016 to April 9, 2017. It made another stop at the Science Museum of Minnesota in St. Paul, Minnesota from May 27 through September 4, 2017.
The exhibition opened in Canada on July 1, 2017, at the TELUS World of Science – Edmonton (TWOSE).
Pixar: The Design of Story
Pixar: The Design of Story was an exhibition held at the Cooper Hewitt, Smithsonian Design Museum in New York City from October 8, 2015 to September 11, 2016. The museum also hosted a presentation and conversation with John Lasseter on November 12, 2015 entitled "Design By Hand: Pixar's John Lasseter".
Pixar: 30 Years of Animation
Pixar celebrated its 30th anniversary in 2016 with the release of its seventeenth feature film, Finding Dory, and put together another milestone exhibition. The exhibition first opened at the Museum of Contemporary Art in Tokyo, Japan, from March 5, 2016 to May 29, 2016. It subsequently moved to the Nagasaki Prefectural Art Museum, the National Museum of History, and the Dongdaemun Design Plaza, before ending on March 5, 2018 at the Hong Kong Heritage Museum.
References
External links
List of the 40 founding employees of Pixar
Computer animation studios
1986 establishments in California
2006 mergers and acquisitions
American animation studios
American companies established in 1986
Cinema of the San Francisco Bay Area
Companies based in Emeryville, California
Disney acquisitions
Disney production studios
Entertainment companies based in California
Film production companies of the United States
Film studios
Mass media companies established in 1986
Pixar
The Walt Disney Studios |
1678806 | https://en.wikipedia.org/wiki/Think%20different | Think different | "Think different" is an advertising slogan used from 1997 to 2002 by Apple Computer, Inc., now named Apple Inc. The campaign was created by the Los Angeles office of advertising agency TBWA\Chiat\Day.
The slogan has been widely taken as a response to IBM's slogan "Think." It was used in a television advertisement, several print advertisements, and several TV promos for Apple products.
As of 2020, "Think different" was still printed on the back of the box of the iMac, and possibly elsewhere.
Development
In 1984, Apple's "1984" Super Bowl advertisement was created by advertising agency Chiat\Day. In 1986, CEO John Sculley replaced Chiat\Day with BBDO. In 1997, under CEO Gil Amelio, BBDO pitched to an internal marketing meeting at the then struggling Apple, a new brand campaign with the slogan "We're back." Reportedly everyone in the meeting expressed approval with the exception of the recently returned Jobs who said "the slogan was stupid because Apple wasn't [yet] back."
Jobs then invited three advertising agencies to present new ideas that reflected the philosophy he thought had to be reinforced within the company he had co-founded. Chiat\Day was one of them.
The script was written by Rob Siltanen with the participation of Lee Clow and many others on his creative team. The slogan "Think different" was created by Craig Tanimoto, an art director at Chiat\Day, who also contributed to the initial concept work. The look and feel of the print and outdoor work, and the photography used, were researched, curated, and visually developed by art and design director Jessica (Schulman) Edelstein, who, together with Lee Clow, met weekly with Steve Jobs and the team at Apple to hone the campaign in its many forms. Susan Alinsangan and Margaret (Midgett) Keene were also instrumental in developing the campaign further as it progressed and spread throughout the world. Great contributions were made by professionals in all agency departments, from account services to art buying to production, and by the contract negotiators and media buyers who secured key placements. The commercial's music was composed by Chip Jenkins for Elias Arts.
The full text of the various versions of this script was co-written by creative directors Rob Siltanen and Ken Segall, along with input from many on the team at the agency and at Apple. While Jobs thought the creative concept "brilliant", he originally hated the words of the television commercial, but then changed his mind. According to Rob Siltanen:
Craig Tanimoto is also credited with opting for "Think different" rather than "Think differently," which was considered but rejected by Lee Clow. Jobs insisted that he wanted "different" to be used as a noun, as in "think victory" or "think beauty". He specifically said that "think differently" wouldn't have the same meaning to him. He wanted to make it sound colloquial, like the phrase "think big".
Jobs was crucial to the selection of the historical subjects pictured in the campaign, many of whom had never been featured in advertising, or would never have agreed to appear for any other company. He enabled the selection and the speed of negotiation with them or their surviving estates. Some of the particular iconic subjects were chosen because of his personal relationships; he called the families of Jim Henson and John F. Kennedy and flew to New York City to visit Yoko Ono. For the television narration, he called Robin Williams, who was well known to be against appearing in advertising and whose wife refused to forward the call anyway; Tom Hanks was then considered, but Richard Dreyfuss, an Apple fan, was chosen.
Two versions of the narration in the television ad were created in the development process: one narrated by Jobs and one by Dreyfuss. Lee Clow argued that it would be "really powerful" for Jobs to narrate the piece, as a symbol of his return to the company and of reclaiming the Apple brand. On the morning of the first air date, Jobs decided to go with the Dreyfuss version, stating that it was about Apple, not about himself.
It was edited at Venice Beach Editorial, by Dan Bootzin, Chiat\Day's in-house editor, and post-produced by Hunter Conner.
Jobs said the following in a 1994 interview with the Santa Clara Valley Historical Association:
The Steve Jobs version of the ad was played at Apple's in-house memorial for him in 2011.
Formats
Television
Significantly shortened versions of the advertisement script were used in two television advertisements, known as "Crazy Ones", directed by Chiat\Day's Jennifer Golub who also shared the art director credit with Jessica Schulman Edelstein and Yvonne Smith.
The one-minute ad featured black-and-white footage of 17 iconic 20th-century personalities, in this order of appearance: Albert Einstein, Bob Dylan, Martin Luther King Jr., Richard Branson, John Lennon (with Yoko Ono), Buckminster Fuller, Thomas Edison, Muhammad Ali, Ted Turner, Maria Callas, Mahatma Gandhi, Amelia Earhart, Alfred Hitchcock, Martha Graham, Jim Henson (with Kermit the Frog), Frank Lloyd Wright, and Pablo Picasso. The advertisement ends with an image of a young girl opening her closed eyes, as if making a wish. The final clip is taken from the All Around The World version of the "Sweet Lullaby" music video, directed by Tarsem Singh; the young girl is Shaan Sahota, Singh's niece.
The thirty-second advertisement was a shorter version of the previous one, using 11 of the 17 personalities, but closed with Jerry Seinfeld, instead of the young girl. In order of appearance: Albert Einstein, Bob Dylan, Martin Luther King Jr., John Lennon, Martha Graham, Muhammad Ali, Alfred Hitchcock, Mahatma Gandhi, Jim Henson, Maria Callas, Pablo Picasso, and Jerry Seinfeld. This version aired only once, during the series finale of Seinfeld.
Another early Think different ad aired on February 4, 1998, months before Apple switched the colored Apple logo to solid white: it showed a snail slowly carrying an Intel Pentium II chip on its back, while claiming that the Power Macintosh G3 was twice as fast as Intel's Pentium II processor.
Print
Print advertisements from the campaign were published in many mainstream magazines such as Newsweek and Time. Their style was predominantly traditional, prominently featuring the company's computers or consumer electronics along with the slogan.
There was also another series of print ads that were more focused on brand image than on specific products. These featured a portrait of one historic figure, with a small Apple logo and the words "Think different" in one corner, with no reference to the company's products. Creative geniuses whose thinking and work actively changed their respective fields were honored, including Jimi Hendrix, Richard Clayderman, Miles Davis, Billy Graham, Bryan Adams, Cesar Chavez, John Lennon, Laurence Gartel, Mahatma Gandhi, Eleanor Roosevelt and others.
Posters
Promotional posters from the campaign were produced in small numbers in 24 x 36 inch sizes. They feature the portrait of one historic figure, with a small Apple logo and the words "Think different" in one corner. The original long version of the ad script appears on some of them. The posters were produced between 1997 and 1998.
There were at least 29 "Think different" posters created. The sets were as follows:
Set 1
Amelia Earhart
Alfred Hitchcock
Pablo Picasso
Mahatma Gandhi
Thomas Edison
Set 2
Maria Callas
Martha Graham
Joan Baez
Ted Turner
14th Dalai Lama (never officially released due to licensing issues and its politically sensitive nature)
Set 3
Jimi Hendrix
Miles Davis
Ansel Adams
Lucille Ball and Desi Arnaz
Bob Dylan (never officially released due to licensing issues)
Paul Rand
Set 4
Frank Sinatra
Richard Feynman
Jackie Robinson
Cesar Chavez
Set 5 (The Directors set, never officially released)
Charlie Chaplin
Francis Ford Coppola
Orson Welles
Frank Capra
John Huston
In addition, around the year 2000, Apple produced a set of ten 11x17 posters often referred to as The Educators Set, which was distributed through its education channels. Apple sent out boxes (the cover of which is a copy of the original "Crazy Ones" Think different poster) that each contained 3 packs (sealed in plastic) of 10 small or miniature Think different posters.
Educator Set
Albert Einstein
Amelia Earhart
Miles Davis
Jim Henson
Jane Goodall
Mahatma Gandhi
John Lennon and Yoko Ono
Cesar Chavez
James Watson
Pablo Picasso
During a special event held on October 14, 1998 at the Flint Center in Cupertino, California, a limited edition 11" x 14" softbound book was given to employees and affiliates of Apple Computer, Inc. to commemorate the first year of the ad campaign. The 50-page book contained a foreword by Steve Jobs, the text of the original Think different ad, and illustrations of many of the posters used in the campaign along with narratives describing each person.
Outdoor advertisement at MacWorld 2000 Tokyo, etc.
Akira Kurosawa
Issey Miyake
Osamu Tezuka
Akio Morita
Reception and influence
Upon release, the "Think different" Campaign proved to be an enormous success for Apple and TBWA\Chiat\Day. Critically acclaimed, the spot would garner numerous awards and accolades, including the 1998 Emmy Award for Best Commercial and the 2000 Grand Effie Award for most effective campaign in America.
In retrospect, the new ad campaign marked the beginning of Apple's re-emergence as a marketing powerhouse. In the years leading up to the ad Apple had lost market share to the Wintel ecosystem which offered lower prices, more software choices, and higher-performance CPUs. Worse for Apple's reputation was the high-profile failure of the Apple Newton, a billion-dollar project that proved to be a technical and commercial dud. The success of the "Think different" campaign, along with the return of Steve Jobs, bolstered the Apple brand and reestablished the "counter-culture" aura of its earlier days, setting the stage for the immensely successful iMac all-in-one personal computer and later the Mac OS X (now named macOS) operating system.
Revivals
Product packaging
Since late 2009, the box packaging specification sheet for iMac computers has included the following footnote:
Macintosh Think different.
In previous Macintosh packaging, Apple's website URL was printed below the specifications list.
The apparent explanation for this inconspicuous usage is that Apple wished to maintain its trademark registrations on both terms – in most jurisdictions, a company must show continued use of a trademark on its products in order to maintain registration, but neither trademark is widely used in the company's current marketing. This packaging was used as the required specimen of use when Apple filed to re-register "Think different" as a U.S. trademark in 2009.
macOS
Apple has continued to include portions of the "Crazy Ones" text as Easter eggs in a range of places in macOS. This includes the high-resolution icon for TextEdit introduced in Leopard, the "All My Files" Finder icon introduced in Lion, the high-resolution icon for Notes in Mountain Lion and Mavericks and on the new Color LCD Display preferences menu introduced for MacBook Pro with Retina Display.
Apple Color Emoji
Several emoji glyphs in Apple's Apple Color Emoji font contain portions of the text of "Crazy Ones", including 1F4CB 'Clipboard', 1F4C3 'Page with Curl', 1F4C4 'Page facing up', 1F4D1 'Bookmark Tabs' and 1FA99 'Coin'.
Apple.com
On at least five separate occasions, the Apple homepage featured images of notable figures not originally part of the campaign alongside the "Think different" slogan:
In 2001, when George Harrison died
In 2002, when Jimmy Carter won the Nobel Peace Prize
In 2003, when Gregory Hines died
In 2005, when Rosa Parks died
Similar portraits were also posted without the "Think different" text on at least seven additional occasions:
In 2007, when Al Gore received the Nobel Peace Prize
In 2010, when Jerry York died
In 2011, when Steve Jobs died
In 2013, when Nelson Mandela died
In 2014, when the Macintosh turned 30 on January 24, 2014
In 2014, when Robin Williams died
In 2016, when Muhammad Ali died
In 2019, when Abiy Ahmed won the Nobel Peace Prize
Other media
A portion of the text is recited in the trailer for Jobs, a biographical drama film of Steve Jobs' life. Ashton Kutcher, as Jobs, is shown recording the audio for the trailer in the film's final scene.
The Richard Dreyfuss audio version is used in the introduction of the first episode of The Crazy Ones, a podcast provided by Ricochet, hosted by Owen Brennan and Patrick Jones.
Parodies
The Simpsons episode "Mypods and Boomsticks" pokes fun at the slogan, writing it "Think differently", which is grammatically correct.
For Steam's release on Mac OS X, Valve released a Left 4 Dead–themed advertisement featuring Francis, whose in-game spoken lines involve him hating various things. The given slogan is "I hate different." Subsequently, for Team Fortress 2's release on Mac, a trailer was released which concludes with "Think bullets".
Aiura parodies this through the use of "Think Crabbing" in its opening.
In the musical Nerds, which depicts a fictionalized account of the lives of Steve Jobs and Bill Gates, there is a song titled "Think Different" in which Jobs hallucinates an anthropomorphized Oracle dancing with him and urging him to fight back against the Microsoft empire.
In the animated show Gravity Falls, in the episode "A Tale of Two Stans", a poster with the words "Ponder Alternatively" and a strawberry colored in a fashion similar to the old Apple logo appears in the background.
In the movie Monsters, Inc., an Easter egg at the end of the film, a magazine showing a computer captioned "Scare Different", references the slogan.
See also
1984 Super Bowl ad
AppleMasters
The organization of the artist
Think (IBM)
References
External links
Steve Jobs narrated version (video)
American advertising slogans
1997 neologisms
Apple Inc. advertising
American television commercials
Advertising campaigns
1990s television commercials |
13347496 | https://en.wikipedia.org/wiki/Spin-up | Spin-up | Spin-up refers to the process of a hard disk drive or optical disc drive
accelerating its platters or inserted optical disc from a stopped state to an operational speed. The period of time taken by the drive to perform this process is referred to as its spin-up time, the average of which is reported by hard disks as a S.M.A.R.T. attribute. The required operational speed depends on the design of the disk drive. Typical speeds of hard disks have been 2400, 3600, 4200, 5400, 7200, 10000 and 15000 revolutions per minute (RPM). Achieving such speeds can require a significant portion of the available power budget of a computer system, and so application of power to the disks must be carefully controlled. Operational speed of optical disc drives may vary depending on type of disc and mode of operation (see Constant linear velocity).
Spin-up of hard disks generally occurs at the very beginning of the computer boot process. However, most modern computers have the ability to stop a drive while the machine is already running as a means of energy conservation or noise reduction. If a machine is running and requires access to a stopped drive, then a delay is incurred while the drive is spun up. The length of this delay also depends on the type of drive mechanism used.
A drive in the process of being spun up needs more energy input than a drive that is already spinning at operation speeds, since more effort is required for the electric motor to accelerate the platters, as opposed to maintaining their speed.
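As a rough, illustrative calculation of the energy involved, the rotational kinetic energy that the spindle motor must supply during spin-up can be estimated from the platter's moment of inertia. The following Python sketch uses assumed figures (platter mass, radius, spin-up time) that do not describe any particular drive; real spin-up power draw is higher still because of motor losses and friction.

import math

# All figures below are illustrative assumptions, not specifications of a real drive.
platter_mass_kg = 0.02      # assumed mass of a single 3.5-inch platter (about 20 g)
platter_radius_m = 0.0475   # assumed platter radius (about 47.5 mm)
target_rpm = 7200           # a common operational speed
spinup_seconds = 5.0        # assumed time to reach operational speed

# Moment of inertia of a uniform solid disc: I = (1/2) * m * r^2
inertia = 0.5 * platter_mass_kg * platter_radius_m ** 2

# Angular velocity in radians per second
omega = target_rpm * 2 * math.pi / 60

# Rotational kinetic energy: E = (1/2) * I * omega^2
energy_joules = 0.5 * inertia * omega ** 2

# Average extra power needed just for acceleration over the spin-up period
extra_power_watts = energy_joules / spinup_seconds

print(f"Kinetic energy at {target_rpm} RPM: {energy_joules:.2f} J")
print(f"Average extra power over {spinup_seconds:.0f} s spin-up: {extra_power_watts:.2f} W")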
Staggered spin-up
In computers with multiple hard drives, a method called staggered spin-up can be employed to prevent the excessive power consumption of spin-up, which could otherwise result in a power shortage. Power consumption during spin-up is often the highest power draw of all of the different operating states of a hard disk drive. Staggered spin-up typically starts one drive at a time, either waiting for the drive to signal it is ready or allowing a predefined period of time to pass before starting the next drive. If the power supply is able to deliver sufficient current to start multiple drives at a time, starting drives in small groups rather than one by one is also common.
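The scheduling idea can be illustrated with a minimal Python sketch. The current figures and delays below are made-up assumptions used only to show the effect of staggering on the worst-case combined current drawn from a shared 12 V rail; they are not taken from any standard or drive specification.

# Minimal sketch comparing simultaneous vs. staggered spin-up on a shared 12 V rail.
# Current figures and delays are illustrative assumptions, not from any specification.
SPINUP_CURRENT_A = 2.0    # assumed peak 12 V current of one drive while spinning up
IDLE_CURRENT_A = 0.6      # assumed 12 V current once a drive is at speed
SPINUP_TIME_S = 5.0       # assumed time a drive needs to reach operational speed
STAGGER_DELAY_S = 6.0     # assumed delay between successive drive starts

def peak_rail_current(num_drives: int, stagger: bool) -> float:
    """Worst-case combined 12 V current for num_drives drives, sampled over time."""
    peak = 0.0
    horizon = (STAGGER_DELAY_S * num_drives if stagger else 0.0) + SPINUP_TIME_S
    t = 0.0
    while t <= horizon:
        total = 0.0
        for i in range(num_drives):
            start = i * STAGGER_DELAY_S if stagger else 0.0
            if start <= t < start + SPINUP_TIME_S:
                total += SPINUP_CURRENT_A   # drive i is still accelerating
            elif t >= start + SPINUP_TIME_S:
                total += IDLE_CURRENT_A     # drive i has reached operational speed
        peak = max(peak, total)
        t += 0.1
    return peak

for n in (4, 8):
    print(f"{n} drives, simultaneous start: {peak_rail_current(n, stagger=False):.1f} A peak")
    print(f"{n} drives, staggered start:    {peak_rail_current(n, stagger=True):.1f} A peak")

With these assumed figures, eight drives started simultaneously draw a combined peak of 16 A, whereas staggering keeps the peak to just over 6 A, close to the steady-state draw plus a single spin-up.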
Staggered Spin-up (SSU) and Power-Up In Standby (PUIS) are different features that can help control spin-up of multiple drives within a computer system or a disk subsystem. Both are defined in the ATA Specifications Standards. See Serial ATA for more information.
One feature, called Power-up in standby (PUIS) (also called PM2) is used on some Serial ATA (SATA) and Parallel ATA (sometimes called PATA or IDE) hard disk drives. PUIS requires BIOS and/or driver support to use. When power is applied to the hard disk drive, the drive will not spin-up until a PUIS Spin-Up command is issued. The computer system BIOS or RAID controller must issue the command to tell the drive(s) to spin-up before they can be accessed. PUIS can be enabled by tools such as hdparm for drives which support this feature.
Another feature, called Staggered Spin-up (SSU), is used on most Serial ATA (SATA) hard disk drives. It is more common than Power-Up In Standby (PUIS) because it does not require any special commands to get the drive to spin up. The drive electronics wait for the SATA Data Phy (physical interface) to become active before spinning up the drive. The computer system BIOS and/or RAID controller or RAID driver can delay and control when the different drives will spin up.
With Western Digital hard disk drives, Pin 11 of the SATA Power Interface controls whether Staggered Spin-Up (SSU) is enabled or not. Pin 11 is also used as an activity LED connection. When the drive is initially powered on, the drive senses whether Pin 11 is left floating (high or '1' logic state) or grounded (low or '0' logic state). SSU is disabled when Pin 11 is grounded. When disabled, the drive will spin-up as soon as power is applied to it. SSU is enabled when Pin 11 is left floating or driven high (high or '1' logic state). The drive will not spin-up until the SATA Phy Interface becomes active with a connection to a SATA controller or SATA RAID controller. The SATA or SATA RAID controller can control when and how many drives can be spun-up. SSU and PUIS are features that are configured in software or firmware by the manufacturer.
Information from the Fujitsu Serial ATA Interface for Mobile Hard Disk Drives whitepaper:
Note that staggered spin-up of disks is a feature of many multi-drive systems using SATA and RAID. It is not typically used on mobile platforms.
References
Booting
Hard disk drives |
3469522 | https://en.wikipedia.org/wiki/Asymptote%20%28vector%20graphics%20language%29 | Asymptote (vector graphics language) | Asymptote is a descriptive vector graphics language — developed by Andy Hammerlindl, John C. Bowman (University of Alberta), and Tom Prince — which provides a natural coordinate-based framework for technical drawing. Asymptote runs on all major platforms (Unix, Mac OS, Microsoft Windows). It is free software, available under the terms of the GNU Lesser General Public License (LGPL).
Syntax and notable features
Asymptote typesets labels and equations with LaTeX, producing high-quality PostScript, PDF, SVG, or 3D PRC output. It is inspired by MetaPost, but has a C++-like syntax. It provides a language for typesetting mathematical figures, just as TeX/LaTeX provides a language for typesetting equations. It is mathematically oriented (e.g. rotation of vectors by complex multiplication), and uses the simplex method and deferred drawing to solve overall size constraint issues between fixed-sized objects (labels and arrowheads) and objects that should scale with figure size.
Asymptote fully generalizes MetaPost path construction algorithms to three dimensions, and compiles commands into virtual machine code for speed without sacrificing portability. High-level graphics commands are implemented in the Asymptote language itself, allowing them to be easily tailored to specific applications. It also appears to be the first software package to lift TeX into three dimensions.
This allows Asymptote to be used as a 3D vector file format.
Asymptote is also notable for having a graphical interface, xasy.py, coded in Python with the Tk widget set; this allows an inexperienced user to quickly draw up objects and save them as .asy source code which can then be examined or edited by hand.
The program's syntax was originally described by using a yacc compatible grammar.
Application examples
The following source code draws a graph of the Heaviside step function in the Asymptote language.
import graph;
import settings;
outformat="pdf";
size(300,300);
// Function.
real[] x1 = {-1.5,0};
real[] y1 = {0,0};
real[] x2 = {0,1.5};
real[] y2 = {1,1};
draw(graph(x1,y1),red+2);
draw(graph(x2,y2),red+2);
draw((0,0)--(0,1),red+1.5+linetype("4 4"));
fill( circle((0,1),0.035), red);
filldraw( circle((0,0),0.03), white, red+1.5);
// Axes.
xaxis( Label("$x$"), Ticks(new real[]{-1,-0.5,0.5,1}), Arrow);
yaxis( Label("$y$"), Ticks(new real[]{0.5,1}), Arrow, ymin=-0.18, ymax=1.25);
// Origin.
labelx("$O$",0,SW);
The code above yields the following pdf output.
See also
GeoGebra – free Dynamic Mathematics program with Asymptote export
PSTricks
TikZ
PyX
References
External links
Asymptote official website
Philippe Ivaldi's extensive Asymptote gallery
Asymptote: Art of Problem Solving Wiki
Art of Problem Solving Forum
Programming with Asymptote (in Dutch)
An Asymptote Tutorial by Charles Staats
Free educational software
Free graphics software
Free software programmed in C++
Linux TeX software
TeX SourceForge projects
Vector graphics |
226147 | https://en.wikipedia.org/wiki/Dell%20Inspiron | Dell Inspiron | The Inspiron ( , formerly stylized as inspiron) is a line of consumer-oriented laptop computers, desktop computers and all-in-one computers sold by Dell. The Inspiron range mainly competes against computers such as Acer's Aspire, Asus' Transformer Book Flip, VivoBook and Zenbook, HP's Pavilion, Stream and ENVY, Lenovo's IdeaPad, Samsung's Sens and Toshiba's Satellite.
Types
The Dell Inspiron lineup consists of laptops, desktops and all-in-ones.
Dell Inspiron laptop computers
Dell Inspiron desktop computers
Dell Inspiron All-in-One
Discontinued:
Dell Inspiron Mini Series netbooks (2008-2010)
See also
Dell's Home Office/Consumer class product lines:
Studio (mainstream desktop and laptop computers)
XPS (high-end desktop and notebook computers)
Studio XPS (high-end design-focus of XPS systems and extreme multimedia capability)
Alienware (high-performance gaming systems)
Adamo (high-end luxury subnotebook)
Dell Business/Corporate class product lines:
Vostro (office/small business desktop and notebook systems)
n Series (desktop and notebook computers shipped with Linux or FreeDOS installed)
Latitude (business-focused notebooks)
OptiPlex (business-focused workstations)
Precision (high performance workstations)
References
External links
Dell Inspiron Drivers
Dell laptops
Dell personal computers
Convertible laptops
Consumer electronics brands
Computer-related introductions in 1990 |
20756850 | https://en.wikipedia.org/wiki/Collective%20intelligence | Collective intelligence | Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making. The term appears in sociobiology, political science and in context of mass peer review and crowdsourcing applications. It may involve consensus, social capital and formalisms such as voting systems, social media and other means of quantifying mass activity. Collective IQ is a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed to bacteria and animals.
It can be understood as an emergent property from the synergies among: 1) data-information-knowledge; 2) software-hardware; and 3) individuals (those with new insights as well as recognized authorities) that continually learns from feedback to produce just-in-time knowledge for better decisions than these three elements acting alone; or more narrowly as an emergent property between people and ways of processing information. This notion of collective intelligence is referred to as "symbiotic intelligence" by Norman Lee Johnson. The concept is used in sociology, business, computer science and mass communications: it also appears in science fiction. Pierre Lévy defines collective intelligence as, "It is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills. I'll add the following indispensable characteristic to this definition: The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities." According to researchers Pierre Lévy and Derrick de Kerckhove, it refers to capacity of networked ICTs (Information communication technologies) to enhance the collective pool of social knowledge by simultaneously expanding the extent of human interactions. A broader definition was provided by Geoff Mulgan in a series of lectures and reports from 2006 onwards and in the book Big Mind which proposed a framework for analysing any thinking system, including both human and machine intelligence, in terms of functional elements (observation, prediction, creativity, judgement etc.), learning loops and forms of organisation. The aim was to provide a way to diagnose, and improve, the collective intelligence of a city, business, NGO or parliament.
Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective. According to Eric S. Raymond (1998) and JC Herz (2005), open source intelligence will eventually generate superior outcomes to knowledge generated by proprietary software developed within corporations (Flew 2008). Media theorist Henry Jenkins sees collective intelligence as an 'alternative source of media power', related to convergence culture. He draws attention to education and the way people are learning to participate in knowledge cultures outside formal learning settings. Henry Jenkins criticizes schools which promote 'autonomous problem solvers and self-contained learners' while remaining hostile to learning through the means of collective intelligence. Both Pierre Lévy (2007) and Henry Jenkins (2008) support the claim that collective intelligence is important for democratization, as it is interlinked with knowledge-based culture and sustained by collective idea sharing, and thus contributes to a better understanding of diverse society.
Similar to the g factor (g) for general individual intelligence, a new scientific understanding of collective intelligence aims to extract a general collective intelligence factor c factor for groups indicating a group's ability to perform a wide range of tasks. Definition, operationalization and statistical methods are derived from g. Similarly as g is highly interrelated with the concept of IQ, this measurement of collective intelligence can be interpreted as intelligence quotient for groups (Group-IQ) even though the score is not a quotient per se. Causes for c and predictive validity are investigated as well.
Writers who have influenced the idea of collective intelligence include Francis Galton, Douglas Hofstadter (1979), Peter Russell (1983), Tom Atlee (1993), Pierre Lévy (1994), Howard Bloom (1995), Francis Heylighen (1995), Douglas Engelbart, Louis Rosenberg, Cliff Joslyn, Ron Dembo, Gottfried Mayer-Kress (2003), and Geoff Mulgan.
History
The concept (although not so named) originated in 1785 with the Marquis de Condorcet, whose "jury theorem" states that if each member of a voting group is more likely than not to make a correct decision, the probability that the majority vote of the group is the correct decision increases with the number of members of the group (see Condorcet's jury theorem). Many theorists have interpreted Aristotle's statement in the Politics that "a feast to which many contribute is better than a dinner provided out of a single purse" to mean that just as many may bring different dishes to the table, so in a deliberation many may contribute different pieces of information to generate a better decision. Recent scholarship, however, suggests that this was probably not what Aristotle meant but is a modern interpretation based on what we now know about team intelligence.
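The theorem's core arithmetic can be illustrated with a short Python sketch; the individual competence p = 0.6 and the group sizes used below are arbitrary assumptions chosen purely for illustration.

from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters is correct,
    when each voter is correct with probability p."""
    needed = n // 2 + 1
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(needed, n + 1))

p = 0.6  # assumed probability that any single voter decides correctly
for n in (1, 3, 11, 51, 201):  # odd group sizes, so a strict majority always exists
    print(f"n = {n:3d}: P(majority correct) = {majority_correct(n, p):.3f}")

With these assumptions the probability rises from 0.60 for a single voter to well above 0.99 for a group of 201, which is the behaviour the theorem describes.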
A precursor of the concept is found in entomologist William Morton Wheeler's observation that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism (1910). Wheeler saw this collaborative process at work in ants that acted like the cells of a single beast he called a superorganism.
In 1912 Émile Durkheim identified society as the sole source of human logical thought. He argued in "The Elementary Forms of Religious Life" that society constitutes a higher intelligence because it transcends the individual over space and time. Other antecedents are Vladimir Vernadsky and Pierre Teilhard de Chardin's concept of "noosphere" and H.G. Wells's concept of "world brain" (see also the term "global brain"). Peter Russell, Elisabet Sahtouris, and Barbara Marx Hubbard (originator of the term "conscious evolution") are inspired by the visions of a noosphere – a transcendent, rapidly evolving collective intelligence – an informational cortex of the planet. The notion has more recently been examined by the philosopher Pierre Lévy. In a 1962 research report, Douglas Engelbart linked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone". In 1994, he coined the term 'collective IQ' as a measure of collective intelligence, to focus attention on the opportunity to significantly raise collective IQ in business and society.
The idea of collective intelligence also forms the framework for contemporary democratic theories often referred to as epistemic democracy. Epistemic democratic theories refer to the capacity of the populace, either through deliberation or aggregation of knowledge, to track the truth and relies on mechanisms to synthesize and apply collective intelligence.
Collective intelligence was introduced into the machine learning community in the late 20th century, and matured into a broader consideration of how to design "collectives" of self-interested adaptive agents to meet a system-wide goal. This was related to single-agent work on "reward shaping" and has been taken forward by numerous researchers in the game theory and engineering communities.
Dimensions
Howard Bloom has discussed mass behavior – collective behavior from the level of quarks to the level of bacterial, plant, animal, and human societies. He stresses the biological adaptations that have turned most of this earth's living beings into components of what he calls "a learning machine". In 1986 Bloom combined the concepts of apoptosis, parallel distributed processing, group selection, and the superorganism to produce a theory of how collective intelligence works. Later he showed how the collective intelligences of competing bacterial colonies and human societies can be explained in terms of computer-generated "complex adaptive systems" and the "genetic algorithms", concepts pioneered by John Holland.
Bloom traced the evolution of collective intelligence to our bacterial ancestors 1 billion years ago and demonstrated how a multi-species intelligence has worked since the beginning of life. Ant societies exhibit more intelligence, in terms of technology, than any other animal except for humans and co-operate in keeping livestock, for example aphids for "milking". Leaf cutters care for fungi and carry leaves to feed the fungi.
David Skrbina cites the concept of a 'group mind' as being derived from Plato's concept of panpsychism (that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a 'group mind' as articulated by Thomas Hobbes in "Leviathan" and Fechner's arguments for a collective consciousness of mankind. He cites Durkheim as the most notable advocate of a "collective consciousness" and Teilhard de Chardin as a thinker who has developed the philosophical implications of the group mind.
Tom Atlee focuses primarily on humans and on work to upgrade what Howard Bloom calls "the group IQ". Atlee feels that collective intelligence can be encouraged "to overcome 'groupthink' and individual cognitive bias in order to allow a collective to cooperate on one process – while achieving enhanced intellectual performance." George Pór defined the collective intelligence phenomenon as "the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration." Atlee and Pór state that "collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action". Their approach is rooted in scientific community metaphor.
The term group intelligence is sometimes used interchangeably with the term collective intelligence. Anita Woolley presents Collective intelligence as a measure of group intelligence and group creativity. The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction. The features of composition that lead to increased levels of collective intelligence in groups include criteria such as higher numbers of women in the group as well as increased diversity of the group.
Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory and artificial intelligence have something to offer. Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts. Maximizing collective intelligence relies on the ability of an organization to accept and develop "The Golden Suggestion", which is any potentially useful input from any member. Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.
Robert David Steele Vivas in The New Craft of Intelligence portrayed all citizens as "intelligence minutemen," drawing only on legal and ethical sources of information, able to create a "public intelligence" that keeps public officials and corporate managers honest, turning the concept of "national intelligence" (previously concerned about spies and secrecy) on its head.
According to Don Tapscott and Anthony D. Williams, collective intelligence is mass collaboration. In order for this concept to happen, four principles need to exist:
- Openness - Sharing ideas and intellectual property: though these resources provide an edge over competitors, more benefits accrue from allowing others to share ideas and gain significant improvement and scrutiny through collaboration.
- Peering - Horizontal organization as with the 'opening up' of the Linux program where users are free to modify and develop it provided that they make it available for others. Peering succeeds because it encourages self-organization – a style of production that works more effectively than hierarchical management for certain tasks.
- Sharing - Companies have started to share some ideas while maintaining some degree of control over others, like potential and critical patent rights. Limiting all intellectual property shuts out opportunities, while sharing some expands markets and brings out products faster.
- Acting Globally - The advancement in communication technology has prompted the rise of global companies at low overhead costs. The internet is widespread, therefore a globally integrated company has no geographical boundaries and may access new markets, ideas and technology.
Collective intelligence factor c
A new scientific understanding of collective intelligence defines it as a group's general ability to perform a wide range of tasks. Its definition, operationalization and statistical methods are similar to the psychometric approach to general individual intelligence, in which an individual's performance on a given set of cognitive tasks is used to measure general cognitive ability, indicated by the general intelligence factor g extracted via factor analysis. In the same vein as g serves to display between-individual performance differences on cognitive tasks, collective intelligence research aims to find a parallel intelligence factor for groups, the 'c factor' (also called the 'collective intelligence factor' (CI)), displaying between-group differences on task performance. The collective intelligence score is then used to predict how the same group will perform on any other similar task in the future. Tasks here refer to mental or intellectual tasks performed by small groups, even though the concept is hoped to be transferable to other performances and to any groups or crowds, from families to companies and even whole cities. Since individuals' g factor scores are highly correlated with full-scale IQ scores, which are in turn regarded as good estimates of g, this measurement of collective intelligence can also be seen as an intelligence indicator or quotient for a group (Group-IQ), parallel to an individual's intelligence quotient (IQ), even though the score is not a quotient per se.
Mathematically, c and g are both variables summarizing positive correlations among different tasks supposing that performance on one task is comparable with performance on other similar tasks. c thus is a source of variance among groups and can only be considered as a group's standing on the c factor compared to other groups in a given relevant population. The concept is in contrast to competing hypotheses including other correlational structures to explain group intelligence, such as a composition out of several equally important but independent factors as found in individual personality research.
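A minimal sketch of how such a general factor can be extracted from a groups-by-tasks score matrix is shown below in Python; it uses synthetic data and the first principal component of the correlation matrix as a simple stand-in for a full factor analysis, so it only illustrates the logic rather than reproducing any published method.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 50 groups, 5 tasks, each task loading on one latent group ability.
n_groups, n_tasks = 50, 5
latent_c = rng.normal(size=n_groups)              # simulated underlying group ability
scores = np.empty((n_groups, n_tasks))
for t in range(n_tasks):
    scores[:, t] = 0.7 * latent_c + rng.normal(size=n_groups)

# Eigendecomposition of the task-score correlation matrix
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                 # sort eigenvalues in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
if eigvecs[:, 0].sum() < 0:                       # fix the arbitrary sign of the first eigenvector
    eigvecs[:, 0] *= -1

print(f"First factor explains {eigvals[0] / eigvals.sum():.0%} of the variance across tasks")

# Factor scores per group: projection of standardized task scores onto the first eigenvector
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
c_estimate = z @ eigvecs[:, 0]
print("Correlation of estimated c with the simulated latent ability:",
      round(float(np.corrcoef(c_estimate, latent_c)[0, 1]), 2))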
Besides, this scientific idea also aims to explore the causes affecting collective intelligence, such as group size, collaboration tools or group members' interpersonal skills. The MIT Center for Collective Intelligence, for instance, announced the detection of The Genome of Collective Intelligence as one of its main goals aiming to develop a taxonomy of organizational building blocks, or genes, that can be combined and recombined to harness the intelligence of crowds.
Causes
Individual intelligence is shown to be genetically and environmentally influenced. Analogously, collective intelligence research aims to explore reasons why certain groups perform more intelligently than other groups given that c is just moderately correlated with the intelligence of individual group members. According to Woolley et al.'s results, neither team cohesion nor motivation or satisfaction is correlated with c. However, they claim that three factors were found as significant correlates: the variance in the number of speaking turns, group members' average social sensitivity and the proportion of females. All three had similar predictive power for c, but only social sensitivity was statistically significant (b=0.33, P=0.05).
The number of speaking turns indicates that "groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking". Hence, giving multiple team members the chance to speak up made a group more intelligent.
Group members' social sensitivity was measured via the Reading the Mind in the Eyes Test (RME) and correlated .26 with c. In this test, participants are asked to detect the thinking or feeling expressed in other people's eyes as presented in pictures, assessed in a multiple-choice format. The test aims to measure people's theory of mind (ToM), also called 'mentalizing' or 'mind reading', which refers to the ability to attribute mental states, such as beliefs, desires or intents, to other people, and to the extent to which people understand that others have beliefs, desires, intentions or perspectives different from their own. RME is a ToM test for adults that shows sufficient test-retest reliability and consistently differentiates control groups from individuals with functional autism or Asperger Syndrome. It is one of the most widely accepted and well-validated ToM tests for adults. ToM can be regarded as an associated subset of skills and abilities within the broader concept of emotional intelligence.
The proportion of females as a predictor of c was largely mediated by social sensitivity (Sobel z = 1.93, P = 0.03), which is in line with previous research showing that women score higher on social sensitivity tests. While a mediation, statistically speaking, clarifies the mechanism underlying the relationship between a dependent and an independent variable, Woolley agreed in an interview with the Harvard Business Review that these findings say that groups of women are smarter than groups of men. However, she qualifies this, stating that what actually matters is the high social sensitivity of group members.
It is theorized that the collective intelligence factor c is an emergent property resulting from bottom-up as well as top-down processes. Here, bottom-up processes cover aggregated group-member characteristics. Top-down processes cover group structures and norms that influence a group's way of collaborating and coordinating.
Processes
Top-down processes
Top-down processes cover group interaction, such as structures, processes, and norms. An example of such top-down processes is conversational turn-taking. Research further suggests that collectively intelligent groups communicate more in general as well as more equally; the same applies to participation, and this has been shown both for face-to-face groups and for online groups communicating only in writing.
Bottom-up processes
Bottom-up processes include group composition, namely the characteristics of group members which are aggregated to the team level. An example of such bottom-up processes is the average social sensitivity or the average and maximum intelligence scores of group members. Furthermore, collective intelligence was found to be related to a group's cognitive diversity including thinking styles and perspectives. Groups that are moderately diverse in cognitive style have higher collective intelligence than those that are very similar in cognitive style or very different. Consequently, groups where members are too similar to each other lack the variety of perspectives and skills needed to perform well. On the other hand, groups whose members are too different seem to have difficulty communicating and coordinating effectively.
Serial vs Parallel processes
For most of human history, collective intelligence was confined to small tribal groups in which opinions were aggregated through real-time parallel interactions among members. In modern times, mass communication, mass media, and networking technologies have enabled collective intelligence to span massive groups, distributed across continents and time-zones. To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time. In engineering, aggregating many engineering decisions allows for identifying typical good designs. While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, it was found that the first vote cast in a serialized voting system can distort the final result by 34%.
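A minimal Python sketch of this distortion mechanism is given below as a toy simulation, not a reproduction of the cited study: some voters copy the option that is currently leading instead of judging independently, so the very first vote shifts the final tally. The competence, herding probability and group size are made-up parameters.

import random

def serial_vote(n_voters, p_correct=0.6, herd_prob=0.4, first_vote="correct", seed=0):
    """Share of 'correct' votes when voters arrive one at a time and may copy the leader."""
    rng = random.Random(seed)
    tally = {"correct": 0, "incorrect": 0}
    for i in range(n_voters):
        if i == 0:
            choice = first_vote                          # the first vote is fixed by the experiment
        elif rng.random() < herd_prob and tally["correct"] != tally["incorrect"]:
            # Herding: copy whichever option currently leads the running tally
            choice = "correct" if tally["correct"] > tally["incorrect"] else "incorrect"
        else:
            # Independent judgment, correct with probability p_correct
            choice = "correct" if rng.random() < p_correct else "incorrect"
        tally[choice] += 1
    return tally["correct"] / n_voters

runs = 2000
for first in ("correct", "incorrect"):
    avg = sum(serial_vote(100, first_vote=first, seed=s) for s in range(runs)) / runs
    print(f"first vote {first}: average share of correct votes = {avg:.2f}")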
To address the problems of serialized aggregation of input among large-scale groups, recent advancements in collective intelligence have worked to replace serialized votes, polls, and markets with parallel systems such as "human swarms" modeled after synchronous swarms in nature. Based on the natural process of swarm intelligence, these artificial swarms of networked humans enable participants to work together in parallel to answer questions and make predictions as an emergent collective intelligence. In one high-profile example, CBS Interactive challenged a human swarm to predict the Kentucky Derby. The swarm correctly predicted the first four horses, in order, defying 542–1 odds and turning a $20 bet into $10,800.
The value of parallel collective intelligence was demonstrated in medical applications by researchers at Stanford University School of Medicine and Unanimous AI in a set of published studies wherein groups of human doctors were connected by real-time swarming algorithms and tasked with diagnosing chest x-rays for the presence of pneumonia. When working together as "human swarms," the groups of experienced radiologists demonstrated a 33% reduction in diagnostic errors as compared to traditional methods.
Evidence
Woolley, Chabris, Pentland, Hashmi, & Malone (2010), the originators of this scientific understanding of collective intelligence, found a single statistical factor for collective intelligence in their research across 192 groups with people randomly recruited from the public. In Woolley et al.'s two initial studies, groups worked together on different tasks from the McGrath Task Circumplex, a well-established taxonomy of group tasks. Tasks were chosen from all four quadrants of the circumplex and included visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. The results on these tasks were then used to conduct a factor analysis. Both studies showed support for a general collective intelligence factor c underlying differences in group performance, with an initial eigenvalue accounting for 43% (44% in study 2) of the variance, whereas the next factor accounted for only 18% (20%). That fits the range normally found in research on the general individual intelligence factor g, which typically accounts for 40% to 50% of between-individual performance differences on cognitive tests.
Afterwards, a more complex task was solved by each group to determine whether c factor scores predict performance on tasks beyond the original test. Criterion tasks were playing checkers (draughts) against a standardized computer in the first study and a complex architectural design task in the second. In a regression analysis using both the individual intelligence of group members and c to predict performance on the criterion tasks, c had a significant effect, but average and maximum individual intelligence did not. While average (r=0.15, P=0.04) and maximum intelligence (r=0.19, P=0.008) of individual group members were moderately correlated with c, c was still a much better predictor of the criterion tasks. According to Woolley et al., this supports the existence of a collective intelligence factor c, because it demonstrates an effect over and beyond group members' individual intelligence and thus that c is more than just the aggregation of the individual IQs or the influence of the group member with the highest IQ.
Engel et al. (2014) replicated Woolley et al.'s findings using an accelerated battery of tasks, with the first factor in the factor analysis explaining 49% of the between-group variance in performance and the following factors explaining less than half of this amount. Moreover, they found a similar result for groups working together online and communicating only via text, and confirmed the role of female proportion and social sensitivity in causing collective intelligence in both cases. Similarly to Woolley et al., they also measured social sensitivity with the RME, which is actually meant to measure people's ability to detect mental states in other people's eyes. The online collaborating participants, however, neither knew nor saw each other at all. The authors conclude that scores on the RME must be related to a broader set of social reasoning abilities than only drawing inferences from other people's eye expressions.
A collective intelligence factor c in the sense of Woolley et al. was further found in groups of MBA students working together over the course of a semester, in online gaming groups as well as in groups from different cultures and groups in different contexts in terms of short-term versus long-term groups. None of these investigations considered team members' individual intelligence scores as control variables.
Note as well that the field of collective intelligence research is quite young and published empirical evidence is still relatively scarce. However, various proposals and working papers are in progress or already completed but presumably still undergoing scholarly peer review.
Predictive validity
Next to predicting a group's performance on more complex criterion tasks as shown in the original experiments, the collective intelligence factor c was also found to predict group performance in diverse tasks in MBA classes lasting over several months. Thereby, highly collectively intelligent groups earned significantly higher scores on their group assignments although their members did not do any better on other individually performed assignments. Moreover, highly collective intelligent teams improved performance over time suggesting that more collectively intelligent teams learn better. This is another potential parallel to individual intelligence where more intelligent people are found to acquire new material quicker.
Individual intelligence can be used to predict plenty of life outcomes from school attainment and career success to health outcomes and even mortality. Whether collective intelligence is able to predict other outcomes besides group performance on mental tasks has still to be investigated.
Potential connections to individual intelligence
Gladwell (2008) showed that the relationship between individual IQ and success works only to a certain point and that additional IQ points over an estimate of IQ 120 do not translate into real-life advantages. Whether a similar threshold exists for Group-IQ, or whether the advantages are linear and unbounded, has yet to be explored. Similarly, further research is needed on possible connections between individual and collective intelligence in many other potentially transferable aspects of individual intelligence, such as development over time or the question of improving intelligence. Whereas it is controversial whether human intelligence can be enhanced via training, a group's collective intelligence potentially offers simpler opportunities for improvement by exchanging team members or implementing structures and technologies. Moreover, social sensitivity was found to be, at least temporarily, improvable by reading literary fiction as well as watching drama movies. To what extent such training ultimately improves collective intelligence through social sensitivity remains an open question.
There are further, more advanced concepts and factor models attempting to explain individual cognitive ability, including the categorization of intelligence into fluid and crystallized intelligence and the hierarchical model of intelligence differences. Comparable supplementary explanations and conceptualizations of the factor structure of collective intelligence beyond a general c factor, though, are still missing.
Controversies
Other scholars explain team performance by aggregating team members' general intelligence to the team level instead of building an own overall collective intelligence measure. Devine and Philips (2001) showed in a meta-analysis that mean cognitive ability predicts team performance in laboratory settings (.37) as well as field settings (.14) – note that this is only a small effect. Suggesting a strong dependence on the relevant tasks, other scholars showed that tasks requiring a high degree of communication and cooperation are found to be most influenced by the team member with the lowest cognitive ability. Tasks in which selecting the best team member is the most successful strategy, are shown to be most influenced by the member with the highest cognitive ability.
Since Woolley et al.'s results do not show any influence of group satisfaction, group cohesiveness, or motivation, they, at least implicitly, challenge these concepts regarding the importance for group performance in general and thus contrast meta-analytically proven evidence concerning the positive effects of group cohesion, motivation and satisfaction on group performance.
It is also noteworthy that the researchers involved in the confirming findings overlap considerably with each other and with the authors who participated in the original study led by Anita Woolley.
Alternative mathematical techniques
Computational collective intelligence
In 2001, Tadeusz (Tad) Szuba from the AGH University in Poland proposed a formal model for the phenomenon of collective intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.
In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic. They are displaced quasi-randomly through interaction with their environment, combined with their intended displacements. Their interaction in abstract computational space creates a multi-thread inference process which we perceive as collective intelligence; thus, a non-Turing model of computation is used. This theory allows a simple formal definition of collective intelligence as a property of the social structure and seems to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective intelligence, considered as a specific computational process, provides a straightforward explanation of several social phenomena. For this model of collective intelligence, the formal definition of IQS (IQ Social) was proposed, defined as "the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure". While IQS seems to be computationally hard, modeling the social structure in terms of a computational process as described above gives a chance for approximation. Prospective applications are the optimization of companies through the maximization of their IQS, and the analysis of drug resistance against the collective intelligence of bacterial colonies.
Collective intelligence quotient
One measure sometimes applied, especially by more artificial intelligence focused theorists, is a "collective intelligence quotient" (or "cooperation quotient") – which can be normalized from the "individual" intelligence quotient (IQ) – making it possible to determine the marginal intelligence added by each new individual participating in the collective action, thus using metrics to avoid the hazards of groupthink and stupidity.
Applications
There have been many recent applications of collective intelligence, including in fields such as crowd-sourcing, citizen science and prediction markets. The Nesta Centre for Collective Intelligence Design was launched in 2018 and has produced many surveys of applications as well as funding experiments. In 2020 the UNDP Accelerator Labs began using collective intelligence methods in their work to accelerate innovation for the Sustainable Development Goals.
Elicitation of point estimates
Here, the goal is to get an estimate (in a single value) of something. For example, estimating the weight of an object, or the release date of a product or probability of success of a project etc. as seen in prediction markets like Intrade, HSX or InklingMarkets and also in several implementations of crowdsourced estimation of a numeric outcome such as the Delphi method. Essentially, we try to get the average value of the estimates provided by the members in the crowd.
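A small Python sketch of this aggregation step is shown below; the estimates are made-up numbers, and the trimmed mean is included only to illustrate a common way of blunting the effect of outliers.

import statistics

# Made-up crowd estimates (e.g. guesses of an object's weight in grams)
estimates = [420, 450, 430, 465, 900, 440, 455, 410, 435, 448]

def trimmed_mean(values, trim_fraction=0.1):
    """Mean after dropping the lowest and highest trim_fraction of values."""
    values = sorted(values)
    k = int(len(values) * trim_fraction)
    return statistics.mean(values[k:len(values) - k])

print(f"Mean:         {statistics.mean(estimates):.1f}")
print(f"Median:       {statistics.median(estimates):.1f}")
print(f"Trimmed mean: {trimmed_mean(estimates):.1f}")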
Opinion aggregation
In this situation, opinions are gathered from the crowd regarding an idea, issue or product, for example a rating (on some scale) of a product sold online, as in Amazon's star rating system. Here, the emphasis is on collecting and simply aggregating the ratings provided by customers and users.
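A minimal TypeScript sketch of such rating aggregation is given below; the counts are invented for the example.
// histogram[s] = number of users who gave s stars (invented counts).
const histogram: Record<number, number> = { 1: 4, 2: 2, 3: 10, 4: 25, 5: 59 };

let total = 0;
let count = 0;
for (const [stars, n] of Object.entries(histogram)) {
  total += Number(stars) * n;
  count += n;
}
console.log(`${(total / count).toFixed(2)} stars from ${count} ratings`);  // 4.33 stars from 100 ratings
// Real systems often also require a minimum number of ratings, or blend an
// item's average with a site-wide prior, before displaying a score.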
Idea collection
In these problems, someone solicits ideas for projects, designs or solutions from the crowd, for example ideas for solving a data science problem (as in Kaggle), a good design for a T-shirt (as in Threadless), or answers to simple tasks that only humans can do well (as in Amazon's Mechanical Turk). The objective is to gather the ideas and devise selection criteria to choose the best ones.
James Surowiecki divides the advantages of disorganized decision-making into three main categories, which are cognition, cooperation and coordination.
Cognition
Market judgment
Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable. Websites aggregate stock market information that is as current as possible so that professional or amateur stock analysts can publish their viewpoints, enabling amateur investors to submit their financial opinions and create an aggregate opinion. The opinions of all investors can be weighed equally, satisfying a pivotal premise of the effective application of collective intelligence: the masses, including a broad spectrum of stock market expertise, can be used to more accurately predict the behavior of financial markets.
Collective intelligence underpins the efficient-market hypothesis of Eugene Fama, although the term collective intelligence is not used explicitly in his paper. Fama cites research conducted by Michael Jensen in which 89 out of 115 selected funds underperformed relative to the index during the period from 1955 to 1964. After removing the loading charge (up-front fee), only 72 underperformed, while after removing brokerage costs, only 58 underperformed. On the basis of such evidence, index funds became popular investment vehicles, using the collective intelligence of the market, rather than the judgement of professional fund managers, as an investment strategy.
Predictions in politics and technology
Political parties mobilize large numbers of people to form policy, select candidates, and finance and run election campaigns. Knowledge focusing through various voting methods allows perspectives to converge through the assumption that uninformed voting is to some degree random and can be filtered from the decision process, leaving only a residue of informed consensus. Critics point out that bad ideas, misunderstandings, and misconceptions are often widely held, and that structuring the decision process must favor experts, who are presumably less prone to random or misinformed voting in a given context.
Companies such as Affinnova (acquired by Nielsen), Google, InnoCentive, Marketocracy, and Threadless have successfully employed the concept of collective intelligence in bringing about the next generation of technological changes through their research and development (R&D), customer service, and knowledge management. An example of such an application is Google's Project Aristotle in 2012, in which the effect of collective intelligence on team makeup was examined in hundreds of the company's R&D teams.
Cooperation
Networks of trust
In 2012, the Global Futures Collective Intelligence System (GFIS) was created by The Millennium Project, which epitomizes collective intelligence as the synergistic intersection among data/information/knowledge, software/hardware, and expertise/insights that has a recursive learning process for better decision-making than the individual players alone.
New media are often associated with the promotion and enhancement of collective intelligence. The ability of new media to easily store and retrieve information, predominantly through databases and the Internet, allows for it to be shared without difficulty. Thus, through interaction with new media, knowledge easily passes between sources resulting in a form of collective intelligence. The use of interactive new media, particularly the internet, promotes online interaction and this distribution of knowledge between users.
Francis Heylighen, Valentin Turchin, and Gottfried Mayer-Kress are among those who view collective intelligence through the lens of computer science and cybernetics. In their view, the Internet enables collective intelligence at the widest, planetary scale, thus facilitating the emergence of a global brain.
The developer of the World Wide Web, Tim Berners-Lee, aimed to promote the sharing and publishing of information globally; his employer later opened up the technology for free use. In the early 1990s the Internet's potential was still largely untapped, until the mid-1990s, when the 'critical mass', as termed by Dr. J.C.R. Licklider of the Advanced Research Projects Agency (ARPA), demanded more accessibility and utility. The driving force of this Internet-based collective intelligence is the digitization of information and communication. Henry Jenkins, a key theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture. He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and that teamwork through collective intelligence communities contributes to the development of such skills. Collective intelligence is not merely a quantitative contribution of information from all cultures; it is also qualitative.
Lévy and de Kerckhove consider CI from a mass communications perspective, focusing on the ability of networked information and communication technologies to enhance the community knowledge pool. They suggest that these communications tools enable humans to interact and to share and collaborate with both ease and speed (Flew 2008). With the development of the Internet and its widespread use, the opportunity to contribute to knowledge-building communities, such as Wikipedia, is greater than ever before. These computer networks give participating users the opportunity to store and to retrieve knowledge through the collective access to these databases and allow them to "harness the hive". Researchers at the MIT Center for Collective Intelligence research and explore the collective intelligence of groups of people and computers.
In this context, collective intelligence is often confused with shared knowledge. The former is the sum total of information held individually by members of a community, while the latter is information that is believed to be true and known by all members of the community. Collective intelligence as represented by Web 2.0 has less user engagement than collaborative intelligence. An art project using Web 2.0 platforms is "Shared Galaxy", an experiment developed by an anonymous artist to create a collective identity that shows up as one person on several platforms like MySpace, Facebook, YouTube and Second Life. The password is written in the profiles and the accounts named "Shared Galaxy" are open to be used by anyone. In this way many take part in being one. Another art project using collective intelligence to produce artistic work is Curatron, where a large group of artists together decides on a smaller group that they think would make a good collaborative group. The selection process is based on an algorithm computing the collective preferences. In creating what he calls 'CI-Art', Nova Scotia-based artist Mathew Aldred follows Pierre Lévy's definition of collective intelligence. Aldred's CI-Art event in March 2016 involved over four hundred people from the community of Oxford, Nova Scotia, and internationally. Later work developed by Aldred used the UNU swarm intelligence system to create digital drawings and paintings. The Oxford Riverside Gallery (Nova Scotia) held a public CI-Art event in May 2016, which connected with online participants internationally.
In social bookmarking (also called collaborative tagging), users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. The resulting information structure can be seen as reflecting the collective knowledge (or collective intelligence) of a community of users and is commonly called a "Folksonomy", and the process can be captured by models of collaborative tagging.
Recent research using data from the social bookmarking website Delicious has shown that collaborative tagging systems exhibit a form of complex systems (or self-organizing) dynamics. Although there is no centrally controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power-law distributions. Once such stable distributions form, the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies. Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users. The Wall-it Project is also an example of social bookmarking.
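As an illustration of how such tag correlations can be extracted (a minimal TypeScript sketch; the bookmarks, tags and threshold are invented for the example), the code below counts how often pairs of tags co-occur on the same resource and keeps frequent pairs as edges of a simple folksonomy graph.
// Bookmarks as (resource, tags) records contributed by many users (invented data).
const bookmarks: { resource: string; tags: string[] }[] = [
  { resource: "r1", tags: ["javascript", "web", "tutorial"] },
  { resource: "r2", tags: ["javascript", "web"] },
  { resource: "r3", tags: ["cooking", "recipe"] },
  { resource: "r4", tags: ["web", "design"] },
];

// Count how often each unordered pair of tags co-occurs on the same resource.
const cooccur = new Map<string, number>();
for (const { tags } of bookmarks) {
  for (let i = 0; i < tags.length; i++) {
    for (let j = i + 1; j < tags.length; j++) {
      const key = [tags[i], tags[j]].sort().join("|");
      cooccur.set(key, (cooccur.get(key) ?? 0) + 1);
    }
  }
}

// Keep only pairs seen at least twice as edges of the folksonomy graph.
const edges = [...cooccur].filter(([, n]) => n >= 2);
console.log(edges);  // [["javascript|web", 2]]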
P2P business
Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:
Talent utilization
At the rate technology is changing, no firm can fully keep up with the innovations needed to compete. Instead, smart firms are drawing on the power of mass collaboration to involve the participation of people they could not otherwise employ. This also helps generate continual interest in the firm, in the form of those drawn to new idea creation as well as investment opportunities.
Demand creation
Firms can create a new market for complementary goods by engaging in an open-source community. Firms are also able to expand into new fields that they previously would not have been able to enter without the added resources and collaboration from the community. This creates, as mentioned before, a new market for complementary goods for the products in these new fields.
Costs reduction
Mass collaboration can help to reduce costs dramatically. Firms can release specific software or a product to be evaluated or debugged by online communities. The result is more personal, robust and error-free products created in less time and at lower cost. New ideas can also be generated and explored through the collaboration of online communities, creating opportunities for free R&D outside the confines of the company.
Open source software
Cultural theorist and online community developer John Banks considered the contribution of online fan communities in the creation of the Trainz product. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content - extensions and additions to the game software".
The increase in user created content and interactivity gives rise to issues of control over the game itself and ownership of the player-created content. This gives rise to fundamental legal issues, highlighted by Lessig and Bray and Konsynski, such as intellectual property and property ownership rights.
Gosney extends this issue of collective intelligence in video games one step further in his discussion of alternate reality gaming. He describes this genre as an "across-media game that deliberately blurs the line between the in-game and out-of-game experiences", as events that happen outside the game reality "reach out" into the players' lives in order to bring them together. Solving the game requires "the collective and collaborative efforts of multiple players"; thus the issue of collective and collaborative team play is essential to ARGs. Gosney argues that the alternate reality genre of gaming dictates an unprecedented level of collaboration and "collective intelligence" in order to solve the mystery of the game.
Benefits of co-operation
Co-operation helps to solve the most important and most interesting multi-science problems. In his book, James Surowiecki mentioned that most scientists think the benefits of co-operation have much more value than its potential costs. Co-operation also works because, at best, it guarantees a number of different viewpoints. Thanks to the possibilities of technology, global co-operation is nowadays much easier and more productive than before. It is clear that, when co-operation moves from the university level to the global level, it has significant benefits.
For example, why do scientists co-operate? Science has become more and more isolated, each scientific field has spread even further, and it is impossible for one person to be aware of all developments. This is true especially in experimental research, where highly advanced equipment requires special skills. With co-operation, scientists can use information from different fields effectively, instead of having to gather all of it by reading on their own.
Coordination
Ad-hoc communities
Militaries, trade unions, and corporations satisfy some definitions of CI – the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from "law" or "customers" to constrain actions. Online advertising companies are using collective intelligence to bypass traditional marketing and creative agencies.
The UNU open platform for "human swarming" (or "social swarming") establishes real-time closed-loop systems around groups of networked users, modeled after biological swarms, enabling human participants to behave as a unified collective intelligence. When connected to UNU, groups of distributed users collectively answer questions and make predictions in real time. Early testing shows that human swarms can out-predict individuals. In 2016, a UNU swarm was challenged by a reporter to predict the winners of the Kentucky Derby; it successfully picked the first four horses, in order, beating 540-to-1 odds.
Specialized information sites such as Digital Photography Review or Camera Labs are examples of collective intelligence. Anyone who has access to the internet can contribute to distributing their knowledge around the world through such specialized information sites.
In learner-generated context a group of users marshal resources to create an ecology that meets their needs often (but not only) in relation to the co-configuration, co-creation and co-design of a particular learning space that allows learners to create their own context. Learner-generated contexts represent an ad hoc community that facilitates coordination of collective action in a network of trust. An example of learner-generated context is found on the Internet when collaborative users pool knowledge in a "shared intelligence space". As the Internet has developed so has the concept of CI as a shared public forum. The global accessibility and availability of the Internet has allowed more people than ever to contribute and access ideas.
Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of the current and future generations. For them, collective intelligence has become a norm. In his discussion of 'interactivity' in the online games environment, the ongoing interactive dialogue between users and game developers, Terry Flew refers to Pierre Lévy's concept of collective intelligence and argues that it is active in video games, as clans or guilds in MMORPGs constantly work to achieve goals. Henry Jenkins proposes that the participatory cultures emerging between games producers, media companies, and the end-users mark a fundamental shift in the nature of media production and consumption. Jenkins argues that this new participatory culture arises at the intersection of three broad new media trends: first, the development of new media tools and technologies enabling the creation of content; second, the rise of subcultures promoting such creations; and lastly, the growth of value-adding media conglomerates, which foster image, idea and narrative flow.
Coordinating collective actions
Improvisational actors also experience a type of collective intelligence which they term "group mind", as theatrical improvisation relies on mutual cooperation and agreement, leading to the unity of "group mind".
Growth of the Internet and mobile telecom has also produced "swarming" or "rendezvous" events that enable meetings or even dates on demand. The full impact has yet to be felt, but the anti-globalization movement, for example, relies heavily on e-mail, cell phones, pagers, SMS and other means of organizing. The Indymedia organization does this in a more journalistic way. Such resources could combine into a form of collective intelligence accountable only to the current participants yet with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously democratic form to advance a shared goal.
A further application of collective intelligence is found in the "Community Engineering for Innovations". In such an integrated framework, proposed by Ebner et al., idea competitions and virtual communities are combined to better realize the potential of the collective intelligence of the participants, particularly in open-source R&D. In management theory, the use of collective intelligence and crowdsourcing leads to innovations and very robust answers to quantitative issues. Therefore, collective intelligence and crowdsourcing do not necessarily lead to the best solution to economic problems, but to a stable, good solution.
Coordination in different types of tasks
Collective actions or tasks require different amounts of coordination depending on the complexity of the task. Tasks vary from highly independent, simple tasks that require very little coordination to complex, interdependent tasks built by many individuals that require a lot of coordination. In an article by Kittur, Lee and Kraut, the writers introduce a problem in cooperation: "When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". If a team grows too large, its overall effectiveness may suffer even though the extra contributors increase the available resources. In the end, the overall costs of coordination might overwhelm the other costs.
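A common back-of-the-envelope illustration of this effect, not a formula from Kittur, Lee and Kraut, is that a team of n contributors has n*(n-1)/2 potential pairwise coordination links, so coordination overhead can grow roughly quadratically while added resources grow only linearly; the TypeScript sketch below makes the numbers concrete.
// Potential pairwise coordination links in a team of n contributors.
function pairwiseLinks(n: number): number {
  return (n * (n - 1)) / 2;
}

for (const n of [3, 5, 10, 20]) {
  console.log(`${n} contributors -> ${pairwiseLinks(n)} coordination links`);
}
// 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190: headcount grows linearly while the
// potential coordination overhead grows roughly quadratically.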
Group collective intelligence is a property that emerges through coordination from both bottom-up and top-down processes. In a bottom-up process, the different characteristics of each member are involved in contributing to and enhancing coordination. Top-down processes are stricter and more fixed, with norms, group structures and routines that in their own way enhance the group's collective work.
Alternative views
A tool for combating self-preservation
Tom Atlee reflects that, although humans have an innate ability to gather and analyze data, they are affected by culture, education and social institutions. A single person tends to make decisions motivated by self-preservation. Therefore, without collective intelligence, humans may drive themselves into extinction based on their selfish needs.
Separation from IQism
Phillip Brown and Hugh Lauder quote Bowles and Gintis (1976) to argue that, in order to truly define collective intelligence, it is crucial to separate 'intelligence' from IQism. They go on to argue that intelligence is an achievement and can only be developed if it is allowed to be. For example, groups from the lower levels of society were historically severely restricted from aggregating and pooling their intelligence, because the elites feared that collective intelligence would convince the people to rebel. Without such capacity and relations, there is no infrastructure on which collective intelligence can be built. This reflects how powerful collective intelligence can be if it is left to develop.
Artificial intelligence views
Skeptics, especially those critical of artificial intelligence and more inclined to believe that the risk of bodily harm and bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluid mass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells. This train of thought is most obvious in the anti-globalization movement and is characterized by the works of John Zerzan, Carol Moore, and Starhawk, who typically shun academics. These theorists are more likely to refer to ecological and collective wisdom and to the role of consensus process in making ontological distinctions than to any form of "intelligence" as such, which they often argue does not exist, or is mere "cleverness".
Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as the new tribalists and the Gaians. Whether these can be said to be collective intelligence systems is an open question. Some, e.g. Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.
In contrast to these views, companies such as Amazon Mechanical Turk and CrowdFlower are using collective intelligence and crowdsourcing or consensus-based assessment to collect enormous amounts of data for machine learning algorithms.
See also
Similar concepts and applications
Citizen science
Civic intelligence
Collaborative filtering
Collaborative innovation network
Collective decision-making
Collective effervescence
Collective memory
Collective problem solving
Crowd psychology
Global Consciousness Project
Group behaviour
Group mind (science fiction)
Knowledge ecosystem
Open source intelligence
Recommendation system
Smart mob
Social commerce
Social information processing
Stigmergy
Syntality
The Wisdom of Crowds
Think tank
Wiki
Computation and computer science
Bees algorithm
Cellular automaton
Collaborative human interpreter
Collaborative software
Connectivity (graph theory)
Enterprise bookmarking
Human-based computation
Open-source software
Organismic computing
Preference elicitation
Others
Customer engagement
Dispersed knowledge
Distributed cognition
Facilitation (business)
Facilitator
Hundredth monkey effect
Keeping up with the Joneses
Library
Library of Alexandria
Meme
Open-space meeting
References
Works cited
Further reading
External links
CIRI – the Collective Intelligence Research Institute – an R&D non-profit organization on collective intelligence
An application of Collective Intelligence for the Global Climate Change Situation Room designed and implemented by The Millennium Project in Gimcheon, South Korea in 2009.
MIT Handbook of Collective Intelligence
Cultivating Society's Civic Intelligence Doug Schuler Journal of Society, Information and Communication, vol 4 No. 2.
Jennifer H. Watkins (2007). Prediction Markets as an Aggregation Mechanism for Collective Intelligence Los Alamos National Laboratory article on Collective Intelligence
Hideyasu Sasaki (2010). International Journal of Organizational and Collective Intelligence (IJOCI), vol 1 No. 1.
The collective intelligence framework, open-source framework for leveraging collective intelligence
Raimund Minichbauer (2012). Fragmented Collectives. On the Politics of "Collective Intelligence" in Electronic Networks, transversal 01 12, 'unsettling knowledges'
Artificial intelligence
Multi-robot systems |
35135491 | https://en.wikipedia.org/wiki/Construct%20%28game%20engine%29 | Construct (game engine) | Construct is an HTML5-based 2D video game engine developed by Scirra Ltd. It is aimed primarily at non-programmers, allowing quick creation of games through visual programming. First released as a GPL-licensed DirectX 9 game engine for Microsoft Windows with Python programming on October 27, 2007, it later became proprietary software with Construct 2, switching its API technology from DirectX to NW.js and HTML5, replacing Python with JavaScript support and a plugin SDK in 2012, and eventually moving to a subscription-based model as a web app.
Features
Event system and behaviors
The primary method of programming games and applications in Construct is through 'event sheets', which are similar to source files used in programming languages. Each event sheet has a list of events, which contain conditional statements or triggers. Once these are met, actions or functions can be carried out. Event logic such as OR and AND, as well as sub-events (representing scope), allows sophisticated systems to be programmed without learning a comparatively more difficult programming language. Groups can be used to enable and disable multiple events at once, and to organize events.
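As a rough illustration of the shape of an event sheet when written out as code (a hedged TypeScript sketch; the Sprite type, names and values are invented, and this is not Construct's internal representation), the example below pairs conditions with actions, filters the instances the actions apply to, and runs sub-events only within the scope of their parent.
// Illustrative sketch of the event-sheet idea (not Construct's actual runtime).
interface Sprite { name: string; health: number; }

interface GameEvent {
  conditions: ((s: Sprite) => boolean)[];  // all must hold for an instance to be picked
  actions: ((s: Sprite) => void)[];        // run once per picked instance
  subEvents?: GameEvent[];                 // run only against the parent's picked instances
}

function runEvent(ev: GameEvent, instances: Sprite[]): void {
  const picked = instances.filter(s => ev.conditions.every(c => c(s)));
  if (picked.length === 0) return;
  for (const s of picked) ev.actions.forEach(action => action(s));
  for (const sub of ev.subEvents ?? []) runEvent(sub, picked);
}

const enemies: Sprite[] = [
  { name: "enemy1", health: 0 },
  { name: "enemy2", health: 3 },
];

const onEnemyDead: GameEvent = {
  conditions: [s => s.health <= 0],
  actions: [s => console.log(`${s.name} destroyed`)],
};

runEvent(onEnemyDead, enemies);  // only enemy1 is picked and "destroyed"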
Object instance selection
Unlike many traditional development environments, Construct eschews selecting specific instances of objects when adding events, in favor of filtering through all instances of an object type on screen. When adding events, the editor allows the user to specify conditions or checks that must be fulfilled by each object instance on the screen before the event will be added or run by it. Events can be chained together using sub-events, allowing for more complicated behaviors to be created.
JavaScript
Construct 3 supports JavaScript as an optional scripting language. The feature was announced in May 2019, citing the need to satisfy advanced users and the popularity of existing workarounds.
Supported platforms
The latest version of Construct can export to many platforms, ranging from web applications and playable advertisements to dedicated desktop programs and mobile apps. Previous versions of Construct also supported other online platforms and storefronts, but these have since been removed due to low use or service changes on those platforms.
Construct Classic
Construct Classic can only export to .exe files, due to its reliance on DirectX.
Construct 2
HTML5 and storefronts
Construct 2's primary export platforms are HTML5 based. It claims support across Google Chrome, Firefox, Internet Explorer 9+, Safari 6+ and Opera 15+ on desktop browsers, and support for Safari in iOS 6+, Chrome and Firefox for Android, Windows Phone 8+, BlackBerry 10+ and Tizen.
Additionally, Construct 2 can export to several online marketplaces and platforms, including Facebook, the Chrome Web Store, the Firefox Marketplace, the Amazon Appstore, Construct Arcade (their own platform to host games made in Construct) and Kongregate.
Native platforms
Construct has the ability to export to several platforms that provide offline and native application behavior: Windows, MacOS and both 32-bit and 64-bit Linux are supported by exporting to NW.js. Doing this will allow the user to incorporate several features that HTML5 applications do not normally support, such as file I/O. On October 23, 2012, Scirra announced full support for exporting to Windows 8 Metro applications, including the incorporation of in-app purchases, 'snap' view states, roaming data, sharing, live tiles, touch input and accelerometer and inclinometer input. Support for exporting to Windows 10 Universal apps was added on August 26, 2015.
Construct handles native mobile support for iOS and Android by using Cordova.
Consoles
On January 20, 2014, Scirra announced that Construct 2 would be receiving support for Nintendo's Wii U system. Later that year, a plug-in was released to make Construct-based games compatible with the Nintendo Web Framework.
On April 13, 2016, Scirra announced that Construct 2's UWP support will allow publishing games to the Xbox One.
Construct 3
HTML5
Construct 3 currently supports web embeds through HTML5, uploading to Facebook Instant Games and Construct Arcade, as well as formatting projects as interactive advertisements. It also originally supported uploading to Kongregate, but this was removed on July 14, 2020, after Kongregate stopped accepting new game submissions.
Native platforms & consoles
Construct 3 also supports exporting to Windows, MacOS and Linux through NW.js, Android and iOS through Cordova, and Windows Store through UWP. Construct 3's UWP support also allows exporting to Xbox One, and Xbox Series X and S through backwards compatibility.
Release history
Construct Classic
Construct Classic is the first major version of the Construct engine. Unlike its successors, it is a free and open source game engine using DirectX. Originally developed by a group of students, it was first released on October 27, 2007, as version 0.8. The most recent release is r2, released on February 5, 2012.
This version largely defined the software's visual programming language and separately supported Python scripting.
Construct Classic was discontinued on April 20, 2013, to allow the development team to focus more on Construct 2.
Construct 2
Construct 2 is the second major release of the Construct engine. Major changes include DirectX being replaced with NW.js, allowing projects to be exported to platforms other than Microsoft Windows, including HTML5, Mac OS and Linux. The licensing system also moved from GPLv2 to a proprietary license with a free version available for download.
Construct 2 entered public beta on February 4, 2011, and was launched on August 22, 2011.
During 2012, Python scripting was retired, citing complications with running Python in browsers and general complexity of maintaining a compatible scripting system. A JavaScript SDK for plugins was introduced as a replacement.
As of May 2019, Construct 2 continues to be maintained and improved alongside Construct 3 development.
Sales of new licenses were retired on July 1, 2020.
Steam version
On October 18, 2012, Construct 2 was submitted to Steam Greenlight. Construct 2 was in the first batch of software titles to be greenlit on November 30, 2012. On January 26, 2013, Construct 2 was the second software title from Steam Greenlight to be launched on Steam.
On January 17, 2019, it was announced that the Steam version will be delisted from the store on January 31, 2019, due to the phasing out of Construct 2. However, the free version can still be downloaded via unofficial websites or a Steam install link.
Discontinuation
On February 20, 2020, Scirra announced plans to discontinue Construct 2, with sales of new licenses retired on July 1, 2020. The software was fully discontinued on July 1, 2021.
Construct 3
Construct 3 is the most recent major version of the Construct engine. Announced on January 27, 2015, new features include Mac and Linux support, multi-language support and third-party expansion of the editor with an official plugin SDK. More details were revealed on February 1, 2017, with a public beta starting on March 28 of the same year. The beta concluded on December 4, 2017, with the release of the engine. Improvements include an overhauled manual, official tutorials and translations of the IDE.
This version also changed from a pay-once model to a yearly subscription-based model.
On May 23, 2019, JavaScript coding was announced as a separate add-on, but was free for all users who had a paying license before September 2, 2019. The feature was added with r157 on July 5, 2019.
Construct Arcade
Construct Arcade (formerly known as Scirra Arcade) is a game portal for projects created in Construct 2 or 3. It was launched on November 23, 2011, along with update r69 of Construct 2. It was later added to Construct 3 on r24.
On August 14, 2019, a new version of the arcade was released, and it was renamed the Construct Arcade. Changes to the platform include a new layout, stability improvements, publisher profiles, a way to view analytics for published games on the website, and links to other storefronts.
See also
Verge3D
WebGL
Other engines that are similar to Construct:
GameMaker Studio
Clickteam Fusion
Stencyl
GDevelop
References
External links
The current website for Construct
Official Construct Classic page
Official Construct 2 page
Graphics libraries
HTML5
IPhone video game engines
Video game engines
Video game IDE
Video game development software for Linux |
1713552 | https://en.wikipedia.org/wiki/Adobe%20Flash%20Player | Adobe Flash Player | Adobe Flash Player (formerly Macromedia Flash Player and FutureSplash Player, and known in Internet Explorer, Firefox, and Google Chrome as Shockwave Flash) is computer software for content created on the Adobe Flash platform. Flash Player is capable of viewing multimedia contents, executing rich Internet applications, and streaming audio and video. In addition, Flash Player can run from a web browser as a browser plug-in or on supported mobile devices. Originally created by FutureWave under the name FutureSplash Player, it was renamed to Flash Player after Macromedia acquired FutureWave in 1996. It was then developed and distributed by Adobe Systems after Adobe acquired Macromedia in 2005. Currently, it's developed and distributed by Zhongcheng for users in China, and by Harman International for enterprise users outside of China, in collaboration with Adobe. Flash Player is distributed as freeware. With the exception of the China-specific and enterprise supported variants, Flash Player was discontinued on 31 December 2020, and its download page disappeared two days later. Since 12 January 2021, Flash Player (original global variants) versions newer than 32.0.0.371, released in May 2020, refuse to play Flash content and instead display a static warning message.
Flash Player runs SWF files that can be created by Adobe Flash Professional, Adobe Flash Builder or by third-party tools such as FlashDevelop. Flash Player supports vector graphics, 3D graphics, embedded audio, video and raster graphics, and a scripting language called ActionScript. ActionScript is based on ECMAScript (similar to JavaScript) and supports object-oriented code. Internet Explorer 11 and Microsoft Edge Legacy, in Windows 8 and later, along with Google Chrome on all versions of Windows, came bundled with a sandboxed Adobe Flash plug-in.
Flash Player once had a large user base, and was a common format for web games, animations, and graphical user interface (GUI) elements embedded in web pages. However, the most popular use of Flash among the 10-20 age group was for Flash games. Adobe stated in 2013 that more than 400 million out of over 1 billion connected desktops update to the new version of Flash Player within six weeks of release. However, Flash Player has become increasingly criticized for its performance, consumption of battery on mobile devices, the number of security vulnerabilities that had been discovered in the software, and its closed platform nature. Apple co-founder Steve Jobs was highly critical of Flash Player, having published an open letter detailing Apple's reasoning for not supporting Flash on its iOS device family. Its usage has also waned because of modern web standards that allow some of Flash's use cases to be fulfilled without third-party plugins.
Features
Adobe Flash Player is a runtime that executes and displays content from a provided SWF file, although it has no built-in features to modify the SWF file at runtime. It can execute software written in the ActionScript programming language, which enables the runtime manipulation of text, data, vector graphics, raster graphics, sound, and video. The player can also access certain connected hardware devices, including web cameras and microphones, once the user has granted permission.
Flash Player was used internally by the Adobe Integrated Runtime (AIR), to provide a cross-platform runtime environment for desktop applications and mobile applications. AIR supports installable applications on Windows, Linux, macOS, and some mobile operating systems such as iOS and Android. Flash applications must specifically be built for the AIR runtime to use additional features provided, such as file system integration, native client extensions, native window/screen integration, taskbar/dock integration, and hardware integration with connected Accelerometer and GPS devices.
Data formats
Flash Player included native support for many data formats, some of which can only be accessed through the ActionScript scripting interface.
XML: Flash Player has included native support for XML parsing and generation since version 8. XML data is held in memory as an XML Document Object Model, and can be manipulated using ActionScript. ActionScript 3 also supports ECMAScript for XML (E4X), which allows XML data to be manipulated more easily.
JSON: Flash Player 11 includes native support for importing and exporting data in the JavaScript Object Notation (JSON) format, which allows interoperability with web services and JavaScript programs.
AMF: Flash Player allows application data to be stored on users' computers, in the form of Local Shared Objects, the Flash equivalent to browser cookies. Flash Player can also natively read and write files in the Action Message Format, the default data format for Local Shared Objects. Since the AMF format specification is published, data can be transferred to and from Flash applications using AMF datasets instead of JSON or XML, reducing the need for parsing and validating such data.
SWF: The specification for the SWF file format was published by Adobe, enabling the development of the SWX Format project, which used the SWF file format and AMF as a means for Flash applications to exchange data with server side applications. The SWX system stores data as standard SWF bytecode which is automatically interpreted by Flash Player. Another open-source project, SWXml allows Flash applications to load XML files as native ActionScript objects without any client-side XML parsing, by converting XML files to SWF/AMF on the server.
Multimedia formats
Flash Player is primarily a graphics and multimedia platform, and has supported raster graphics and vector graphics since its earliest version. It supports the following different multimedia formats which it can natively decode and play back.
MP3: Support for decoding and playback of streaming MPEG-2 Audio Layer III (MP3) audio was introduced in Flash Player 4. MP3 files can be accessed and played back from a server via HTTP, or embedded inside an SWF file, which is also a streaming format.
FLV: Support for decoding and playing back video and audio inside Flash Video (FLV and F4V) files, a format developed by Adobe Systems and Macromedia. Flash Video is only a container format and supports multiple different video codecs, such as Sorenson Spark, VP6, and more recently H.264. Flash Player uses hardware acceleration to display video where present, using technologies such as DirectX Video Acceleration and OpenGL to do so. Flash Video is used by YouTube, Hulu, Yahoo! Video, BBC Online, and other news providers. FLV files can be played back from a server using HTTP progressive download, and can also be embedded inside an SWF file. Flash Video can also be streamed via RTMP using the Adobe Flash Media Server or other such server-side software.
PNG: Support for decoding and rendering Portable Network Graphics (PNG) images, in both its 24-bit (opaque) and 32-bit (semi-transparent) variants. Flash Player 11 can also encode a PNG bitmap via ActionScript.
JPEG: Support for decoding and rendering compressed JPEG images. Flash Player 10 added support for the JPEG-XR advanced image compression standard developed by Microsoft Corporation, which results in better compression and quality than JPEG. JPEG-XR enables lossy and lossless compression with or without alpha channel transparency. Flash Player 11 can also encode a JPEG or JPEG-XR bitmap via ActionScript.
GIF: Support for decoding and rendering compressed Graphics Interchange Format (GIF) images, in its single-frame variants only. Loading a multi-frame GIF will display only the first image frame.
Streaming protocols
HTTP: Support for communicating with web servers using HTTP requests and POST data. However, only websites that explicitly allow Flash to connect to them can be accessed via HTTP or sockets, to prevent Flash being used as a tool for cross-site request forgery, cross-site scripting, DNS rebinding, and denial-of-service attacks. Websites must host a certain XML file termed a cross domain policy, allowing or denying Flash content from specific websites to connect to them. Certain websites, such as Digg, Flickr, and Photobucket, already host a cross domain policy that permits Flash content to access their website via HTTP.
RTMP: Support for live audio and video streaming using the Real Time Messaging Protocol (RTMP) developed by Macromedia. RTMP supports a non-encrypted version over the Transmission Control Protocol (TCP) and an encrypted version over a Transport Layer Security (TLS/SSL) connection. RTMP can also be encapsulated within HTTP requests (RTMPT) to traverse firewalls that only allow HTTP traffic.
TCP: Support for Transmission Control Protocol (TCP) Internet socket communication to communicate with any type of server, using stream sockets. Sockets can be used only via ActionScript, and can transfer plain text, XML, or binary data (ActionScript 3.0 and later). To prevent security issues, web servers that permit Flash content to communicate with them using sockets must host an XML-based cross domain policy file, served on Port 843. Sockets enable AS3 programs to interface with any kind of server software, such as MySQL.
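As an illustration of the policy-file mechanism described above (a minimal Node.js/TypeScript sketch; the allowed domain and port are placeholders, binding port 843 normally requires elevated privileges, and a production server would need error handling), the code below answers Flash Player's "<policy-file-request/>" with a null-terminated socket policy file.
import * as net from "node:net";

// Socket policy file: which domains may connect to which ports (placeholder values).
const policy =
  '<?xml version="1.0"?>' +
  '<cross-domain-policy>' +
  '<allow-access-from domain="example.com" to-ports="5000" />' +
  '</cross-domain-policy>';

const server = net.createServer(socket => {
  socket.on("data", data => {
    if (data.toString().startsWith("<policy-file-request/>")) {
      socket.end(policy + "\0");  // the policy must be terminated with a null byte
    } else {
      socket.destroy();
    }
  });
});

server.listen(843, () => console.log("socket policy server listening on port 843"));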
Performance
Hardware acceleration
Until version 10 of the Flash player, there was no support for GPU acceleration. Version 10 added a limited form of support for shaders on materials in the form of the Pixel Bender API, but still did not have GPU-accelerated 3D vertex processing. A significant change came in version 11, which added a new low-level API called Stage3D (initially codenamed Molehill), which provides full GPU acceleration, similar to WebGL. (The partial support for GPU acceleration in Pixel Bender was completely removed in Flash 11.8, resulting in the disruption of some projects like MIT's Scratch, which lacked the manpower to recode their applications quickly enough.)
Current versions of Flash Player are optimized to use hardware acceleration for video playback and 3D graphics rendering on many devices, including desktop computers. Performance is similar to HTML5 video playback. Also, Flash Player has been used on multiple mobile devices as a primary user interface renderer.
Compilation
Although code written in ActionScript 3 executes up to 10 times faster than the prior ActionScript 2, the Adobe ActionScript 3 compiler is a non-optimizing compiler, and produces inefficient bytecode in the resulting SWF, when compared to toolkits such as CrossBridge.
CrossBridge, a toolkit that targets C++ code to run within the Flash Player, uses the LLVM compiler to produce bytecode that runs up to 10 times faster than code produced by the ActionScript 3 compiler, largely because the LLVM compiler uses more aggressive optimization.
Adobe released ActionScript Compiler 2 (ASC2) in Flex 4.7 and onwards, which improves compilation times, optimizes the generated bytecode, and supports method inlining, improving performance at runtime.
As of 2012, the Haxe multiplatform language can build programs for Flash Player that perform faster than the same application built with the Adobe Flex SDK compiler.
Development methods
Flash Player applications and games can be built in two significantly different methods:
"Flex" applications: The Adobe Flex Framework is an integrated collection of stylable Graphical User Interface, data manipulation and networking components, and applications built upon it are termed "Flex" applications. Startup time is reduced since the Flex framework must be downloaded before the application begins, and weighs in at approximately 500KB. Editors include Adobe Flash Builder and FlashDevelop.
"Pure ActionScript" applications: Applications built without the Flex framework allow greater flexibility and performance. Video games built for Flash Player are typically pure-Actionscript projects. Various open-source component frameworks are available for pure ActionScript projects, such as MadComponents, that provide UI Components at significantly smaller SWF file sizes.
In both methods, developers can access the full Flash Player set of functions, including text, vector graphics, bitmap graphics, video, audio, camera, microphone, and others. AIR also includes added features such as file system integration, native extensions, native desktop integration, and hardware integration with connected devices.
Development tools
Adobe provides five ways of developing applications for Flash Player:
Adobe Animate: graphic design, animation and scripting toolset
Adobe Flash Builder: enterprise application development and debugging
Adobe Scout: visual profiler for performance optimization
Apache Flex: a free SDK to compile Flash and Adobe AIR applications from source code; developed by Adobe and donated to the Apache Foundation
CrossBridge: a free SDK to cross-compile C++ code to run in Flash Player
Third-party development environments are also available:
FlashDevelop: an open-source Flash ActionScript IDE, which includes a debugger for AIR applications
Powerflasher FDT: a commercial ActionScript IDE
CodeDrive: an extension to Microsoft Visual Studio 2010 for ActionScript 3 development and debugging
MTASC: a compiler
Haxe: a multi-platform language
Game development
Adobe offers the free Adobe Gaming SDK, consisting of several open-source AS3 libraries built on the Flash Player Stage3D APIs for GPU-accelerated graphics:
Away3D: GPU-accelerated 3D graphics and animation engine
Starling: GPU-accelerated 2D graphics that mimics the Flash display list API
Feathers: GPU-accelerated skinnable GUI library built on top of Starling
Dragon Bones: GPU-accelerated 2D skeletal animation library
A few commercial game engines target Flash Player (Stage3D) as run-time environment, such as Unity 3D and Unreal Engine 3. Before the introduction of Stage3D, a number of older 2D engines or isometric engines like Flixel saw their heyday.
Adobe also developed the CrossBridge toolkit which cross-compiles C/C++ code to run within the Flash Player, using LLVM and GCC as compiler backends, and high-performance memory-access opcodes in the Flash Player (termed "Domain Memory") to work with in-memory data quickly. CrossBridge is targeted toward the game development industry, and includes tools for building, testing, and debugging C/C++ projects in Flash Player.
Notable online video games developed in Flash include Angry Birds, FarmVille, AdventureQuest (started in 2002 and still active as of 2020), and the Papa Louie franchise.
Availability
Desktop platforms
Adobe Flash Player is available in two major flavors:
The plugin version, for use in various web browsers
The "projector" version, a standalone player that can open SWF files directly
On February 22, 2012, Adobe announced that it would no longer release new versions of NPAPI Flash plugins for Linux, although Flash Player 11.2 would continue to receive security updates. In August 2016 Adobe announced that, beginning with version 24, it will resume offering of Flash Player for Linux for other browsers.
The Extended Support Release (ESR) of Flash Player on macOS and Windows was a version of Flash Player kept up to date with security updates, but none of the new features or bug fixes available in later versions. In August 2016, Adobe discontinued the ESR branch and instead focused solely on the standard release.
Version 10 can be run under Windows 98/Me using KernelEx. HP offered Version 6 of the player for HP-UX, while Innotek GmbH offered versions 4 and 5 for OS/2. Other versions of the player have been available at some point for BeOS.
Mobile platforms
In 2011, Flash Player had emerged as the de facto standard for online video publishing on the desktop, with adaptive bitrate video streaming, DRM, and fullscreen support. On mobile devices, however, after Apple refused to allow the Flash Player within the inbuilt iOS web browser, Adobe changed strategy, enabling Flash content to be delivered as native mobile applications using the Adobe Integrated Runtime.
Up until 2012, Flash Player 11 was available for Android (ARM Cortex-A8 and above), although in June 2012, Google announced that Android 4.1 (codenamed Jelly Bean) would not support Flash by default. In August 2012, Adobe stopped updating Flash for Android.
Flash Player was supported on a select range of mobile and tablet devices, from Acer, BlackBerry 10, Dell, HTC, Lenovo, Logitech, LG, Motorola, Samsung, Sharp, SoftBank, Sony (and Sony Ericsson), and Toshiba. As of 2012, Adobe has stopped browser-based Flash Player development for mobile browsers in favor of HTML5; however, Adobe continues to support Flash content on mobile devices with the Adobe Integrated Runtime, which allows developers to publish content that runs as native applications on certain supported mobile phone platforms.
Adobe said it would optimize Flash for use on the ARM architecture (the ARMv7 and ARMv6 architectures used in the Cortex-A series of processors and in the ARM11 family) and release it in the second half of 2009. The company also stated that it wanted to enable Flash on NVIDIA Tegra, Texas Instruments OMAP 3, and Samsung ARM chips. At the beginning of 2009, it was announced that Adobe would be bringing Flash to TV sets via the Intel Media Processor CE 3100 before mid-2009. ARM Holdings later said it welcomed the move, because "it will transform mobile applications and it removes the claim that the desktop controls the Internet." However, as of May 2009, the expected ARM/Linux netbook devices had poor support for Web video and a fragmented software base.
Among other devices, LeapFrog Enterprises provides Flash Player with their Leapster Multimedia Learning System and extended the Flash Player with touch-screen support. Version 9 was the most recent version available for the Linux/ARM-based Nokia 770/N800/N810 Internet tablets running Maemo OS2008. Other versions of the player have been available at some point for Symbian OS and Palm OS. The Kodak Easyshare One includes Flash Player.
The following table documents historical support for Flash Player on mobile operating systems:
Other hardware
Some CPU emulators have been created for Flash Player, including Chip8, Commodore 64, ZX Spectrum, and the Nintendo Entertainment System. They enable video games created for such platforms to run within Flash Player.
End of life
Adobe announced on July 25, 2017, that it would end support for the normal/global variant of Flash Player on January 1, 2021, and encouraged developers to use HTML5 standards in place of Flash. The announcement was coordinated with Apple, Facebook, Google, Microsoft, and Mozilla. Adobe announced that all major web browsers planned to officially remove the Adobe Flash Player component on December 31, 2020, and Microsoft removed it from the Windows OS in January 2021 via Windows Update. In a move to further reduce the number of Flash Player installations, Adobe added a "time bomb" to Flash to disable existing installations after January 12, 2021. In mid-2020, Flash Player started prompting users to uninstall itself. Adobe removed all existing download links for Flash installers. After January 26, 2021, all major web browsers including Apple Safari, Google Chrome, Microsoft Edge, and Mozilla Firefox have already permanently removed Flash support. However, Flash content continues to be accessible on the web through emulators such as Ruffle, with varying degrees of compatibility and performance, although this is not endorsed by Adobe.
Web browsers
Google Chrome
Starting from Chrome 76, Flash was disabled by default, without any prompts to activate Flash content. Users who wanted to play Flash content had to manually set the browser to prompt for Flash content, and then enable the Flash plugin for every site individually during each browser session. Microsoft Edge, which is based on Chromium, followed the same plan as Google Chrome.
Google Chrome blocked the Flash plugin as "out of date" in January 2021, and fully removed it from the browser with Chrome version 88, released on January 20, 2021.
Mozilla Firefox
In the versions preceding removal, Flash was disabled by default, without any prompts to activate Flash content. To play Flash content, users had to manually set the browser to prompt for Flash content, and then enable the Flash plugin for every site individually during each browser session. Firefox 85, released on January 26, 2021, completely removed support for the Flash plugin. Firefox ESR dropped support on November 2, 2021 (Firefox 78 ESR was the last version with support).
Microsoft Windows
On October 27, 2020, Microsoft released an update (named KB4577586) for Windows 10 and 8.1 which removes the embedded Adobe Flash Player component from IE11 and Edge Legacy. In July 2021, this update was automatically installed as a security patch. However, an ActiveX Flash Player plugin may still be used with IE after this update is applied.
Apple Safari
Apple dropped Flash Player support from Safari 14 alongside the release of macOS Big Sur.
Fallout
Despite the years of notice, several websites were still using Flash following December 31, 2020, including the U.S. Securities and Exchange Commission. Many of these issues were resolved in the weeks after the deadline. However, many educational institutions still relied on Flash for educational material and did not have a path forward for replacement.
Post-EOL support
Adobe has partnered with HARMAN to support enterprise Flash Player users until at least 2023. The HARMAN Flash player variant is labeled as version 50.x, to avoid confusion with other variants.
The China-specific variant of Flash will be supported beyond 2020 by a company known as Zhongcheng. The Projector (standalone) versions of this variant also work outside of China and do not include the "Flash Helper Service"; however, some tracking code still seems to be present. They are available on a somewhat hidden "Debug" page. In addition, as the global variant of the plugin was discontinued, some users have figured out how to modify and repack the China-specific variant to bring it more in line with the global variant. This includes removing the "Flash Helper Service" and removing the China-only installation restriction, along with all other geo-restrictions and tracking code. A "time bomb", similar to the one found in later versions of the global variant, is also present in the unmodified China variant; this is also removed in most repacks. In theory, these repacks should provide users outside of China with the latest security updates to Flash Player, without having to deal with invasive advertisements or worry about privacy risks. One such project, "Clean Flash Installer", was served a DMCA takedown from Adobe in October 2021.
Shortly after Flash EOL, South African Revenue Service (SARS) released a custom version of Chromium browser with Adobe Flash "time bomb" removed. This browser can access only a small set of SARS online pages containing Flash-based forms required for filing financial reports.
Internet Explorer 11, along with IE mode in Edge, will continue with ActiveX support, and by extension Flash Player support. Firefox forks that plan to continue NPAPI support, and by extension Flash Player support, include Waterfox, Basilisk, Pale Moon, and K-Meleon. Various Chromium-based Chinese browsers will also continue to support Flash Player in PPAPI and/or NPAPI form, including, but not limited to, 360 Secure Browser.
Adobe Flash Player Projector
Despite the end of general support for the global variant of Flash, Adobe Flash Player Projector (also known as Adobe Flash Player Standalone) is still available for download from Adobe. It continues to be able to play all supported Flash file formats, including SWF files.
Content preservation projects
The Internet Archive hosts some Flash content and makes it playable in modern browsers via emulators, Ruffle and Emularity. Other emulators, such as CheerpX, also exist as options for Flash Player emulation on other websites. BlueMaxima's Flashpoint project claims to have collected more than 38,000 Adobe Flash Player games and animations and made them available for download.
Open source
Adobe has released some components of Adobe Flash products as open-source software via the Open Screen Project or donated them to open-source organizations. As of 2021, most of these technologies are considered obsolete. They include:
ActionScript Virtual Machine 2 (AVM2), which implements ActionScript 3 (donated as open source to the Mozilla Foundation)
Adobe Flex framework (donated as open source to the Apache Software Foundation and rebranded as Apache Flex, later superseded by Apache Royale)
CrossBridge, a C++ cross-compilation toolset (released on GitHub)
Criticism
Accessibility and usability
In some browsers, prior Flash versions had to be uninstalled before an updated version could be installed. As of version 11.2 for Windows, however, automatic updater options are available. Linux was only partially supported, with Adobe cooperating with Google to deliver the player through the Chrome web browser on all Linux platforms.
Mixing Flash applications with HTML led to inconsistent input handling and a poor user experience, with the keyboard and mouse not working as they would in an HTML-only document.
Privacy
Flash Player supports persistent local storage of data (also referred to as Local Shared Objects), which can be used similarly to HTTP cookies or Web Storage in web applications. Local storage in Flash Player allows websites to store non-executable data on a user's computer, such as authentication information, high scores for browser games, server-based session identifiers, site preferences, saved work, or temporary files. Flash Player only allows content originating from exactly the same website domain to access data saved in local storage.
Because local storage can be used to save information on a computer that is later retrieved by the same site, a site can use it to gather user statistics, similar to how HTTP cookies and Web Storage can be used. With such technologies, the possibility of building a profile based on user statistics is considered by some a potential privacy concern. Users can disable or restrict use of local storage in Flash Player through a "Settings Manager" page. These settings can be accessed from the Adobe website or by right-clicking on Flash-based content and selecting "Global Settings".
Local storage can be disabled entirely or on a site-by-site basis. Disabling local storage will block any content from saving local user information using Flash Player, but this may disable or reduce the functionality of some websites, such as saved preferences or high scores and saved progress in games.
Flash Player 10.1 and upward honor the privacy mode settings in the latest versions of the Chrome, Firefox, Internet Explorer, and Safari web browsers, such that no local storage data is saved when the browser's privacy mode is in use.
Security
Adobe security bulletins and advisories announce security updates, but Adobe Flash Player release notes do not disclose the security issues addressed when a release closes security holes, making it difficult to evaluate the urgency of a particular update. A version test page allows the user to check if the latest version is installed, and uninstallers may be used to ensure that old-version plugins have been uninstalled from all installed browsers.
In February 2010, Adobe officially apologized for not fixing a known vulnerability for over a year. In June 2010, Adobe announced a "critical vulnerability" in recent versions, saying there were reports of it being actively exploited in the wild against Adobe Flash Player as well as Adobe Reader and Acrobat. Later, in October 2010, Adobe announced another critical vulnerability, this time also affecting Android-based mobile devices. Android users were advised to disable Flash or set it to run only on demand. Subsequent security vulnerabilities also exposed Android users, such as the two critical vulnerabilities published in February 2013 and the four critical vulnerabilities published in March 2013, all of which could lead to arbitrary code execution.
Symantec's Internet Security Threat Report states that a remote code execution vulnerability in Adobe Reader and Flash Player was the second most attacked vulnerability in 2009. The same report also recommended using browser extensions to disable Flash Player usage on untrusted websites. McAfee predicted that Adobe software, especially Reader and Flash, would be a primary target for attacks in 2010. Adobe applications had become, at least at some point, the most popular client-software targets for attackers during the last quarter of 2009. The Kaspersky Security Network published statistics for the third quarter of 2012 showing that 47.5% of its users were affected by one or more critical vulnerabilities. The report also highlighted that "Flash Player vulnerabilities enable cybercriminals to bypass security systems integrated into the application."
Steve Jobs criticized the security of Flash Player, noting that "Symantec recently highlighted Flash for having one of the worst security records in 2009". Adobe responded by pointing out that "the Symantec Global Internet Threat Report for 2009, found that Flash Player had the second lowest number of vulnerabilities of all Internet technologies listed (which included both web plug-ins and browsers)."
On April 7, 2016, Adobe released a Flash Player patch for a zero-day memory corruption vulnerability that could be used to deliver malware via the Magnitude exploit kit. The vulnerability could be exploited for remote code execution.
Vendor lock-in
Flash Player 11.2 does not play certain kinds of content unless it has been digitally signed by Adobe, under a license obtained by the publisher directly from Adobe.
This move by Adobe, together with the handover of Flex to Apache, was criticized as a way to lock out independent tool developers in favor of Adobe's commercial tools.
This was resolved in January 2013, when Adobe stopped requiring a license or royalty from developers. All premium features are now classified as general availability and can be freely used by Flash applications.
Apple controversy
In April 2010, Steve Jobs, at the time CEO of Apple Inc., published an open letter explaining why Apple would not support Flash on the iPhone, iPod touch, and iPad. In the letter he blamed problems with the "openness", stability, security, performance, and touchscreen integration of the Flash Player as reasons for refusing to support it. He also claimed that when one of Apple's Macintosh computers crashes, "more often than not" the cause can be attributed to Flash, and described Flash as "buggy". Adobe's CEO Shantanu Narayen responded by saying, "If Flash [is] the number one reason that Macs crash, which I'm not aware of, it has as much to do with the Apple operating system."
Steve Jobs also claimed that a large percentage of the video on the Internet is supported on iOS, since many popular video-sharing websites such as YouTube have published video content in an HTML5-compatible format, enabling videos to play back in mobile web browsers even without Flash Player.
Mainland China-specific variant
Starting with version 30, Adobe stopped distributing Flash Player directly to users in mainland China. Instead, it selected 2144.cn as a partner and released a special variant of Flash Player on a dedicated website, which contains a non-closable process known as the "Flash Helper Service" that collects private information and displays pop-up advertisements by receiving and running encrypted programs from a remote server. The partnership started around 2017, but with version 30 Adobe disabled the use of the vanilla (global) variant of Flash Player in mainland China, forcing users to use this specific variant, which may pose a risk to its users due to China's Internet censorship. This only affected users of Chinese Chromium-based browsers, Firefox users, and Internet Explorer users on Windows 7 and below, as Microsoft at the time still distributed Flash Player for Internet Explorer and Microsoft Edge through Windows Update on Windows 8 and later. Since 2021, however, this variant has been the only publicly supported version of Flash Player.
Release history
FutureSplash Player 1.1
New scripting features
Option to disable the menu and memory management optimizations
Macromedia Flash Player 2 (June 17, 1997)
Mostly vectors and motion, some bitmaps, limited audio
Support of stereo sound, enhanced bitmap integration, buttons, the Library, and the ability to tween color changes
Macromedia Flash Player 3 (May 31, 1998)
Added alpha transparency, licensed MP3 compression
Brought improvements to animation, playback, digital art, and publishing, as well as the introduction of simple script commands for interactivity
Macromedia Flash Player 4 (June 15, 1999)
Saw the introduction of streaming MP3s and the Motion Tween. Initially, the Flash Player plug-in was not bundled with popular web browsers, and users had to visit the Macromedia website to download it. By 2000, however, the Flash Player was already being distributed with all AOL, Netscape and Internet Explorer browsers. Two years later it shipped with all releases of Windows XP. The install base of the Flash Player reached 92 percent of all Internet users.
Macromedia Flash Player 5 (August 24, 2000)
A major advance in capability, with the evolution of Flash's scripting abilities, released as ActionScript
Saw the ability to customize the authoring environment's interface
Macromedia Generator was the first initiative from Macromedia to separate design from content in Flash files. Generator 2.0 was released in April 2001, and featured real-time server-side generation of Flash content in its Enterprise Edition. Generator was discontinued in 2002, in favor of new technologies such as Flash Remoting, which allows for seamless transmission of data between the server and the client, and ColdFusion Server.
In October 2000, usability guru Jakob Nielsen wrote a polemic article regarding usability of Flash content entitled "Flash: 99% Bad". (Macromedia later hired Nielsen to help them improve Flash usability.)
Macromedia Flash Player 6 (version 6.0.21.0, codenamed Exorcist) (March 15, 2002)
Support for consuming Flash Remoting (AMF) and Web Services (SOAP)
Support for on-demand and live audio and video streaming (RTMP)
Support for screen readers via Microsoft Active Accessibility
Added Sorenson Spark video codec for Flash Video
Support for video, application components, shared libraries, and accessibility
Macromedia Flash Communication Server MX, also released in 2002, allowed video to be streamed to Flash Player 6 (otherwise the video could be embedded into the Flash movie).
Macromedia Flash Player 7 (version 7.0.14.0, codenamed Mojo) (September 10, 2003)
Supports progressive audio and video streaming (HTTP)
Supports ActionScript 2.0, an object-oriented programming language for developers
Ability to create charts, graphs and additional text effects with the new support for extensions (sold separately), high fidelity import of PDF and Adobe Illustrator 10 files, mobile and device development and a forms-based development environment. ActionScript 2.0 was also introduced, giving developers a formal object-oriented approach to ActionScript. V2 Components replaced Flash MX's components, being rewritten from the ground up to take advantage of ActionScript 2.0 and object-oriented principles.
In 2004, the "Flash Platform" was introduced. This expanded Flash to more than the Flash authoring tool. Flex 1.0 and Breeze 1.0 were released, both of which used the Flash Player as a delivery method but relied on tools other than the Flash authoring program to create Flash applications and presentations. Flash Lite 1.1 was also released, enabling mobile phones to play Flash content.
Last version for Windows 95/NT4 and Mac Classic
Macromedia Flash Player 8 (version 8.0.22.0, codenamed Maelstrom) (September 13, 2005)
Support for runtime loading of GIF and PNG images
New video codec (On2 VP6)
Improved runtime performance and runtime bitmap caching
Live filters and blendmodes
File upload and download abilities
New text-rendering engine, the Saffron Type System
ExternalAPI subsystem introduced to replace fscommand
On December 3, 2005, Adobe Systems acquired Macromedia and its product portfolio (including Flash).
Macromedia Flash Player 8 (version 8.0.24.0) (April 23, 2006)
Adobe Flash Player 9 (version 9.0.15.0, codenamed Zaphod and formerly named Flash Player 8.5) (June 22, 2006)
Introduction of ActionScript Virtual Machine 2 (AVM2) with AVM1 retained for compatibility
ActionScript 3 (a superset of ECMAScript 3) via AVM2
E4X, which is a new approach to parsing XML
Support for binary sockets
Support for regular expressions and namespaces
AVM2 donated to Mozilla Foundation as open-source virtual machine named Tamarin
Adobe Flash Player 9 Update 1 (version 9.0.28.0, codenamed Marvin) (November 9, 2006)
Support for fullscreen mode
Adobe Flash Player 9 (version 9.0.45.0) (March 27, 2007)
Support for Creative Suite 3.
Adobe Flash Player 9 Update 2 (version Mac/Windows 9.0.47.0 and Linux 9.0.48.0, codenamed Hotblack) (June 11, 2007)
Security update
Adobe Flash Player 9 Update 3 (version 9.0.115.0, codenamed Moviestar or Frogstar) (December 2007)
H.264
AAC (HE-AAC, AAC Main Profile, and AAC-LC)
New Flash Video file format F4V based on the ISO base media file format (MPEG-4 Part 12)
Support for container formats based on the ISO base media file format
Last version for Windows 98/ME and other platforms
Adobe Flash Player 10 (version 10.0.12.36, codenamed Astro) (October 15, 2008)
New features
3D object transformations
Custom filters via Pixel Bender
Advanced text support
Speex audio codec
Real Time Media Flow Protocol (RTMFP)
Dynamic sound generation
Vector data type
Enhanced features
Larger bitmap support
Graphics drawing API
Context menu
Hardware acceleration
Anti-aliasing engine (Saffron 3.1)
Read/write clipboard access
WMODE
Adobe Flash Player 10 (version 10.0.32.18) (July 27, 2009)
Adobe Flash Player 10 (version 10.0.42.34) (November 16, 2009)
Adobe Flash Player 10 (version 10.0.45.2) (February 21, 2010)
Adobe Flash Player 10.1 (version 10.1.53.64, codenamed Argo) (June 10, 2010)
Reuse of bitmap data copies for better memory management
Improved garbage collector
Hardware-based H.264 video decoding
HTTP Dynamic Streaming
Peer-assisted networking and multicast
Support for browser privacy modes
Multi-touch APIs
For Mac OS X 10.4 (PPC) or later
Using Cocoa UI for Macs
Use of double-buffered OpenGL context for fullscreen
Use of Core Animation
Adobe Flash Player 10.2 (version 10.2.152.26, codenamed Spicy) (February 8, 2011)
Stage Video, a full hardware-accelerated video pipeline
Internet Explorer 9 hardware-accelerated rendering support
Custom native mouse cursors
Multiple monitor fullscreen support
Enhanced subpixel rendering for text
Adobe Flash Player 10.2 (version 10.2.152.32) (February 28, 2011)
Adobe Flash Player 10.2 (version 10.2.153.1) (March 21, 2011)
Adobe Flash Player 10.2 (version 10.2.159.1) (April 15, 2011)
Adobe Flash Player 10.3 (version 10.3.181.14, codenamed Wasabi) (May 12, 2011)
Media measurement (video analytics for websites; desktop only)
Acoustic echo cancellation (including noise suppression, voice activity detection, and automatic compensation for microphone input levels; desktop only)
Integration with browser privacy controls for managing local storage (ClearSiteData NPAPI)
Native control panel
Auto-update notification for Mac OS X
Last version for Mac OS X 10.5 and Windows 2000 (unofficially bypassing the XP installer)
Adobe replaced Extended Support Release 10.3 by 11.7 on July 9, 2013.
Adobe Flash Player 10.3 (version 10.3.181.23) (June 5, 2011)
Adobe Flash Player 10.3 (version 10.3.181.26) (June 14, 2011)
Adobe Flash Player 10.3 (version 10.3.181.34) (June 29, 2011)
Adobe Flash Player 10.3 (version 10.3.183.5) (August 14, 2011)
Adobe Flash Player 10.3 (version 10.3.183.7) (August 24, 2011)
Adobe Flash Player 10.3 (version 10.3.183.10) (September 21, 2011)
Adobe Flash Player 10.3 (version 10.3.183.11) (November 11, 2011)
Adobe Flash Player 10.3 (version 10.3.183.25) (September 18, 2012)
Adobe Flash Player 10.3 (version 10.3.183.29) (October 8, 2012)
Adobe Flash Player 11 (version 11.0.1.152, codenamed Serrano) (October 4, 2011)
Desktop only
Stage 3D accelerated graphics rendering
Desktop: Windows (DirectX 9), OS X (Intel processor only) and Linux (OpenGL 1.3), SwiftShader fallback
Mobile: Android and iOS (OpenGL ES 2)
H.264/AVC software encoding for cameras
Native 64-bit
Asynchronous bitmap decoding
TLS secure sockets
Desktop and mobile
Stage Video hardware acceleration
Native extension libraries
Desktop: Windows (.dll), OS X (.framework)
Mobile: Android (.jar, .so), iOS (.a)
JPEG XR decoding
G.711 audio compression for telephony
Protected HTTP Dynamic Streaming (HDS)
Unlimited bitmap size
LZMA SWF compression
Mobile only
H.264/AAC playback
Front-facing camera
Background audio playback
Device speaker control
16- and 32-bit color depth
Adobe Flash Player 11.1 (version 11.1.102.55, codenamed Anza) (November 10, 2011)
Last version of the web browser plug-in for mobile devices (made for Android 2.2 to 4.0.3)
iOS 5 native extensions for AIR
StageText: Native text input UI for Android
Security enhancements, last official version for Windows 2000
Adobe Flash Player 11.1 (version 11.1.102.62) (March 5, 2012)
Adobe Flash Player 11.2 (version 11.2.202.228) (March 28, 2012)
Adobe Flash Player 11.2 (version 11.2.202.233) (April 12, 2012)
Adobe Flash Player 11.2 (version 11.2.202.235, codenamed Brannan) (May 3, 2012)
The Windows version offers automatic updater options
Dropped support for the browser plug-in on mobile devices (Android); Android app developers are encouraged to use Adobe AIR, and Android web developers to switch to HTML5.
Extended support for Flash Player 11.2 on Solaris, as it is the last version supported on that platform.
Adobe replaced Extended Support Release 11.2 on Linux with 24.0 on December 13, 2016.
Adobe Flash Player 11.3 (version 11.3.300.257) (June 8, 2012)
Adobe Flash Player 11.3 (version 11.3.300.262) (June 21, 2012)
Adobe Flash Player 11.3 (version 11.3.300.265) (July 11, 2012)
Adobe Flash Player 11.3 (version 11.3.300.268) (July 26, 2012)
Adobe Flash Player 11.3 (version 11.3.300.270) (August 4, 2012)
Desktop and mobile
Fullscreen interactive mode (keyboard input during fullscreen)
Native bitmap encoding and compression (PNG, JPEG, JPEG-XR)
Draw bitmaps with quality (low, medium, high, best)
Texture streaming for Stage3D
Dropped support for Linux and Solaris
Mobile-only
Auto-orientation on specific devices
USB debugging for AIR on iOS
Adobe Flash Player 11.3 (version 11.3.300.271) (September 18, 2012)
Adobe Flash Player 11.3 (version 11.3.300.273) (October 3, 2012)
Adobe Flash Player 11.4 (version 11.4.402.259) (August 10, 2012)
Flash Player only
ActionScript workers
SandboxBridge support
Licensing support: Flash Player Premium features for gaming
Flash Player and AIR
Stage3D "constrained" profile for increased GPU reach
LZMA support for ByteArray
StageVideo attachCamera/Camera improvements
Compressed texture with alpha support for Stage3D
DXT encoding
AIR only
Deprecated Carbon APIs for AIR
Direct AIR deployment using ADT
Push notifications for iOS
Ambient AudioPlaybackMode
Exception support in Native Extensions for iOS
Adobe Flash Player 11.4 (version 11.4.402.265) (August 21, 2012)
Adobe Flash Player 11.4 (version 11.4.402.278) (September 18, 2012)
Adobe Flash Player 11.4 (version 11.4.402.287) (October 8, 2012)
Adobe Flash Player 11.5
Shared ByteArray
Invoke Event enhancement (for openurl)
Packaging multiple libraries in an ANE (iOS)
Debug stack trace in release builds of Flash Player
Statically link DRM (desktop only)
Adobe Flash Player 11.6 (codenamed Folsom)
Lossless video export from standalone and authplay.dll
Support for flash.display.graphics.readGraphicsData() that returns a Vector of IGraphicsData
Improve permissions UI related to full screen keyboard access
Prevent ActiveX abuse in Office documents
Support file access in cloud on Windows
Enhance multi-SWF support
Migration certification for ANEs
RectangleTexture
File API update so AIR apps conform to Apple data storage guidelines
Separate sampler state for Stage3D
Set device specific Retina Display resolution (iOS)
Adobe Flash Player 11.7 (version 11.7.700.169, codenamed Geary) (April 9, 2013)
SharedObject.preventBackup property
forceCPURenderModeForDevices
Remote hosting of SWF files in case of multiple SWFs
Support for uploading 16-bit texture formats
GameInput updates
Android – create captive runtime apps
Adobe replaced Extended Support Release 11.7 on Mac and Windows with 13.0 on May 13, 2014.
Adobe Flash Player 11.8 (codenamed Harrison)
Stage3D baselineExtended profile
Recursive stop on MovieClip
Flash Player & AIR Desktop Game Pad Support
Support for large textures (extendedBaseline, 4096)
Rectangle texture
DatagramSocket
ServerSocket
Substitute a redirected URL from a source URLRequest for part of the URL in a new URLRequest
Adobe Flash Player 11.9 (codenamed Irving)
OS X Mavericks Support
Mac .pkg Installation Support
Adobe Flash Player 12 (codenamed Jones) (November 14, 2013)
Improved Mac .pkg Installation Support for the work flow and UI
Support for Internet Explorer 11 on Windows 7
Support for Safe Mode in Safari 6.1 and higher
64-bit PPAPI Flash Player for Google Chrome
Graphics: Buffer Usage flag for Stage3D
Adobe Flash Player 13 (codenamed King)
Supplementary Characters Enhancement Support for Text Field
Full Screen video message tweak
This is the Extended Support Release.
Adobe Flash Player 14 (version 14.0.0.125, codenamed Lombard) (June 10, 2014)
Stage 3D Standard profile
Adobe Flash Player 14 (version 14.0.0.145) (July 8, 2014)
Adobe Flash Player 14 (version 14.0.0.179) (August 12, 2014)
Adobe Flash Player 15 (version 15.0.0.152, codenamed Market) (September 9, 2014)
Improved support for browser zoom levels
Adobe Flash Player 15 (version 15.0.0.167) (September 23, 2014)
Adobe Flash Player 15 (version 15.0.0.223) (November 11, 2014)
Adobe Flash Player 16 (version 16.0.0.235, codenamed Natoma) (December 9, 2014)
Stage3D – Standard Constrained Profile
PPAPI Installers for Windows and Mac
Adobe Flash Player 16 (version 16.0.0.257) (January 13, 2015)
Adobe Flash Player 16 (version 16.0.0.287) (January 22, 2015)
Adobe Flash Player 16 (version 16.0.0.296) (January 27, 2015)
Adobe Flash Player 16 (version 16.0.0.305) (February 5, 2015)
Adobe Flash Player 17 (version 17.0.0.134, codenamed Octavia) (March 12, 2015)
Control Panel improvements
Installer improvements for Mac
Adobe Flash Player 17 (version 17.0.0.169) (April 14, 2015)
Adobe Flash Player 17 (version 17.0.0.188) (May 12, 2015)
Adobe Flash Player 18 (version 18.0.0.160, codenamed Presidio) (June 9, 2015)
Contains fixes for Adobe Security Bulletin APSB 15–11
Adobe Flash Player 18 (version 18.0.0.194) (June 23, 2015)
Adobe Flash Player 18 (version 18.0.0.203) (July 8, 2015)
Adobe Flash Player 18 (version 18.0.0.209) (July 14, 2015)
Adobe Flash Player 18 (version 18.0.0.232) (August 11, 2015)
Adobe Flash Player 19 (version 19.0.0.185, codenamed Quint) (September 21, 2015)
Adobe Flash Player 19 (version 19.0.0.207) (October 13, 2015)
Adobe Flash Player 19 (version 19.0.0.226) (October 16, 2015)
Adobe Flash Player 19 (version 19.0.0.245) (November 10, 2015)
Adobe Flash Player 20 (version 20.0.0.228, codenamed Rankin) (December 8, 2015)
Adobe Flash Player 20 (version 20.0.0.267) (December 28, 2015)
Adobe Flash Player 20 (version 20.0.0.270) (January 1, 2016)
Adobe Flash Player 20 (version 20.0.0.286) (January 19, 2016)
Adobe Flash Player 20 (version 20.0.0.306) (February 9, 2016)
Adobe Flash Player 21 (version 21.0.0.182, codenamed Sutter) (March 10, 2016)
Adobe Flash Player 21 (version 21.0.0.197) (March 23, 2016)
Adobe Flash Player 21 (version 21.0.0.213) (April 7, 2016)
Adobe Flash Player 21 (version 21.0.0.216) (April 8, 2016)
Adobe Flash Player 21 (version 21.0.0.226) (April 21, 2016)
Adobe Flash Player 21 (version 21.0.0.242) (May 12, 2016)
Adobe Flash Player 22 (version 22.0.0.185, codenamed Townsend) (June 16, 2016)
Adobe Flash Player 22 (version 22.0.0.209) (July 12, 2016)
Adobe Flash Player 22 (version 22.0.0.210) (July 14, 2016)
Adobe Flash Player 23 (version 23.0.0.162, codenamed Underwood) (September 13, 2016)
Adobe Flash Player 23 (version 23.0.0.185) (October 11, 2016)
Adobe Flash Player 23 (version 23.0.0.205) (October 26, 2016)
Adobe Flash Player 23 (version 23.0.0.207) (November 8, 2016)
Adobe Flash Player 24 (version 24.0.0.186, codenamed Van Ness) (December 13, 2016)
Adobe Flash Player 24 (version 24.0.0.194) (January 10, 2017)
Adobe Flash Player 24 (version 24.0.0.221) (February 14, 2017)
Adobe Flash Player 25 (version 25.0.0.127, codenamed Webster) (March 14, 2017)
Adobe Flash Player 25 (version 25.0.0.148) (April 11, 2017)
Adobe Flash Player 25 (version 25.0.0.163) (April 20, 2017)
Adobe Flash Player 25 (version 25.0.0.171) (May 9, 2017)
Adobe Flash Player 26 (version 26.0.0.126, codenamed York) (June 13, 2017)
Adobe Flash Player 26 (version 26.0.0.131) (June 16, 2017)
Adobe Flash Player 26 (version 26.0.0.137) (July 11, 2017)
Adobe Flash Player 26 (version 26.0.0.151) (August 8, 2017)
Adobe Flash Player 27 (version 27.0.0.130, codenamed Zoe) (September 12, 2017)
Adobe Flash Player 27 (version 27.0.0.159) (October 10, 2017)
Adobe Flash Player 27 (version 27.0.0.170) (October 16, 2017)
Adobe Flash Player 27 (version 27.0.0.183) (October 25, 2017)
Adobe Flash Player 27 (version 27.0.0.187) (November 14, 2017)
Adobe Flash Player 28 (version 28.0.0.126, codenamed Atka) (December 12, 2017)
Adobe Flash Player 28 (version 28.0.0.137) (January 9, 2018)
Adobe Flash Player 28 (version 28.0.0.161) (February 6, 2018)
Adobe Flash Player 29 (version 29.0.0.113) (March 13, 2018)
Adobe Flash Player 29 (version 29.0.0.140) (April 10, 2018)
Adobe Flash Player 29 (version 29.0.0.171) (May 8, 2018)
Adobe Flash Player 30 (version 30.0.0.113) (June 7, 2018)
Adobe Flash Player 30 (version 30.0.0.134) (July 10, 2018)
Adobe Flash Player 30 (version 30.0.0.154) (August 14, 2018)
Adobe Flash Player 31 (version 31.0.0.108) (September 11, 2018)
Adobe Flash Player 31 (version 31.0.0.122) (October 9, 2018)
Adobe Flash Player 31 (version 31.0.0.148) (November 13, 2018)
Adobe Flash Player 32 (version 32.0.0.101) (December 5, 2018)
Adobe Flash Player 32 (version 32.0.0.114) (January 8, 2019)
Adobe Flash Player 32 (version 32.0.0.142) (February 12, 2019)
Adobe Flash Player 32 (version 32.0.0.156) (March 12, 2019)
Adobe Flash Player 32 (version 32.0.0.171) (April 9, 2019)
Adobe Flash Player 32 (version 32.0.0.192) (May 14, 2019)
Adobe Flash Player 32 (version 32.0.0.207) (June 11, 2019)
Adobe Flash Player 32 (version 32.0.0.223) (July 9, 2019)
Adobe Flash Player 32 (version 32.0.0.238) (August 13, 2019)
Adobe Flash Player 32 (version 32.0.0.255) (September 10, 2019)
Adobe Flash Player 32 (version 32.0.0.270) (October 9, 2019)
Adobe Flash Player 32 (version 32.0.0.293) (November 12, 2019)
Adobe Flash Player 32 (version 32.0.0.303) (December 10, 2019)
Adobe Flash Player 32 (version 32.0.0.314) (January 14, 2020)
Adobe Flash Player 32 (version 32.0.0.321) (January 21, 2020)
Adobe Flash Player 32 (version 32.0.0.330) (February 11, 2020)
Adobe Flash Player 32 (version 32.0.0.344) (March 10, 2020)
Adobe Flash Player 32 (version 32.0.0.363) (April 14, 2020)
Adobe Flash Player 32 (version 32.0.0.371) (May 12, 2020)
Adobe Flash Player 32 (version 32.0.0.387) (June 9, 2020)
Refuses to play Flash content after January 12, 2021, and instead displays a static warning message.
Adobe Flash Player 32 (version 32.0.0.403) (July 14, 2020)
Adobe Flash Player 32 (version 32.0.0.414) (August 11, 2020)
Adobe Flash Player 32 (version 32.0.0.433) (September 8, 2020)
Adobe Flash Player 32 (version 32.0.0.445) (October 13, 2020)
Adobe Flash Player 32 (version 32.0.0.453) (November 10, 2020)
Adobe Flash Player 32 (version 32.0.0.465) (December 8, 2020)
Final global variant update.
See also
Adobe AIR
Adobe Shockwave
Apache Flex
Microsoft Silverlight
References
Further reading
Understanding Flash Player with Adobe Scout – an article discussing the internals of the player and the Adobe Scout profiling tool
External links
Adobe Flash Player - Debug Downloads, contains the latest version of the standalone projector.
Flash Tester (explains official old working version check)
1996 software
Adobe Flash
Adobe software
Classic Mac OS media players
Computer-related introductions in 1996
IRIX software
Linux media players
MacOS media players
Macromedia software
OS/2 software
Products and services discontinued in 2020
Proprietary cross-platform software
Proprietary freeware for Linux
Solaris media players
Windows components
Windows media players
Obsolete technologies
Discontinued Adobe software
FreeBSD Documentation License
The FreeBSD Documentation License is the license that covers most of the documentation for the FreeBSD operating system.
License
The license is very similar to the 2-clause Simplified BSD License used by FreeBSD itself; however, it clarifies how the notions of "source code" and "compiled" forms apply in the context of documentation. It also includes a mandatory disclaimer about IEEE and Open Group material in some manual pages.
The FreeBSD Documentation License
Copyright 1994-2015 The FreeBSD Project. All rights reserved.
Redistribution and use in source (SGML DocBook) and 'compiled' forms (SGML, HTML, PDF, PostScript,
RTF and so forth) with or without modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code (SGML DocBook) must retain the above copyright notice, this list of
conditions and the following disclaimer as the first lines of this file unmodified.
2. Redistributions in compiled form (transformed to other DTDs, converted to PDF, PostScript, RTF
and other formats) must reproduce the above copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other materials provided with the distribution.
THIS DOCUMENTATION IS PROVIDED BY THE FREEBSD DOCUMENTATION PROJECT "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
THE FREEBSD DOCUMENTATION PROJECT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Manual Pages
Some FreeBSD manual pages contain text from the IEEE Std 1003.1, 2004 Edition, Standard for
Information Technology -- Portable Operating System Interface (POSIX®) specification. These manual
pages are subject to the following terms:
The Institute of Electrical and Electronics Engineers and The Open Group, have given us
permission to reprint portions of their documentation.
In the following statement, the phrase ``this text'' refers to portions of the system
documentation.
Portions of this text are reprinted and reproduced in electronic form in the FreeBSD manual
pages, from IEEE Std 1003.1, 2004 Edition, Standard for Information Technology --
Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue
6, Copyright (C) 2001-2004 by the Institute of Electrical and Electronics Engineers, Inc and
The Open Group. In the event of any discrepancy between these versions and the original
IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the
referee document. The original Standard can be obtained online at
http://www.opengroup.org/unix/online.html.
This notice shall appear on any product containing this material.
Reception
The Free Software Foundation classes this as a free documentation license, stating that "This is a permissive non-copyleft free documentation license that is compatible with the GNU FDL."
Derivatives
Based on the FreeBSD Documentation License, the BSD Documentation License was created with terms more generic to most projects, as well as reintroducing the third clause that restricts the use of the documentation for endorsement purposes (as found in the New BSD License).
See also
BSD license
GNU Free Documentation License
References
External links
The FreeBSD Documentation Project
The FreeBSD Documentation License
FreeBSD Documentation main page
FreeBSD project main page
FreeBSD
Free content licenses
SIM NJ (Society for Information Management – New Jersey Chapter)
The New Jersey Chapter of The Society for Information Management (also referred to as SIM or Society for Information Management International, SIMI) is a professional organization of over 250 senior IT executives, chief information officers, prominent academicians, selected consultants, and other IT thought leaders. Members of the SIM NJ Chapter work for companies in the NJ metropolitan area or reside within the region. They come together to share and enhance their rich intellectual capital for the benefit of the members and their organizations. The chapter is organized for the purpose of bringing together senior information technology (IT) executives from leading companies in the New Jersey area to advance the use of information technology to achieve business objectives.
SIM NJ Chapter vision and mission
Vision: To be recognized as the community that is most preferred by IT leaders for delivering vital knowledge that creates business value and enables personal development.
Mission: SIM is an association of senior IT executives, prominent academicians, selected consultants, and other IT thought leaders built on the foundation of local chapters, who come together to share and enhance their rich intellectual capital for the benefit of its members and their organizations.
History
The NJ Chapter of SIM was formed in 1984. The chapter has grown significantly over the years and is now one of the largest chapters in the US. The New Jersey chapter has also had several of its members go on to become presidents of the overall Society for Information Management organization such as John Hammit, John Stevenson, and June Drewry.
Current and past presidents
NJ SIM elects its Executive Committee in April of each year, as it operates on a program-year basis that begins in September. The terms of most Executive Committee members, including the Chapter President, are twelve months, so a single term of office runs from September of one year to August of the next.
Details of Chapter Presidents prior to 1996 are still being sought.
Membership
Membership requirements:
Candidates for membership should be senior information executives in private or public sector organizations who are Corporate/Division heads of an I.S./I.T. organization responsible for information systems and technology. Candidates who are full-time faculty members in colleges or universities should be engaged in teaching and/or research activities that are devoted primarily to areas of senior managerial interest.
To qualify for membership in SIM New Jersey, you must belong to one of the following membership categories:
Practitioners:
IT MANAGEMENT – Senior information technology executives in private or public sector organizations who are Corporate or Divisional heads of Information Technology organizations responsible for information systems and technology. A senior information technology executive's direct report(s) with significant responsibility may also be admitted for membership.
BUSINESS EXECUTIVES – Senior business executives from public or private organizations whose primary responsibility is not information management, but who play a key role in the use of information technology in their own organization (e.g., CEOs, CFOs).
OTHER LEADERS – Leaders in shaping and influencing law and government policy in areas of professional concern to information managers.
Vendors:
(Note: SIM NJ limits the number of vendor members to 20% of Practitioners)
CONSULTANT – Leading individuals and other experts from consulting firms who are able to demonstrate that their consulting activities are primarily performed at the senior level of the organizations they service and are not primarily in a sales role.
VENDORS – Leading individuals and other experts from vendor firms (software, hardware, IT Recruiting, other IT) who are able to demonstrate that their activities are primarily performed at the senior level of the organizations they service.
Academics:
(Note: SIM NJ limits the number of academic members to 10% of Practitioners)
ACADEMIC – Full-time university or college faculty members making a significant contribution to the field. An academic institutional chapter membership may be approved by the Executive Committee for the purpose of permitting participation of multiple members of academic institutions; however, one member of the academic institution must be designated as the SIM International member and will be the sole institutional member with voting rights.
Others:
REGIONAL LEADERSHIP FORUM (RLF) GRADUATES – Individuals who have graduated from the SIM International's Regional Leadership Forum (RLF) program.
Membership categories
To meet the needs of its diverse membership, SIM has four membership categories:
Individual
Academic
Corporate (enterprise membership)
SIM partner
SIM NJ adheres to a strict policy against marketing to its members or using SIM NJ events to promote business products and/or services.
Governance
Organization leadership
The SIMNJ Chapter is governed by a membership-elected Executive Committee (EC). The EC is composed of the President, the Executive Vice President, and Vice President(s) for each major area of Chapter operations, including Membership, Membership Retention, Programs, Marketing, Communications, Representative to the Society for Information Management, Administration, and Treasury. Also part of the EC is the Board of Trustees, which has five members, all past Chapter Presidents, who provide guidance and counsel to the President of the Chapter, perform the annual financial audit, and serve as the nominating committee. All EC positions are for one-year terms, with the exception of the Treasurer, the Representative to the Society for Information Management, and the Board of Trustees, which serve two-year terms. SIM NJ runs a program and fiscal year that begins in September of each year and ends in August of the following year. Election of the following year's EC leadership is held in March of each year. In addition to the Vice President(s) who are responsible for their specific areas, Committees may also be formed to assist with planning, coordination, and implementation of activities within each office. In addition to the Vice President who serves as the Representative to the Society for Information Management, each Vice President as well as the President is typically involved with peers from other SIM Chapters in broader Society for Information Management initiatives and collaboration.
Programs
SIM NJ offers its members a variety of meetings, programs, and conferences to enhance knowledge sharing and networking. Regular monthly meetings are held each month from September through June, with the December and June meetings held as member socials. The monthly meetings are held in the evenings. Attendance at these meetings is restricted to SIM NJ members, guests of SIM NJ members, or members from other SIM chapters. Examples of prior meeting topics include:
The Bionic CIO: Rebuilt to be better, stronger, faster.
Technology Roundtable Discussions: Cloud Computing, Revenue Generating IT Innovations, and Demonstrating the Business Value of IT
Why IT Needs Marketing Now More Than Ever
Mentoring – Building People Capability and Creating a Learning Environment
FBI Domain Program and the Foreign Threat to US Technology
In addition to regular monthly meetings, SIM NJ also runs, hosts, or co-sponsors other CIO-level events such as The NJ CIO Executive Summit. These events have been run for several years and have come to be some of the region's best IT leadership conferences. In October 2010, NJ SIM and the Stevens Institute of Technology hosted an IT Academic Session which featured interactive presentations and discussions with top professors from Stevens and CIOs from local organizations.
Advanced Practices Council
Advanced Practices Council (APC) is a forum for senior IT executives who commission exclusive research and share cross-industry perspectives in an intimate, candid atmosphere. APC was founded in 1991 by Warren McFarlan of the Harvard Business School. This program is run by the Society for Information Management and is available to SIM NJ Chapter members.
SIMposium
SIMposium is SIM's annual practitioner-driven conference designed for and by CIOs. Working nationally with key senior IT executives, CIOs and recognized thought leaders, SIMposium addresses the topics, issues, best practices and trends that will give our audience the technology-related insight necessary to make the right decisions to impact their business strategies and future IT direction. From 1995 to 2002, SIMposium was called SIM Interchange Annual Conference.
SIM NJ Chapter socials
The SIM NJ Chapter also holds two social events each year. One is the Holiday Social held in December and the other is the Spring Social held in June. These socials are restricted to SIM NJ members and their significant others only. These events are typically held at some of the area's finest venues such as the Pleasantdale Chateau, The Manor, and the Grand Cafe. The socials provide a great opportunity for networking amongst members and their guests and have come to be very popular within the Chapter.
NJ SIM Foundation
Several years ago, the SIM NJ Chapter formed a charitable 501(c)(3) organization called the NJ SIM Foundation. The NJ SIM Foundation aims to provide charitable assistance in the form of funding, resources, and materials for individuals or organizations that are in need, or that need technology capability but cannot afford it. The NJ SIM Foundation raises funds through donations and special events run throughout the year. One of the largest contributors to the NJ SIM Foundation is the annual Charity Golf Event and Technology Exchange, the one event each year where vendors and sponsors can directly market their products and services to SIM NJ members and the event's attendees. The event has a morning component, the Technology Exchange, which features keynote presentations as well as the chance for attendees to visit with the event's sponsors. The afternoon golf tournament is followed by a banquet dinner, silent auction, awards, and a presentation by the event's benefactor. The NJ SIM Foundation also provides annual scholarship awards for students in the IT field of study. The NJ SIM Foundation has raised over $1 million since its inception.
References
External links
SIMNET official website
New Jersey site
The NJ SIM Foundation official website
Non-profit organizations based in New Jersey
Kik Messenger
Kik Messenger, commonly called Kik, is a freeware instant messaging mobile app from the Canadian company Kik Interactive, available free of charge on iOS and Android operating systems. It uses a smartphone's data plan or Wi-Fi to transmit and receive messages, photos, videos, sketches, mobile web pages, and other content after users register a username. Kik is known for its features preserving users' anonymity, such as allowing users to register without the need to provide a telephone number or valid email address. However, the application does not employ end-to-end encryption, and the company also logs user IP addresses, which could be used to determine the user's ISP and approximate location. This information, as well as "reported" conversations, is regularly surrendered upon request to law enforcement organizations, sometimes without the need for a court order.
Kik was originally intended to be a music-sharing app before transitioning to messaging, briefly offering users the ability to send a limited number of SMS text messages directly from the application. During the first 15 days after Kik's re-release as a messaging app, over 1 million accounts were created. In May 2016, Kik Messenger announced that it had approximately 300 million registered users and that the app was used by approximately 40% of United States teenagers.
Kik Messenger was acquired by Medialab in October 2019.
History
Kik Interactive was founded in 2009 by a group of students from the University of Waterloo in Canada who wished to create new technologies for use on mobile smartphones. Kik Messenger is the first app developed by Kik Interactive, and was released on October 19, 2010. Within 15 days of its release, Kik Messenger reached one million user registrations, with Twitter being credited as a catalyst for the new application's popularity.
On November 24, 2010, Research In Motion (RIM) removed Kik Messenger from BlackBerry App World and limited the functionality of the software for its users. RIM also sued Kik Interactive for patent infringement and misuse of trademarks. In October 2013, the companies settled the lawsuit, with the terms undisclosed.
In November 2014, Kik announced a $38.3 million Series C funding round and its first acquisition, buying GIF Messenger "Relay". The funding was from Valiant Capital Partners, Millennium Technology Value Partners, and SV Angel. By this time, Kik had raised a total of $70.5 million.
On August 16, 2015, Kik received a $50 million investment from Chinese Internet giant Tencent, the parent company of the popular Chinese messaging service WeChat. The investment earned the company a billion dollar valuation. Company CEO Ted Livingston stated Kik's aspirations to become "the WeChat of the West" and said that attracting younger users was an important part of the company's strategy.
In 2017 Kik decided against more VC funding, instead raising nearly $100 million in a high-profile initial coin offering (ICO) on the Ethereum blockchain. In this crowd sale, they sold "Kin" digital tokens to the contributors.
In November 2017, Kik Messenger was silently removed from the Windows Store. As of 23 January 2018, neither the developers nor Microsoft have provided a statement or an explanation on the removal of the app.
In June 2018, the Kin Coin was officially released on the Kik platform in Beta.
In July 2018, the Kin Foundation released the Kinit beta app on the Google Play store, restricted to US residents only. It offers different ways of earning and spending the Kin coin natively; for example, a user can do simple surveys to earn Kin and spend it on digital goods like gift cards.
In September 2019, Kik's CEO and founder Ted Livingston announced in a blog post that Kik Messenger would be shut down on 19 October 2019, with over 100 employees laid off. However, this decision was later reversed, and in October 2019 Medialab acquired Kik Messenger.
Features
A main attraction of Kik that differentiates it from other messaging apps is its anonymity. To register for the Kik service, a user must enter a first and last name, e-mail address, and birth date (which must show that the user is at least 13 years old), and select a username. The Kik registration process does not request or require the entry of a phone number (although the user has the option to enter one), unlike some other messaging services that require a user to provide a functioning mobile phone number.
The New York Times has reported that, according to law enforcement, Kik's anonymity features go beyond those of most widely used apps. As of February 2016, Kik's guide for law enforcement said that the company cannot locate user accounts based on first and last name, e-mail address and/or birth date; the exact username is required to locate a particular account. The guide further said that the company does not have access to content or "historical user data" such as photographs, videos, and the text of conversations, and that photographs and videos are automatically deleted shortly after they are sent. A limited amount of data from a particular account (identified by exact username), including first and last name, birthdate, e-mail address, link to a current profile picture, device-related information, and user location information such as the most recently used IP address, can be preserved for a period of 90 days pending receipt of a valid order from law enforcement. Kik's anonymity has also been cited as a protective safety measure for good faith users, in that "users have screennames; the app doesn't share phone numbers or email addresses."
Kik introduced several new user features in 2015, including a full-screen in-chat browser that allows users to find and share content from the web; a feature allowing users to send previously recorded videos in Kik Messenger for Android and iOS; and "Kik Codes", which assigns each user a unique code similar to a QR code, making it easier to connect and chat with other users. Kik joined the Virtual Global Taskforce, a global anti-child-abuse organization, in March 2015. Kik began using Microsoft's PhotoDNA in March 2015 to premoderate images added by users. That same month, Kik released native video capture allowing users to record up to 15 seconds in the chat window. In October 2015, Kik partnered with the Ad Council as part of an anti-bullying campaign. The campaign was featured on the app and Kik released stickers in collaboration with the campaign. Kik released a feature to send GIFs as emojis in November 2015. Kik added SafePhoto to its safety features in October 2016 which "detects, reports, and deletes known child exploitation images" sent through the platform. Kik partnered with ConnectSafely in 2016 to produce a "parents handbook" and joined The Technology Coalition, an anti-sexual exploitation group including Facebook, Google, Twitter and LinkedIn.
Bots
Kik added promoted chats in 2014, which used bots to converse with users about promoted brands through keywords activating responses. The feature allows companies to communicate with more potential clients than would be possible manually. Promoted messages reach target audiences by gender, country and device. In April 2016, Kik added a bot store to its app, which allows users to order food or products through an automated chat. Third-party companies release bots which will access the company's offerings. The bot shop added a web bubble (also known as "wubbles") feature to allow rich media content to be shared in conversation threads, as well as suggested responses and a feature allowing bots to be active in group threads. An update, released in September 2016, added concierge bots which can give users tips, tutorials, or recommendations within a specific brand.
Security
On November 4, 2014, Kik scored 1 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. Kik received a point for encryption during transit but lost points because communications are not encrypted with a key to which the provider does not have access, users cannot verify contacts' identities, past messages are not secure if the encryption keys are stolen, the code is not open to independent review, the security design is not properly documented, and there had not been a recent independent security audit.
Awards and recognition
On October 1, 2014, Sony Music and Kik Interactive were given a Smarties award by the Mobile Marketing Association (MMA) for their global music marketing campaign with One Direction. In October 2016, company CEO Ted Livingston was recognized as Toronto's most brilliant tech innovator by Toronto Life for his work with Kik. Livingston was also recognized as one of the "Most Creative People in Business" on Fast Company's 2017 list.
Controversies
Minors' use of Kik and explicit content
Like many other social media services, Kik has garnered negative attention due to instances of minors exchanging explicit messages and photos with adults, causing law enforcement and the media to frequently express concerns about the app. Automated spam bots have also been used to distribute explicit images and text over Kik Messenger. A state law enforcement official interviewed by The New York Times in February 2016 identified Kik as "the problem app of the moment". Police said they found Kik's response frustrating, and one detective said obtaining information from Kik was a "bureaucratic nightmare". Constable Jason Cullum of the Northamptonshire Police paedophile online investigation team stated that delays in obtaining information from the company increased the risk to children. Cullum stated, "It's incredibly frustrating. We're banging our heads against a brick wall. There's a child that's going to be abused for probably another 12 months before we know who that is." Since its acquisition by Medialab, Kik has revamped its policies and launched a variety of tools and resources, including a guide for law enforcement and parents.
Prior to 2015, Kik Interactive addressed this issue by informing parents and police about their options to combat child exploitation. In March 2015, the company adopted a more aggressive strategy by utilizing Microsoft's PhotoDNA cloud service to automatically detect, delete, and report the distribution of child exploitation images on its app. (Some experts have noted that because PhotoDNA operates by comparing images against an existing database of exploitative images, it does not effectively prevent "realtime" online child abuse and may not detect material not yet added to its comparison database.) Kik Interactive also began collaborating internationally with law enforcement by joining the Virtual Global Taskforce, a partnership between businesses, child protection agencies, and international police services that combats online child exploitation and abuse. The company also sponsors an annual conference on crimes against children.
Kik has been criticized for providing inadequate parental control over minors' use of the app. The ability to share messages without alerting parents has been noted as "one of the reasons why teens like Kik". Parents cannot automatically view their child's Kik communications remotely from another device, but instead must have the password to their child's user account and view the communications on the same device used by their child. As of February 2016, Kik's parents' guide stresses that teens between 13 and 18 should have a parent's permission to use Kik, but there is no technical way to enforce the requirement or to guarantee that a minor will not enter a false birthdate. Kik Interactive has said that it uses "typical" industry standards for age verification, that "perfect age verification" is "not plausible", and that the company deletes accounts of users under 13 when it finds them, or when a parent requests the deletion.
Open-source module name
In March 2016, Kik Interactive was involved in a high-profile dispute over use of the name "kik" with independent code developer Azer Koçulu, the author of numerous open-source software modules published on npm, a package manager widely used by JavaScript projects to install dependencies. Koçulu had published an extension to Node.js on npm under the name "kik". Kik Interactive contacted him objecting to his use of the name, for which the company claimed intellectual property rights, and asked him to change the name. When Koçulu refused, Kik Interactive contacted npm management, who agreed to transfer ownership of the module to Kik without Koçulu's consent. Koçulu then unpublished all of his modules from npm, including a popular eleven-line code module called "left-pad" upon which many JavaScript projects depended. Although Koçulu subsequently published left-pad on GitHub, its sudden removal from npm caused many projects (including Kik itself) to stop working, because widely used packages such as Babel depended on it directly or transitively. In view of the widespread disruption, npm restored Koçulu's left-pad and made Cameron Westland of Autodesk its maintainer. The incident sparked controversy over the assertion of intellectual property rights and the use of dependencies in software development.
Cryptocurrency
Kin is an ERC-20 cryptocurrency token issued on the public Ethereum blockchain. Kin was first announced in early 2017, marking a pivot in Kik's strategy in response to difficulties competing with larger social networks such as Facebook. Kin was launched in September 2017 with an initial coin offering (ICO) that raised $98 million from 10,000 participants. The token is intended to facilitate value transfers in digital services such as gaming applications and social media; it was initially launched on Kik Messenger to leverage the application's 15 million monthly active users.
The enforcement division of the U.S. Securities and Exchange Commission considers the cryptocurrency offering to have been an unregistered securities offering and is expected to begin legal action against the company. Kik has challenged the SEC's ability to regulate cryptocurrencies.
On September 7, 2017, only days before the Kin ICO, Kik announced that Canadian citizens would be barred from participating, citing weak guidance from the Ontario Securities Commission for the decision. This partly suppressed participation in the ICO, with only $98 million raised of the $125 million goal.
By 2019 the value of Kin had fallen by 99%. In the twelve months leading up to March 29, 2021, its value had grown about 6300%, or about 64 times its prior value.
See also
Comparison of cross-platform instant messaging clients
References
External links
Companies based in Waterloo, Ontario
Instant messaging
Instant messaging clients
Computer-related introductions in 2010
Canadian brands
Ethereum tokens |
61514054 | https://en.wikipedia.org/wiki/AES-GCM-SIV | AES-GCM-SIV | AES-GCM-SIV is a mode of operation for the Advanced Encryption Standard which provides similar performance to Galois/Counter Mode as well as misuse resistance in the event of the reuse of a cryptographic nonce. The construction is defined in RFC 8452.
About
AES-GCM-SIV is designed to preserve both privacy and integrity even if nonces are repeated. To accomplish this, encryption is a function of a nonce, the plaintext message, and optional additional associated data (a.k.a. AAD). In the event a nonce is misused (i.e. used more than once), nothing is revealed except in the case that the same message is encrypted multiple times with the same nonce. When that happens, an attacker is able to observe repeat encryptions, since encryption is a deterministic function of the nonce and message. However, beyond that, no additional information is revealed to the attacker. For this reason, AES-GCM-SIV is an ideal choice in cases where unique nonces cannot be guaranteed, such as multiple servers or network devices encrypting messages under the same key without coordination.
Operation
Like Galois/Counter Mode, AES-GCM-SIV combines the well-known counter mode of encryption with the Galois mode of authentication. The key feature is the use of a synthetic initialization vector which is computed with Galois field multiplication using a construction called POLYVAL (a little-endian variant of Galois/Counter Mode's GHASH). POLYVAL is run over the combination of nonce, plaintext, and additional data, so that the IV is different for each combination.
POLYVAL is defined over GF(2^128) by the polynomial:
x^128 + x^127 + x^126 + x^121 + 1
Note that GHASH is defined over the "reverse" polynomial:
x^128 + x^7 + x^2 + x + 1
This change provides efficiency benefits on little-endian architectures.
Implementations
Implementations of AES-GCM-SIV are available, among others, in the following languages (an illustrative Python usage sketch follows the list):
C
C#
Go
Java
PHP
Python
Rust
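As a minimal illustration, the sketch below encrypts and then decrypts a message with AES-GCM-SIV in Python. It assumes the AESGCMSIV class provided by recent versions of the Python cryptography package (older releases may not include it); the key handling and messages are placeholders for demonstration only.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCMSIV  # assumed available in recent releases

key = AESGCMSIV.generate_key(bit_length=256)   # 256-bit AES key
aead = AESGCMSIV(key)
nonce = os.urandom(12)                         # 96-bit nonce, as specified by RFC 8452
aad = b"header"                                # optional associated data

ciphertext = aead.encrypt(nonce, b"secret message", aad)
assert aead.decrypt(nonce, ciphertext, aad) == b"secret message"

# Encryption is deterministic in (key, nonce, message, AAD), so repeating a nonce
# with the same message reveals only that the message was repeated.
assert aead.encrypt(nonce, b"secret message", aad) == ciphertext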
See also
Authenticated encryption
Galois/Counter Mode
Stream cipher
External links
: AES-GCM-SIV: Nonce Misuse-Resistant Authenticated Encryption
BIU: Webpage for the AES-GCM-SIV Mode of Operation
References
Block cipher modes of operation
Finite fields
Message authentication codes
Authenticated-encryption schemes |
66358899 | https://en.wikipedia.org/wiki/2018%20EC4 | 2018 EC4 | 2018 EC4 is a small asteroid and Mars trojan orbiting near the L5 point of Mars (60 degrees behind Mars on its orbit).
Discovery, orbit and physical properties
2018 EC4 was first observed on 10 March 2018 by the Mt. Lemmon Survey, but it had already been imaged (though not identified as an asteroid) by the Pan-STARRS 1 telescope system at Haleakala on 29 October 2011. Its orbit is characterized by low eccentricity (0.061), moderate inclination (21.8°) and a semi-major axis of 1.52 AU. Upon discovery, it was classified as a Mars-crosser by the Minor Planet Center. Its orbit is well determined, being based (as of January 2021) on 70 observations with a data-arc span of 3,131 days. 2018 EC4 has an absolute magnitude of 20.1, which gives a characteristic diameter of about 300 m.
Mars trojan and orbital evolution
Recent calculations indicate that it is a stable Mars trojan with a libration period of 1250 yr and an amplitude of 17°. These values are similar to those of 5261 Eureka and related objects and it may be a member of the so-called Eureka family.
See also
5261 Eureka (1990 MB)
References
Further reading
Three new stable L5 Mars Trojans de la Fuente Marcos, C., de la Fuente Marcos, R. 2013, Monthly Notices of the Royal Astronomical Society: Letters, Vol. 432, Issue 1, pp. 31–35.
Orbital clustering of Martian Trojans: An asteroid family in the inner solar system? Christou, A. A. 2013, Icarus, Vol. 224, Issue 1, pp. 144–153.
External links
2018 EC4 data at MPC.
Mars trojans
Minor planet object articles (unnumbered)
20180310 |
16926574 | https://en.wikipedia.org/wiki/Polyhedra%20DBMS | Polyhedra DBMS | Polyhedra is a family of relational database management systems offered by ENEA AB, a Swedish company. The original version of Polyhedra (now referred to as Polyhedra IMDB) was an in-memory database management system which could be used in high availability configurations; in 2006 Polyhedra Flash DBMS was introduced to allow databases to be stored in flash memory. All versions employ the client–server model to ensure the data are protected from misbehaving application software, and they use the same SQL, ODBC and type-4 JDBC interfaces. Polyhedra is targeted primarily for embedded use by Original Equipment Manufacturers (OEMs), and big-name customers include Ericsson, ABB, Emerson, Lockheed Martin, United Utilities and Siemens AG.
Company
Polyhedra development was started in 1991 by Perihelion Technology Ltd, a subsidiary of Perihelion Software Ltd (PSL); initially, the project had a working title the "Perihelion Application Toolkit", but was soon renamed Polyhedra (using a left-over trademark from another PSL project). There was a management buyout of PTL in 1994, and the company name changed to Polyhedra plc to match the name of the product. Polyhedra plc was in turn acquired by Enea AB in 2001. All development and support is still done in the English town of Shepton Mallet, where PSL was based.
Features
Tim King, the founder of Perihelion Software Ltd, developed a relational DBMS for historical data as part of his PhD work; Dave Stoneham, who set up PTL, had previously developed a SCADA system. Building on these experiences, Polyhedra was originally developed "to bring the benefits of relational technology to the embedded market". To this end, it had to have a small footprint, be very fast, and avoid the need for polling, which is a performance killer. Consequently, it was designed from the start to:
keep the working copy of the data in-memory (though there is now a variant that keeps the data in a flash-based file);
use a client–server architecture to protect the data from corruption by rogue application code;
have an 'active query' mechanism to update client applications when relevant database changes occur;
have a very simple processing model, where a transaction is either a schema change, a query, or a request for a set of inserts, updates and/or deletes – such alterations can be expressed either via SQL statements or by updating through active queries, with an optimistic concurrency mechanism to handle clashing updates;
have a table inheritance mechanism which, when combined with Database triggers (via the CL language, see below), allows the database designer to program the database in an object-oriented fashion. Table inheritance also avoids or reduces the need for supplementary tables whose primary key is a foreign key to another table, and thus can simplify many queries and updates.
have a Historian module to allow large volumes of time-series data to be captured, stored, archived and queried in an efficient fashion.
Polyhedra IMDB achieves data persistence through the use of snapshots and journal logging; Polyhedra Flash DBMS uses shadow paging, with 2 levels of recursion. In addition, Polyhedra can be used in hot-standby configurations for improved availability. The transactional model used by all Polyhedra products ensures atomicity, consistency and isolation (as defined by the ACID properties); durability is enforced in Polyhedra Flash DBMS, while in Polyhedra IMDB clients can choose the durability model when they issue their transactions.
"The Polyhedra DBMS system is fundamentally different compared to other relational systems, because of its active behaviour. This is achieved through two mechanisms, active queries and by the control language (CL). An active query looks quite like a normal query where some data is retrieved and/or written, but instead the query stays in the database until explicitly aborted. When a change in the data occurs that would alter the result of the query, the application is notified. The CL, which is a fully object-oriented script language that supports encapsulation, information hiding and inheritance, can determine the behaviour of data in the database. This means that methods, private or public, can be associated with data performing operations on them without involving the application."
Polyhedra is not a general-purpose DBMS, as the restricted transactional model does not meet all needs, and its fault-tolerance model is based on the hot-standby approach (to minimise hardware costs) rather than clustering (which is better for load-sharing). However, its limitations are benefits in embedded use, where the emphasis in a deployed application is on performance and cost rather than handling continually varying usage patterns.
Most of the Polyhedra products are made available for purchase under a proprietary license, but in 2012 Enea released Polyhedra Lite under a freeware license.
Release history
1991 Development started.
1993 Polyhedra 1.0: first commercial release of an in-memory Relational DBMS (RDBMS).
1995 Ported to Windows and Linux.
1996 Polyhedra 2.0: added hot standby configurations for use in applications needing high availability. First port to an RTOS (pSOS)
1997 Polyhedra 3.0: new in-memory data storage engine, for improved space and time efficiency.
1999 Polyhedra 3.1: adds new data types, ODBC API. OSE port.
2001 Polyhedra 4.0: JDBC support, additional index type, read-only replicas, multi-threading.
2002 Polyhedra 4.1: client–server comms overhauled for substantial performance improvements, especially for client apps using the ODBC API (now deemed the 'native' API for all platforms).
2003 Polyhedra 5.0: UNICODE, schema migration (SQL 'ALTER TABLE').
2004 Polyhedra 6.0: 64-bit support re-introduced, for Linux and Solaris. (It previously had been available on DEC Alpha under Digital UNIX until usage of that platform generally died out.) Polyhedra64 has subsequently been ported to Windows x64.
2006 Polyhedra Flash DBMS introduced, based on a fork of the Polyhedra IMDB code base.
2007 Polyhedra 7.0: Polyhedra IMDB and Polyhedra Flash DBMS code bases unified, for ease of support and greater commonality of features. Also, enhanced resource management and multi-threading.
2008 Polyhedra 8.0: Polyhedra Flash DBMS now supports hot standby configurations for use in applications needing high availability, in a similar way to Polyhedra IMDB. Polyhedra 8.1 added Linux/MIPS support, the ability to monitor active queries, and enhancements to the historian.
2009 Polyhedra 8.2: Linux ODBC drivers and IPv6
2010
Polyhedra 8.3: Some SQL enhancements and streaming output from historian.
Polyhedra 8.4: performance enhancements
2011 Polyhedra 8.5: better integration with 3rd-party tools, and improved performance on Windows. Replica servers can be used in a fan-out configuration for better scaling.
2012 Polyhedra 8.6: 64-bit integer data type. Polyhedra Lite introduced: a free-to-use, reduced-functionality version of Polyhedra32 IMDB, available for Windows, and for Linux on x86 and the Raspberry Pi.
2013
Polyhedra 8.7: locking and cascaded deletes.
Polyhedra 8.8: encrypted communications
2014 Polyhedra 8.9: SQL enhancements (GROUP BY and HAVING, DISTINCT, outer joins), security enhancements, and online backups for time-series data.
2015 Polyhedra 9.0: read-only partial database replication via a subscription mechanism, an ADO.NET data provider for Polyhedra, and enhancements to the proprietary 'callback API' that can yield significant performance enhancements.
2016 Polyhedra 9.1: bi-directional subscription and partial table replication, internal resource monitoring, and a Python DB-API module with extensions for Polyhedra-specific features such as active queries.
2017
Polyhedra 9.2: reduced memory usage, RDI (Remote Device Interface) API, OPC UA RDI, and SQL EXPLAIN command.
Polyhedra 9.3: server initiated replication.
2018 Polyhedra 9.4: Embedded database API and limited SQL function-based indexes.
2019 Polyhedra 9.5: Backup standby, MQTT interface and Grafana interface.
2020 Polyhedra 9.6: REST API, WebSocket Server and IMDB API Enhancements.
2021 Polyhedra 9.7: IMDB API BLOB caching, multiple database support.
References
External links
Proprietary database management systems |
47967 | https://en.wikipedia.org/wiki/Authentication | Authentication | Authentication (from Greek αὐθεντικός authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, determining the age of an artifact by carbon dating, or ensuring that a product or document is not counterfeit.
Methods
Authentication is relevant to multiple fields. In art, antiques and anthropology, a common problem is verifying that a given artifact was produced by a certain person or in a certain place or period of history. In computer science, verifying a user's identity is often required to allow access to confidential data or systems.
Authentication can be considered to be of three types:
The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while he or she may not have evidence that every step in the supply chain was authenticated. Centralized authority-based trust relationships back most secure internet communication through known public certificate authorities; decentralized peer-based trust, also known as a web of trust, is used for personal services such as email or files (Pretty Good Privacy, GNU Privacy Guard) and trust is established by known individuals signing each other's cryptographic key at Key signing parties, for instance.
The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical and spectroscopic analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.
Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.
In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well; see the article Jacques van Meegeren.
Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.
Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.
The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost.
In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access device to allow system access. In this case, authenticity is implied but not guaranteed.
Consumer goods such as pharmaceuticals, perfume, fashion clothing can use all three forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation (damaging the brand owner's sales and reputation). As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect from counterfeiters, including adding holograms, security rings, security threads and color shifting ink.
Authentication factors
The ways in which someone may be authenticated fall into three categories, based on what are known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity prior to being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.
Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified. The three factors (classes) and some of the elements of each factor are:
the knowledge factors: Something the user knows (e.g., a password, partial password, pass phrase, personal identification number (PIN), challenge response (the user must answer a question or pattern), security question).
the ownership factors: Something the user has (e.g., wrist band, ID card, security token, implanted device, cell phone with built-in hardware token, software token, or cell phone holding a software token).
the inherence factors: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifier).
Single-factor authentication
As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual’s identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security.
Multi-factor authentication
Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors.
For example, using a bankcard (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still a two-factor authentication.
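The pseudorandom numbers shown by such security tokens are commonly time-based one-time passwords. The Python sketch below is a minimal illustration of the HOTP/TOTP construction standardized in RFC 4226 and RFC 6238; the shared secret is a placeholder, and a real deployment would add secret provisioning, clock-drift tolerance and rate limiting.

import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226).
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30) -> str:
    # Time-based variant (RFC 6238): the counter is the current 30-second time slot.
    return hotp(key, int(time.time()) // period)

shared_secret = b"placeholder-secret"   # provisioned to both the token and the server
print(totp(shared_secret))              # token and server compute the same value within a period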
Authentication types
The most frequent types of authentication available in use for authenticating online users differ in the level of security provided by combining factors from the one or more of the three categories of factors for authentication:
Strong authentication
The U.S. government's National Information Assurance Glossary defines strong authentication as
layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information. The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent and at least one factor must be "non-reusable and non-replicable", except in the case of an inherence factor and must also be incapable of being stolen off the Internet. In the European, as well as in the US-American understanding, strong authentication is very similar to multi-factor authentication or 2FA, but exceeding those with more rigorous requirements.
The Fast IDentity Online (FIDO) Alliance has been striving to establish technical specifications for strong authentication.
Continuous authentication
Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method.
Recent research has shown the possibility of using smartphones’ sensors and accessories to extract some behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition. These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems.
Digital authentication
The term digital authentication, also known as electronic authentication or e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network.
The American National Institute of Standards and Technology (NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication:
Enrollment – an individual applies to a credential service provider (CSP) to initiate the enrollment process. After successfully proving the applicant’s identity, the CSP allows the applicant to become a subscriber.
Authentication – After becoming a subscriber, the user receives an authenticator e.g., a token and credentials, such as a user name. He or she is then permitted to perform online transactions within an authenticated session with a relying party, where they must provide proof that he or she possesses one or more authenticators.
Life-cycle maintenance – the CSP is charged with the task of maintaining the user’s credential over the course of its lifetime, while the subscriber is responsible for maintaining his or her authenticator(s).
The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.
Product authentication
Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods, such as electronics, music, apparel, and counterfeit medications, have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting.
In their anti-counterfeiting technology guide, the EUIPO Observatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media.
Products or their packaging can include a variable QR Code. A QR Code alone is easy to verify but offers a weak level of authentication, as it offers no protection against counterfeits unless scan data is analysed at the system level to detect anomalies. To increase the security level, the QR Code can be combined with a digital watermark or a copy detection pattern; these are robust to copy attempts and can be authenticated with a smartphone.
A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature as an authentication chip can be mechanically attached and read through a connector to the host e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.
Packaging
Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products. Some package constructions are more difficult to copy and some have pilfer-indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:
Taggant fingerprinting – uniquely coded microscopic materials that are verified from a database
Encrypted micro-particles – unpredictably placed markings (numbers, layers and colors) not visible to the human eye
Holograms – graphics printed on seals, patches, foils or labels and used at point of sale for visual verification
Micro-printing – second-line authentication often used on currencies
Serialized barcodes
UV printing – marks only visible under UV light
Track and trace systems – use codes to link products to database tracking system
Water indicators – become visible when contacted with water
DNA tracking – genes embedded onto labels that can be traced
Color-shifting ink or film – visible marks that switch colors or texture when tilted
Tamper evident seals and tapes – destructible or graphically verifiable at point of sale
2d barcodes – data codes that can be tracked
RFID chips
NFC chips
Information content
Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:
A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.
A shared secret, such as a passphrase, in the content of the message (a minimal illustration of this approach follows the list).
An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.
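As an illustration of the shared-secret factor, a message can be accompanied by a keyed hash that only holders of the secret can produce or verify. The Python sketch below uses the standard-library hmac module; the secret and message are placeholders.

import hashlib
import hmac

secret = b"placeholder shared secret"      # known only to sender and receiver
message = b"Meet at the usual place at noon."

# The sender attaches an HMAC tag computed over the message.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares it in constant time.
expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True only if message and secret both match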
The opposite problem is detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
Literacy and literature authentication
In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is – Does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process. It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.
History and state-of-the-art
Historically, fingerprints have been used as the most authoritative method of authentication, but court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints have been shown to be easily spoofable, with British Telecom's top computer-security official noting that "few" fingerprint readers have not already been tricked by one spoof or another. Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device.
In a computer data context, cryptographic methods have been developed (see digital signature and challenge–response authentication) which are currently not spoofable if and only if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. It is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in future. If that were to occur, it may call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.
Authorization
The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set.
Access control
One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity.
See also
Access Control Service
Atomic authorization
Authentication Open Service Interface Definition
Authenticity in art
Authorization
Basic access authentication
Biometrics
CAPTCHA
Chip Authentication Program
Closed-loop authentication
Diameter (protocol)
EAP
Electronic authentication
Encrypted key exchange (EKE)
Fingerprint Verification Competition
Geolocation
Hash-based message authentication code
Identification (information)
Java Authentication and Authorization Service
Multi-factor authentication
OAuth – an open standard for authorization
OpenID Connect – an authentication method for the web
OpenID – an authentication method for the web
Public-key cryptography
RADIUS
Reliance authentication
Secure Remote Password protocol (SRP)
Secure Shell
Security printing
Self-sovereign identity
SQRL
Strong authentication
Tamper-evident technology
Time-based authentication
Two-factor authentication
Usability of web authentication systems
Woo–Lam
References
External links
" New NIST Publications Describe Standards for Identity Credentials and Authentication Systems"
Applications of cryptography
Access control
Packaging
Notary
Computer access control |
17695943 | https://en.wikipedia.org/wiki/E-Government%20in%20South%20Korea | E-Government in South Korea | E-government began in South Korea in the 1980s when the Ministry of Government Administration and Home Affairs (MOGAHA) began to implement ICT within government, based on the "National Backbone Computer Network" consisting of five national networks. An Information Super-Highway was launched in 1993, followed by the creation of the Ministry of Information and Communication (MIC). Public access to government information services began to move online in the late 1990s; and a drive for "Participatory Government" gave further impetus for e-government after 2002, led by the 2003 "E-Government Roadmap" which sets a number of specific targets.
Defining e-Government
The term "Electronic Government" first appeared in official documents in September 1993, on page 112 of a report for government reform by the Clinton administration ("Creating A Government that Works Better and Costs Less: From Red Tape to Results"). The United Nations has defined the concept "e-government", or "Digital Government", as "The employment of the Internet and the world-wide-web for delivering government information and services to the citizens". (United Nations, 2006; AOEMA, 2005). E-Government involves "[t]he utilization of IT, ICTs, and other web-based telecommunication technologies services to improve and/or enhance on the efficiency and effectiveness of service delivery in the public sector." (Jeong, 2007).
E-government uses technologies to facilitate the smooth operation of government functions, and the disbursement of government information and services to the people. E-government has enhanced office automation in the public sector, through the utilization of the Internet and wireless technologies to connect telephones, fax machines, and printers. This is especially relevant for those public-sector fields where staff are constantly on the move, such as police officers and project managers.
History
The beginning (1960s–1970s)
South Korea's E-Government project first started as part of the office automation efforts for statistical analysis work in the Economy Planning Board (EPB) with the introduction of computers in 1967. At the time, it was the Committee on Coordination for Development of Computerized Organization established in 1967 under the Ministry of Science and Technology (MOST) that supplied computers to each ministry in the government.
In a survey conducted a decade later on supply and management of computers in government agencies by MOST in 1977, it was found that computers had greatly contributed to fast and accurate results in simple arithmetic tasks such as payroll and personnel management, calculations for phone bills, grading tests and so on, in central agencies such as the Ministry of Culture and Education, Ministry of Communication and Postal Service, and the National Tax Agency. However, in 1978, the need for informatization rather than simple automation of menial tasks brought about the start of the E-Government initiative aiming to realize a more advanced model of E-Government.
Moves to introduce ICT into government took the form of E-Government projects for building the necessary infrastructure to achieve this end, under the "Five Year Basic Plan on Informatization of Public Administration". These efforts, led by the Ministry of Government Administration and Home Affairs (MOGAHA), paved the way for implementation of South Korea's advanced informatization policies in the 1980s.
Building the infrastructure for E-government (1980s–1990s)
The decision to build a "National Backbone Computer Network" and subsequent enactments of laws such as the Computer Program Protection Act and Supply and Utilization of Computer Network Act in 1986, and the Software Development Promotion Act in 1987, secured technology and infrastructure vital to realizing e-governance. These efforts led to a concrete plan and project engagements for the "National Backbone Computer Network" project that would become the communications and information network for the public sector. Under this plan, five national network projects – administration, finance, education and research, defense, and security – were launched.
This period was also a turning point for the infrastructure of e-governance in Korea. In 1993, a basic plan for building the foundation for the Information Super-Highway was announced, and the Ministry of Information and Communication (MIC) was launched in 1994. The following year saw the enactment by the National Assembly of the "Framework on Informatization Promotion Act", which became the basis for policies on informatization and e-governance. Based on this Act, the "Informatization Promotion Committee" was created along with the "Informatization Promotion Fund," to act as the steering head for informatization and E-Government initiatives. This act also provided a firm basis for implementing E-Government initiatives such as the Chief Information Officer (CIO) system. During the latter half of the 90s, the first Informatization Promotion conference was held at the Blue House on October 14, 1996, where President Kim Young-sam's ideas on E-Government were announced in the form of a report, "Informatization Strategy for Strengthening National Competitiveness". In 1997, an evaluation system for informatization projects was introduced while plans were made for implementation of the second stage of advanced ICT.
Full-scale implementation of E-Government (2000–Present)
With the inauguration of the Kim Dae-jung administration in 1998, the official government homepage went online and Internet-based civil services, such as real estate registration, became available. Presidential executive orders for appointing CIOs in the public sector and guidelines for sharing administrative information were established as well. In 1999 a comprehensive E-Government implementation plan was created, while civil services based on integrated civil application information system and comprehensive statistical information system were introduced.
By 2001, halfway into the term of the Kim Dae-jung administration, South Korea passed the first comprehensive legislation on E-Government, the "Promotion of Digitalization of Administrative Work for E-Government Realization Act". The SCEG began work in February of the same year, holding 12 executive and two general meetings where detailed plans for implementation, as well as funding for the 11 newly selected key E-Government projects, were drawn up and reported to the President on 7 May 2001.
With the inauguration of Participatory Government under President Roh Moo-hyun from 2002, policies for E-Government became focused on ways to extend informatization. The former PCGI was restructured into PCGID (Presidential Committee on Government Innovation and Decentralization), encompassing E-Government, administrative reform, fiscal and tax reform, and decentralization. From each of the sub-committees in charge of these areas, implementation plans centered on the presidential agenda were announced as Roadmap tasks.
For e-governance, the "Participatory Government's Vision and Direction of E-Government" was announced in May 2003, and the "E-Government Roadmap", based on the vision of realizing the "World's Best Open E-Government", was released in August of the same year. The roadmap is divided into four areas, 10 agendas and 31 tasks, managed in terms of 45 detailed subtasks. It sets specific targets, including: to increase online public services to 85%; to attain a top 10 ranking in the world for business support competitiveness; to reduce visits for civil service applicants to 3 per year; and to raise the utilization rate of E-Government programs to 60%.
Development and implementation issues
See also
Government of South Korea
References
External links
Korea e-Government Portal site
Korea e-Government Webzine
Korea's Government Integrated Data Center Sets a New Benchmark on e-Government IT
South Korea
Government of South Korea
Science and technology in South Korea |
25266 | https://en.wikipedia.org/wiki/Quake%20%28video%20game%29 | Quake (video game) | Quake is a first-person shooter game developed by id Software and published by GT Interactive. The first game in the Quake series, it was originally released for MS-DOS, Microsoft Windows and Linux in 1996, followed by Mac OS and Sega Saturn in 1997 and Nintendo 64 in 1998. In the game, players must find their way through various maze-like, medieval environments while battling monsters using an array of weaponry. The overall atmosphere is dark and gritty, with many stone textures and a rusty, capitalized font. Quake takes heavy inspiration from gothic fiction and the works of H. P. Lovecraft.
The successor to id Software's Doom series, Quake built upon the technology and gameplay of its predecessor. Unlike the Doom engine before it, the Quake engine offered full real-time 3D rendering and had early support for 3D acceleration through OpenGL. After Doom helped popularize multiplayer deathmatches, Quake added various multiplayer options. Online multiplayer became increasingly common, with the QuakeWorld update and software such as QuakeSpy making the process of finding and playing against others on the Internet easier and more reliable. Quake featured music composed by Trent Reznor and his band Nine Inch Nails.
Despite receiving critical acclaim, Quake's development was controversial in the history of id Software. Due to creative differences and a lack of leadership, the majority of the team left the company after the game's release, including co-founder John Romero. A remastered version of Quake was developed by Nightdive Studios and published by Bethesda Softworks and was released for Microsoft Windows, Nintendo Switch, PlayStation 4, and Xbox One consoles in August 2021, including the original game's extended content and two episodes developed by MachineGames. The PlayStation 5 and Xbox Series X/S versions were released in October 2021.
Gameplay
In Quake's single-player mode, players explore levels, facing monsters and finding secret areas before reaching an exit. Switches or keys open doors, and reaching the exit takes the player to the next level. Before accessing an episode, there is a set of three pathways with easy, medium, and hard skill levels. The fourth skill level, "Nightmare", was "so bad that it was hidden, so people won't wander in by accident"; the player must drop through water before the episode four entrance and go into a secret passage to access it.
Quake's single-player campaign is organized into four individual episodes with seven to eight levels in each (including one secret level per episode, one of which is a "low gravity" level that challenges the player's abilities in a different way). If the player's character dies, they must restart at the beginning of that level. The game may be saved at any time in the PC versions and between levels in the console versions. Upon completing an episode, the player is returned to the hub "START" level, where another episode can be chosen. Each episode starts the player from scratch, without any previously collected items. Episode one (which formed the shareware or downloadable demo version of Quake) has the most traditional ideology of a boss in the last level. The ultimate objective at the end of each episode is to recover a magic rune. After all of the runes are collected, the floor of the hub level opens up to reveal an entrance to the "END" level, which contains a final puzzle.
Multiplayer
In multiplayer mode, players on several computers connect to a server (which may be a dedicated machine or on one of the player's computers), where they can either play the single-player campaign together in co-op (cooperative) mode, or play against each other in multiplayer. When players die in multiplayer mode, they can immediately respawn, but will lose any items that were collected. Similarly, items that have been picked up previously respawn after some time, and may be picked up again. The most popular multiplayer modes are all forms of deathmatch. Deathmatch modes typically consist of either free-for-all (no organization or teams involved), one-on-one duels, or organized teamplay with two or more players per team (or clan). Players frequently implement mods during teamplay. Monsters are not normally present in teamplay, as they serve no purpose other than to get in the way and reveal the positions of the players.
The gameplay in Quake was considered unique for its time because of the different ways the player can maneuver through the game. Bunny hopping or strafe jumping allow faster movement, while rocket jumping enables the player to reach otherwise-inaccessible areas at the cost of some self-damage. The player can start and stop moving suddenly, jump unnaturally high, and change direction while moving through the air. Many of these non-realistic behaviors contribute to Quake's appeal. Multiplayer Quake was one of the first games singled out as a form of electronic sport. A notable participant was Dennis Fong, who won John Carmack's Ferrari 328 at the Microsoft-sponsored Red Annihilation tournament in 1997.
Plot
In the single-player game, the player takes the role of the unnamed protagonist, named Ranger in later games (voiced by Trent Reznor), sent into a portal in order to stop an enemy code-named "Quake". The government had been experimenting with teleportation technology and developed a working prototype called a "Slipgate"; the mysterious Quake compromised the Slipgate by connecting it with its own teleportation system, using it to send death squads to the "Human" dimension in order to test the martial capabilities of humanity.
The sole surviving protagonist in "Operation Counterstrike" is Ranger, who must advance, starting each of the four episodes from an overrun human military base, before fighting his way into other dimensions, reaching them via the Slipgate or their otherworld equivalent. After passing through the Slipgate, Ranger's main objective is to collect four magic runes from four dimensions of Quake; these are the key to stopping the enemy and ending the invasion of Earth.
The single-player campaign consists of 30 separate levels, or "maps", divided into four episodes (with a total of 26 regular maps and four secret ones), as well as a hub level to select a difficulty setting and episode, and the game's final boss level. Each episode represents individual dimensions that the player can access through magical portals (as opposed to the technological Slipgate) that are discovered over the course of the game. The various realms consist of a number of gothic, medieval, and lava-filled caves and dungeons, with a recurring theme of hellish and satanic imagery reminiscent of Doom (such as pentagrams and images of demons on the walls). The game's setting is inspired by dark fantasy influences, including H. P. Lovecraft's Cthulhu Mythos. Dimensional Shamblers appear as enemies, the "Spawn" enemies are called "Formless Spawn of Tsathoggua" in the manual, the boss of the first episode is named Chthon, and the main villain is named Shub-Niggurath (though actually resembling a Dark Young). Some levels have Lovecraftian names, such as the Vaults of Zin and The Nameless City. In addition, six levels exclusively designed for multiplayer deathmatch are included. Originally, the game was supposed to include more Lovecraftian bosses, but this concept was scrapped due to time constraints.
Development
A preview included with id's very first release, 1990's Commander Keen, advertised a game entitled The Fight for Justice as a follow-up to the Commander Keen trilogy. It would feature a character named Quake, "the strongest, most dangerous person on the continent", armed with thunderbolts and a "Ring of Regeneration". Conceived as a VGA full-color side-scrolling role-playing game, The Fight for Justice was never released.
Lead designer and director John Romero later conceived of Quake as an action game taking place in a fully 3D world, inspired by Sega AM2's 3D fighting game Virtua Fighter. Quake was also intended to feature Virtua Fighter–influenced third-person melee combat, but id Software considered it to be risky. Because the project was taking too long, the third-person melee was eventually dropped. This led to creative differences between Romero and id Software, and eventually his departure from the company after Quake was released. Even though he led the project, Romero did not receive any money from Quake. In 2000, Romero released Daikatana, the game that he had envisioned Quake being; despite its troubled development and its reputation as one of the worst games of all time, he said Daikatana was "more fun to make than Quake" due to the lack of creative interference.
Quake was given as a title to the game that id Software was working on shortly after the release of Doom II. The earliest information released described Quake as focusing on a Thor-like character who wields a giant hammer, and is able to knock away enemies by throwing the hammer (complete with real-time inverse kinematics). Initially, the levels were supposed to be designed in an Aztec style, but the choice was dropped some months into the project. Early screenshots then showed medieval environments and dragons. The plan was for the game to have more RPG-style elements. However, work was very slow on the engine, since John Carmack, the main programmer of Quake, was not only developing a fully 3D engine, but also a TCP/IP networking model (Carmack later said that he should have done two separate projects which developed those things). Working with a game engine that was still in development presented difficulties for the designers.
Eventually, the whole id Software team began to think that the original concept may not have been as wise a choice as they first believed. Thus, the final game was very stripped down from its original intentions, and instead featured gameplay similar to Doom and its sequel, although the levels and enemies were closer to medieval RPG style rather than science-fiction. In a December 1, 1994, post to an online bulletin board, John Romero wrote, "Okay, people. It seems that everyone is speculating on whether Quake is going to be a slow, RPG-style light-action game. Wrong! What does id do best and dominate at? Can you say "action"? I knew you could. Quake will be constant, hectic action throughout – probably more so than Doom".
Quake was programmed by John Carmack, Michael Abrash, and John Cash. The levels and scenarios were designed by American McGee, Sandy Petersen, John Romero, and Tim Willits, and the graphics were designed by Adrian Carmack, Kevin Cloud and Paul Steed. Cloud created the monster and player graphics using Alias.
The game engine developed for Quake, the Quake engine, popularized several major advances in the first-person shooter genre: polygonal models instead of prerendered sprites; full 3D level design instead of a 2.5D map; prerendered lightmaps; and allowing end users to partially program the game (in this case with QuakeC), which popularized fan-created modifications (mods).
Before the release of the full game or the shareware version of Quake, id Software released QTest on February 24, 1996. It was described as a technology demo and was limited to three multiplayer maps. There was no single-player support and some of the gameplay and graphics were unfinished or different from their final versions. QTest gave gamers their first peek into the filesystem and modifiability of the Quake engine, and many entity mods (that placed monsters in the otherwise empty multiplayer maps) and custom player skins began appearing online before the full game was even released.
Initially, the game was designed so that when the player ran out of ammunition, the player character would hit enemies with a gun-butt. Shortly before release this was replaced with an axe.
The release of Quake marks the end of the classic lineup at id Software. Due to conflicts over game design and ideas, animosity grew during development, and a majority of the staff resigned from id after the game's release, including Romero, Abrash, Shawn Green, Jay Wilbur, Petersen and Mike Wilson. Petersen revealed in July 2021 that the lack of a team leader was the cause of it all. He had volunteered to take the lead, as he had five years of experience as a project manager at MicroProse, but he was turned down by Carmack.
Music and sound design
Quakes music and sound design was done by Trent Reznor and Nine Inch Nails, using ambient soundscapes and synthesized drones to create atmospheric tracks. In an interview, Reznor remarked that the Quake soundtrack "is not music, it's textures and ambiences and whirling machine noises and stuff. We tried to make the most sinister, depressive, scary, frightening kind of thing... It's been fun." The game includes an homage to Reznor in the form of ammo boxes for the "Nailgun" and "Super Nailgun" decorated with the Nine Inch Nails logo.
Some digital re-releases of the game lack the CD soundtrack that came with the original shareware release. The 2021 remastered version includes the soundtrack.
Ports
The first port to be completed was the Linux port Quake 0.91 by id Software employee Dave D. Taylor on July 5, 1996, followed by a SPARC Solaris port later that year, also by Taylor. The first commercially released port was the port to Mac OS, done by MacSoft and Lion Entertainment, Inc. (the latter company ceased to exist just prior to the port's release, leading to MacSoft's involvement) in late August 1997. ClickBOOM announced a version for Amiga computers in 1998. Finally, in 1999, a retail version of the Linux port was distributed by Macmillan Digital Publishing USA in a bundle with the three add-ons as Quake: The Offering.
Quake was also ported to home console systems. On December 2, 1997, the game was released for the Sega Saturn. Initially GT Interactive was to publish this version itself, but it later cancelled the release and the Saturn rights were picked up by Sega. Sega then took the project away from the original development team, who had been encountering difficulties getting the port to run at a decent frame rate, and assigned it to Lobotomy Software. The Sega Saturn port used Lobotomy Software's own 3D game engine, SlaveDriver (the same game engine that powered the Sega Saturn versions of PowerSlave and Duke Nukem 3D), instead of the original Quake engine. It is the only version of Quake that is rated "T" for Teen instead of "M" for Mature.
Quake had also been ported to the Sony PlayStation by Lobotomy Software, but the port was cancelled due to difficulties in finding a publisher. A port of Quake for the Atari Jaguar was also advertised as 30% complete in a May 1996 issue of Ultimate Future Games magazine, but it was never released. Another port of Quake was planned for the Panasonic M2 but never materialized due to the cancellation of the system.
On March 24, 1998, the game was released for the Nintendo 64 by Midway Games. This version was developed by the same programming team that worked on Doom 64, at id Software's request. The Nintendo 64 version was originally slated to be released in 1997, but Midway delayed it until March 1998 to give the team time to implement the deathmatch modes.
Both console ports required compromises because of the limited CPU power and ROM storage space for levels. For example, the levels were rebuilt in the Saturn version in order to simplify the architecture, thereby reducing demands on the CPU. The Sega Saturn version includes 28 of the 32 single-player levels from the original PC version of the game, though the secret levels, Ziggurat Vertigo (E1M8), The Underearth (E2M7), The Haunted Halls (E3M7), and The Nameless City (E4M8), were removed. Instead, it has four exclusive secret levels: Purgatorium, Hell's Aerie, The Coliseum, and Watery Grave. It also contains an exclusive unlockable, "Dank & Scuz", which is a story set in the Quake milieu and presented in the form of a slide show with voice acting. There are no multiplayer modes in the Sega Saturn version; as a result of this, all of the deathmatch maps from the PC version were removed from the Sega Saturn port. The Nintendo 64 version includes 25 single-player levels from the PC version, though it is missing The Grisly Grotto (E1M4), The Installation (E2M1), The Ebon Fortress (E2M4), The Wind Tunnels (E3M5), The Sewage System (E4M1), and Hell's Atrium (E4M5) levels. It also does not use the hub "START" map where the player chooses a difficulty level and an episode; the difficulty level is chosen from a menu when starting the game, and all of the levels are played in sequential order from The Slipgate Complex (E1M1) to Shub Niggurath's Pit (END). The Nintendo 64 version, while lacking the cooperative multiplayer mode, includes two player deathmatch. All six of the deathmatch maps from the PC version are in the Nintendo 64 port, and an exclusive deathmatch level, The Court of Death, is also included.
Two ports of Quake for the Nintendo DS exist, QuakeDS and CQuake. Both run well; however, multiplayer does not work on QuakeDS. Since the source code for Quake was released, a number of unofficial ports have been made available for PDAs and mobile phones, such as PocketQuake, as well as versions for the Symbian S60 series of mobile phones and Android mobile phones. The Rockbox project also distributes a version of Quake that runs on some MP3 players.
In 2005, id Software signed a deal with publisher Pulse Interactive to release a version of Quake for mobile phones. The game was engineered by Californian company Bear Naked Productions. It was initially due to be released on only two mobile phones: the Samsung Nexus (for which it was to be an embedded game) and the LG VX360. Quake mobile was reviewed by GameSpot on the Samsung Nexus, and they cited its US release as October 2005; they also gave it a "Best Mobile Game" award in their E3 2005 Editor's Choice Awards. It is unclear whether the game actually did ship with the Samsung Nexus. The game is otherwise only available for the Dell X50v and X51v, both of which are PDAs rather than mobile phones. Quake Mobile does not feature the Nine Inch Nails soundtrack due to space constraints. Quake Mobile runs the most recent version of GL Quake (Quake v.1.09 GL 1.00) at 800x600 resolution and 25 fps. The most recent version of Quake Mobile is v.1.20, which has stylus support; an earlier version, v.1.19, lacked stylus support. The two Quake expansion packs, Scourge of Armagon and Dissolution of Eternity, are also available for Quake Mobile.
A Flash-based version of the game by Michael Rennie runs Quake at full speed in any Flash-enabled web browser. Based on the shareware version of the game, it includes only the first episode and is available for free on the web.
Mods and add-ons
Quake can be heavily modified by altering the graphics, audio, or scripting in QuakeC, and has been the focus of many fan created "mods". The first mods were small gameplay fixes and patches initiated by the community, usually enhancements to weapons or gameplay with new enemies. Later mods were more ambitious and resulted in Quake fans creating versions of the game that were drastically different from id Software's original release.
The first major Quake mod was Team Fortress. This mod consists of Capture the Flag gameplay with a class system for the players. Players choose a class, which creates various restrictions on weapons and armor types available to that player, and also grants special abilities. For example, the bread-and-butter Soldier class has medium armor, medium speed, and a well-rounded selection of weapons and grenades, while the Scout class is lightly armored, very fast, has a scanner that detects nearby enemies, but has very weak offensive weapons. One of the other differences from CTF is that the flag is not returned automatically when a player drops it: running over one's own flag in Threewave CTF would return it to the base, whereas in Team Fortress the dropped flag remains where it fell for a preconfigured time and has to be defended at that remote location. This caused a shift in defensive tactics compared to Threewave CTF. Team Fortress maintained its standing as the most-played online Quake modification for many years. Team Fortress would go on to become Team Fortress Classic and get a sequel, Team Fortress 2.
Another popular mod was Threewave Capture the Flag (CTF), primarily authored by Dave 'Zoid' Kirsch. Threewave CTF is a partial conversion consisting of new levels, a new weapon (a grappling hook), power-ups, new textures, and new gameplay rules. Typically, two teams (red and blue) would compete in a game of Capture the flag, though a few maps with up to four teams (red, blue, green, and yellow) were created. Capture the Flag soon became a standard game mode included in most popular multiplayer games released after Quake. Rocket Arena provides the ability for players to face each other in small, open arenas with changes in the gameplay rules so that item collection and detailed level knowledge are no longer factors. A series of short rounds, with the surviving player in each round gaining a point, instead tests the player's aiming and dodging skills and reflexes. Clan Arena is a further modification that provides team play using Rocket Arena rules. One mod category, "bots", was introduced to provide surrogate players in multiplayer mode.
Arcane Dimensions is a single-player mod. It is a partial conversion with breakable objects and walls, an enhanced particle system, numerous visual improvements, and new enemies and weapons. The level design is much more complex in terms of geometry and gameplay than in the original game.
There are a large number of custom levels that have been made by users and fans of Quake. New maps are still being made more than twenty years after the game's release. Custom maps are new maps that are playable by loading them into the original game. Custom levels of various gameplay types have been made, but most are in the single-player and deathmatch genres. More than 1500 single-player and a similar number of deathmatch maps have been made for Quake.
Reception
Sales
According to David Kushner in Masters of Doom, id Software released a retail shareware version of Quake before the game's full retail distribution by GT Interactive. These shareware copies could be converted into complete versions through passwords purchased via phone. However, Kushner wrote that "gamers wasted no time hacking the shareware to unlock the full version of the game for free." This problem, combined with the scale of the operation, led id Software to cancel the plan. As a result, the company was left with 150,000 unsold shareware copies in storage. The venture damaged Quake's initial sales and caused its retail push by GT Interactive to miss the holiday shopping season. Following the game's full release, Kushner remarked that its early "sales were good — with 250,000 units shipped — but not a phenomenon like Doom II."
In the United States, Quake placed sixth on PC Data's monthly computer game sales charts for November and December 1996. Its shareware edition was the sixth-best-selling computer game of 1996 overall, while its retail SKU claimed 20th place. The shareware version sold 393,575 copies and grossed $3,005,519 in the United States during 1996. It remained in PC Data's monthly top 10 from January to April 1997, but was absent by May. During its first 12 months, Quake sold 373,000 retail copies and earned $18 million in the United States, according to PC Data. Its final retail sales for 1997 were 273,936 copies, which made it the country's 16th-highest computer game seller for the year.
Sales of Quake reached 550,000 units in the United States alone by December 1999. In 1997, id estimated that there may be as many as 5 million copies of Quake circulating. The game sold over 1.4 million copies by December 1997.
Critical reviews
Quake was critically acclaimed on the PC. Aggregating review websites GameRankings and Metacritic gave the original PC version 93% and 94/100, and the Nintendo 64 port 76% and 74/100. A Next Generation critic lauded the game's realistic 3D physics and genuinely unnerving sound effects. GamePro said Quake had been over-hyped but is excellent nonetheless, particularly its usage of its advanced 3D engine. The review also praised the sound effects, atmospheric music, and graphics, though it criticized that the polygons used to construct the enemies are too obvious at close range.
Less than a month after Quake was released (and a month before they actually reviewed the game), Next Generation listed it as number 9 on their "Top 100 Games of All Time", saying that it is similar to Doom but supports a maximum of eight players instead of four. In 1996, Computer Gaming World declared Quake the 36th-best computer game ever released, and listed "telefragged" as #1 on its list of "the 15 best ways to die in computer gaming". In 1997, the Game Developers Choice Awards gave Quake three spotlight awards for Best Sound Effects, Best Music or Soundtrack and Best On-Line/Internet Game.
Entertainment Weekly gave the game a B+ and called it "an extended bit of subterranean mayhem that offers three major improvements over its immediate predecessor [Doom]." The reviewer identified these improvements as the graphics, the audio design, and the amount of violent action.
Next Generation reviewed the Macintosh version of the game, rating it four stars out of five, and stated that "Though replay value is limited by the lack of interactive environments or even the semblance of a plot, there's no doubt that Quake and its engine are something powerful and addictive."
The Saturn version received mostly negative reviews, as critics generally agreed that it did not bring over the elements that make the game enjoyable. In particular, critics reviled the absence of the multiplayer mode, which they felt had eclipsed the single player campaign as the reason to play Quake. Kraig Kujawa wrote in Electronic Gaming Monthly, "Quake is not a great one-player game - it gained its notoriety on the Net as a multiplayer." and his co-reviewer Sushi-X concluded "Without multiplayer, I'd pass." Most reviews also said the controls are much worse than the PC original, in particular the difficulty of aiming at enemies without the benefit of either mouse-controlled camera or a second analog stick. GamePro noted that the graphics are very pixelated and blurry, to the point where people unfamiliar with Quake would not be able to discern what they're looking at. They concluded, "Quake may not be the worst Saturn game available, but it certainly doesn't live up to its PC heritage." Most critics did find the port technically impressive, particularly the added light sourcing. However, Next Generation pointed out that "Porting Quake to a console is nothing more than an excuse for bragging rights. It's simply a way to show that the limited architecture of a 32-bit system has the power to push the same game that those mighty Pentium PCs take for granted." Even Rich Leadbetter of Sega Saturn Magazine, which gave the port a 92%, acknowledged that it was a proverbial dancing bear, noting several conspicuous compromises the port made and stating as his concluding argument, "Look, it's Quake on the Saturn - the machine has no right to be doing this!" GameSpot opined that the game's lack of plot makes the single-player campaign feel too shallow and lacking in motivation to appeal to most gamers. Most critics compared the port unfavorably to the Saturn version of Duke Nukem 3D (which came out just a few months earlier), mainly in terms of gameplay.
Next Generation reviewed the Nintendo 64 version of the game, rating it three stars out of five, and stated that "As a whole, Quake 64 doesn't live up to the experience offered by the high-end, 3D-accelerated PC version; it is, however, an entertaining gaming experience that is worthy of a close look and a nice addition to the blossoming number of first-person shooters for Nintendo 64."
Next Generation reviewed the arcade version of the game, rating it three stars out of five, and stated that "For those who don't have LAN or internet capabilities, check out arcade Quake. It's a blast."
In 1998, PC Gamer declared it the 28th-best computer game ever released, and the editors called it "one of the most addictive, adaptable, and pulse-pounding 3D shooters ever created". In 2003, Quake was inducted into GameSpot's list of the greatest games of all time.
Speedruns
As an example of the dedication that Quake has inspired in its fan community, a group of expert players recorded speedrun demos (replayable recordings of the player's movement) of Quake levels completed in record time on the "Nightmare" skill level. The footage was edited into a continuous 19 minutes, 49 seconds demo called Quake done Quick and released on June 10, 1997. Owners of Quake could replay this demo in the game engine, watching the run unfold as if they were playing it themselves.
Most full-game speedruns are a collaborative effort by a number of runners (though some have been done by single runners on their own). Although each particular level is credited to one runner, the ideas and techniques used are iterative and collaborative in nature, with each runner picking up tips and ideas from the others, so that speeds keep improving beyond what was thought possible as the runs are further optimized and new tricks or routes are discovered. Further time improvements of the continuous whole game run were achieved into the 21st century. In addition, many thousands of individual level runs are kept at Speed Demos Archive's Quake section, including many on custom maps. Speedrunning is a counterpart to multiplayer modes in making Quake one of the first games promoted as a virtual sport.
Legacy
The source code of the Quake and QuakeWorld engines was licensed under the GNU GPL-2.0-or-later on December 21, 1999. The id Software maps, objects, textures, sounds, and other creative works remain under their original proprietary license. The shareware distribution of Quake is still freely redistributable and usable with the GPLed engine code. One must purchase a copy of Quake in order to receive the registered version of the game, which includes more single-player episodes and the deathmatch maps. Following the success of the first Quake game, id Software published Quake II and Quake III Arena; Quake 4 was released in October 2005, developed by Raven Software using the Doom 3 engine.
Quake was the game primarily responsible for the emergence of the machinima artform of films made in game engines, thanks to edited Quake demos such as Ranger Gone Bad and Blahbalicious, the in-game film The Devil's Covenant, and the in-game-rendered, four-hour epic film The Seal of Nehahra. June 22, 2006, marked ten years since the game was first uploaded to the cdrom.com archives; many Internet forums had topics about the anniversary, and it was a front-page story on Slashdot. On October 11, 2006, John Romero released the original map files for all of the levels in Quake under the GPL.
Quake has four sequels: Quake II, Quake III Arena, Quake 4, and Enemy Territory: Quake Wars. In 2002, a version of Quake was produced for mobile phones. In 2001, Quake was also released as part of a compilation labeled Ultimate Quake, which included the original Quake, Quake II, and Quake III Arena and was published by Activision. In 2008, Quake was honored at the 59th Annual Technology & Engineering Emmy Awards for advancing the art form of user modifiable games. John Carmack accepted the award. Years after its original release, Quake is still regarded by many critics as one of the greatest and most influential games ever made.
Expansions and ports
There were two official expansion packs released for Quake. The expansion packs pick up where the first game left off, include all of the same weapons, power-ups, monsters, and gothic atmosphere/architecture, and continue/finish the story of the first game and its protagonist. An unofficial third expansion pack, Abyss of Pandemonium, was developed by the Impel Development Team, published by Perfect Publishing, and released on April 14, 1998; an updated version, version 2.0, titled Abyss of Pandemonium – The Final Mission was released as freeware. An authorized expansion pack, Q!ZONE, was developed and published by WizardWorks, and released in 1996. An authorized level editor, Deathmatch Maker, was developed by Virtus Corporation and published by Macmillan Digital Publishing in 1997. It contained an exclusive episode created by Virtus. In honor of Quake's 20th anniversary, MachineGames, an internal development studio of ZeniMax Media, the current owner of the Quake IP, released a new expansion pack online for free, called Episode 5: Dimension of the Past.
Quake Mission Pack No. 1: Scourge of Armagon
Quake Mission Pack No. 1: Scourge of Armagon was the first official mission pack, released on March 5, 1997. Developed by Hipnotic Interactive, it features three episodes divided into seventeen new single-player levels (three of which are secret), a new multiplayer level, a new soundtrack composed by Jeehun Hwang, and gameplay features not originally present in Quake, including rotating structures and breakable walls. Unlike the main Quake game and Mission Pack No. 2, Scourge does away with the episode hub, requiring the three episodes to be played sequentially. The three new enemies include Centroids, large cybernetic scorpions with nailguns; Gremlins, small goblins that can steal weapons and multiply by feeding on enemy corpses; and Spike Mines, floating orbs that detonate when near the player. The three new weapons include the Mjolnir, a large lightning-emitting hammer; the Laser Cannon, which shoots bouncing bolts of energy; and the Proximity Mine Launcher, which fires grenades that attach to surfaces and detonate when an opponent comes near. The three new power-ups include the Horn of Conjuring, which summons an enemy to protect the player; the Empathy Shield, which halves the damage taken by the player by splitting it between the player and the attacking enemy; and the Wetsuit, which renders the player invulnerable to electricity and allows the player to stay underwater for a period of time. The storyline follows Armagon, a general of Quake's forces, planning to invade Earth via a portal known as the 'Rift'. Armagon resembles a giant gremlin with cybernetic legs and a combined rocket launcher/laser cannon for arms.
Tim Soete of GameSpot gave it a score 8.6 out of 10.
Quake Mission Pack No. 2: Dissolution of Eternity
Quake Mission Pack No. 2: Dissolution of Eternity was the second official mission pack, released on March 19, 1997. Developed by Rogue Entertainment, it features two episodes divided into fifteen new single-player levels, a new multiplayer level, a new soundtrack, and several new enemies and bosses. Notably, the pack lacks secret levels. The eight new enemies include Electric Eels, Phantom Swordsmen, Multi-Grenade Ogres (which fire cluster grenades), Hell Spawn, Wraths (floating, robed undead), Guardians (resurrected ancient Egyptian warriors), Mummies, and statues of various enemies that can come to life. The four new types of bosses include Lava Men, Overlords, large Wraths, and a dragon guarding the "temporal energy converter". The two new power-ups include the Anti Grav Belt, which allows the player to jump higher; and the Power Shield, which lowers the damage the player receives. Rather than offering new weapons, the mission pack gives the player four new types of ammo for existing weapons, such as "lava nails" for the Nailgun, cluster grenades for the Grenade Launcher, rockets that split into four in a horizontal line for the Rocket Launcher, and plasma cells for the Thunderbolt, as well as a grappling hook to help with moving around the levels.
Tim Soete of GameSpot gave it a score of 7.7 out of 10.
VQuake
In late 1996, id Software released VQuake, a port of the Quake engine to support hardware accelerated rendering on graphics cards using the Rendition Vérité chipset. Aside from the expected benefit of improved performance, VQuake offered numerous visual improvements over the original software-rendered Quake. It boasted full 16-bit color, bilinear filtering (reducing pixelation), improved dynamic lighting, optional anti-aliasing, and improved source code clarity, as the improved performance finally allowed the use of gotos to be abandoned in favor of proper loop constructs. As the name implied, VQuake was a proprietary port specifically for the Vérité; consumer 3D acceleration was in its infancy at the time, and there was no standard 3D API for the consumer market. After completing VQuake, John Carmack vowed to never write a proprietary port again, citing his frustration with Rendition's Speedy3D API.
QuakeWorld
To improve the quality of online play, id Software released QuakeWorld in December 1996, a build of Quake that featured significantly revamped network code including the addition of client-side prediction. The original Quake network code would not show the player the results of his actions until the server sent back a reply acknowledging them. For example, if the player attempted to move forward, his client would send the request to move forward to the server, and the server would determine whether the client was actually able to move forward or if he ran into an obstacle, such as a wall or another player. The server would then respond to the client, and only then would the client display movement to the player. This was fine for play on a LAN, a high bandwidth, very low latency connection, but the latency over a dial-up Internet connection is much larger than on a LAN, and this caused a noticeable delay between when a player tried to act and when that action was visible on the screen. This made gameplay much more difficult, especially since the unpredictable nature of the Internet made the amount of delay vary from moment to moment. Players would experience jerky, laggy motion that sometimes felt like ice skating, where they would slide around with seemingly no ability to stop, due to a build-up of previously-sent movement requests. John Carmack has admitted that this was a serious problem that should have been fixed before release, but it was not caught because he and other developers had high-speed Internet access at home.
After months of private beta testing, QuakeWorld, written by John Carmack with help from John Cash and Christian Antkow, was released on December 13, 1996. The client portion followed on December 17. Official id Software development stopped with the test release of QuakeWorld 2.33 on December 21, 1998. The last official stable release was 2.30. QuakeWorld has been described by IGN as the first popular online first-person shooter.
With the help of client-side prediction, which allowed players to see their own movement immediately without waiting for a response from the server, QuakeWorld network code allowed players with high-latency connections to control their character's movement almost as precisely as when playing in single-player mode. The Netcode parameters could be adjusted by the user so that QuakeWorld performed well for users with high and low latency. The trade off to client-side prediction was that sometimes other players or objects would no longer be quite where they had appeared to be, or, in extreme cases, that the player would be pulled back to a previous position when the client received a late reply from the server which overrode movement the client had already previewed; this was known as "warping". As a result, some serious players, particularly in the U.S., still preferred to play online using the original Quake engine (commonly called NetQuake) rather than QuakeWorld. However, the majority of players, especially those on dial-up connections, preferred the newer network model, and QuakeWorld soon became the dominant form of online play. Following the success of QuakeWorld, client-side prediction has become a standard feature of nearly all real-time online games. As with all other Quake upgrades, QuakeWorld was released as a free, unsupported add-on to the game and was updated numerous times through 1998.
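The mechanism can be sketched in a few lines of Python (an illustration only; QuakeWorld itself is written in C, and the class and variable names below are invented for the example). The client applies each movement command immediately, keeps the commands that have not yet been acknowledged, and on every authoritative update rewinds to the server's state and replays the outstanding commands:

    # Toy sketch of client-side prediction with server reconciliation.
    # All names are illustrative; this is not QuakeWorld source code.

    class PredictedClient:
        def __init__(self):
            self.position = 0.0          # locally displayed position
            self.pending = []            # (sequence, move) pairs not yet acknowledged
            self.sequence = 0

        def local_move(self, move):
            """Apply a move immediately so the player sees no input lag."""
            self.sequence += 1
            self.position += move
            self.pending.append((self.sequence, move))
            return self.sequence, move   # this pair is what gets sent to the server

        def on_server_state(self, acked_sequence, server_position):
            """Rewind to the authoritative state, then replay unacked moves."""
            self.pending = [(s, m) for s, m in self.pending if s > acked_sequence]
            self.position = server_position
            for _, move in self.pending:
                self.position += move    # re-predict what the server has not seen yet

    client = PredictedClient()
    client.local_move(+1.0)              # shown on screen instantly
    client.local_move(+1.0)
    # Later the server confirms only the first move, perhaps clipping it against a wall:
    client.on_server_state(acked_sequence=1, server_position=0.5)
    print(client.position)               # 1.5: corrected state plus the replayed second move

The correction applied in on_server_state is exactly the source of the "warping" described above: when the authoritative position disagrees with the prediction, the displayed position snaps to the reconciled value.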
GLQuake
On January 22, 1997, id Software released GLQuake. This was designed to use the OpenGL 3D API to access hardware 3D graphics acceleration cards to rasterize the graphics, rather than having the computer's CPU fill in every pixel. In addition to higher framerates for most players, GLQuake provided higher resolution modes and texture filtering. GLQuake also experimented with reflections, transparent water, and even rudimentary shadows. GLQuake came with a driver enabling the subset of OpenGL used by the game to function on the 3dfx Voodoo Graphics card, the only consumer-level card at the time capable of running GLQuake well. Previously, John Carmack had experimented with a version of Quake specifically written for the Rendition Vérité chip used in the Creative Labs PCI 3D Blaster card. This version had met with only limited success, and Carmack decided to write for generic APIs in the future rather than tailoring for specific hardware.
WinQuake
On March 11, 1997, id Software released WinQuake, a version of the non-OpenGL engine designed to run under Microsoft Windows; the original Quake had been written for DOS, allowing for launch from Windows 95, but could not run under Windows NT-based operating systems because it required direct access to hardware. WinQuake instead accessed hardware via Win32-based APIs such as DirectSound, DirectInput, and DirectDraw that were supported on Windows 95, Windows NT 4.0 and later releases. Like GLQuake, WinQuake also allowed higher resolution video modes. This removed the last barrier to widespread popularity of the game. In 1998, LBE Systems and Laser-Tron released Quake: Arcade Tournament Edition in the arcades in limited quantities.
Dimension of the Past
To celebrate Quake's 20th anniversary, a mission pack was developed by MachineGames and released on June 24, 2016. It features 10 new single-player levels and a new multiplayer level, but does not use the new gameplay additions from Scourge of Armagon and Dissolution of Eternity. Chronologically, it is set between the main game and the expansions.
Sequels
After the departure of Sandy Petersen, the remaining id employees chose to change the thematic direction substantially for Quake II, making the design more technological and futuristic, rather than maintaining the focus on Lovecraftian fantasy. Quake 4 followed the design themes of Quake II, whereas Quake III Arena mixed these styles; it had a parallel setting that housed several "id all-stars" from various games as playable characters. The mixed settings occurred because Quake II originally began as a separate product line. The id designers fell back on the project's nickname of "Quake II" because the game's fast-paced, tactile feel felt closer to a Quake game than a new franchise. Since any sequel to the original Quake had already been vetoed, it became a way of continuing the series without continuing the storyline or setting of the first game. In June 2011, John Carmack made an offhand comment that id Software was considering going back to the "...mixed up Cthulhu-ish Quake 1 world and rebooting [in] that direction."
Vulkan rendering API
On July 20, 2016, Axel Gneiting, an id Tech employee responsible for implementing the Vulkan rendering path to the id Tech 6 engine used in Doom (2016), released a port called vkQuake under the GPLv2.
Remastered Edition and Dimension of the Machine
At the launch of the 2021 QuakeCon@Home on August 19, 2021, Bethesda released a remastered version of Quake for Microsoft Windows, Nintendo Switch, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X/S consoles, developed by Night Dive Studios. In addition to support for modern systems and improved rendering techniques, the remastered version includes both mission packs, Scourge of Armagon and Dissolution of Eternity. It also includes two episodes created by MachineGames: the previously-released Dimension of the Past and a new one called Dimension of the Machine. A port of Quake 64 was also included in its entirety via the newly-implemented "Add-On" menu.
See also
Diary of a Camper, a short film made in Quake
Binary space partitioning, a technology used in Quake
Notes
References
External links
1996 video games
Acorn Archimedes games
Cancelled Atari Jaguar games
Cancelled Panasonic M2 games
Cancelled PlayStation (console) games
Commercial video games with freely available source code
Cooperative video games
Cthulhu Mythos games
Dark fantasy video games
DOS games
DOS games ported to Windows
First-person shooters
Games commercially released with DOSBox
GT Interactive Software games
Horror video games
Id Software games
Id Tech games
Linux games
Classic Mac OS games
MacSoft games
Multiplayer and single-player video games
Multiplayer online games
Nintendo 64 games
Quake (series)
Science fantasy video games
Sega Saturn games
Video games about demons
Video games based on works by H. P. Lovecraft
Video games developed in the United States
Video games scored by Aubrey Hodges
Video games set in antiquity
Video games with expansion packs |
29816614 | https://en.wikipedia.org/wiki/Computer%20security%20compromised%20by%20hardware%20failure | Computer security compromised by hardware failure | Computer security compromised by hardware failure is a branch of computer security applied to hardware.
The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. Such secret information could be retrieved in different ways. This article focuses on the retrieval of data through misused hardware or hardware failure. Hardware can be misused or exploited to obtain secret data. This article collects the main types of attack that can lead to data theft.
Computer security can be compromised by devices, such as keyboards, monitors or printers (through electromagnetic or acoustic emanation, for example), or by components of the computer, such as the memory, the network card or the processor (through time or temperature analysis, for example).
Devices
Monitor
The monitor is the main device used to access data on a computer. It has been shown that monitors radiate or reflect data onto their environment, potentially giving attackers access to information displayed on the monitor.
Electromagnetic emanations
Video display units radiate:
narrowband harmonics of the digital clock signals;
broadband harmonics of the various 'random' digital signals such as the video signal.
Known as compromising emanations or TEMPEST radiation, a code word for a U.S. government programme aimed at attacking the problem, the electromagnetic broadcast of data has been a significant concern in sensitive computer applications. Eavesdroppers can reconstruct video screen content from radio frequency emanations. Each (radiated) harmonic of the video signal shows a remarkable resemblance to a broadcast TV signal. It is therefore possible to reconstruct the picture displayed on the video display unit from the radiated emission by means of a normal television receiver. If no preventive measures are taken, eavesdropping on a video display unit is possible at distances up to several hundreds of meters, using only a normal black-and-white TV receiver, a directional antenna and an antenna amplifier. It is even possible to pick up information from some types of video display units at a distance of over 1 kilometer. If more sophisticated receiving and decoding equipment is used, the maximum distance can be much greater.
Compromising reflections
What is displayed by the monitor is reflected on the environment. The time-varying diffuse reflections of the light emitted by a CRT monitor can be exploited to recover the original monitor image. This is an eavesdropping technique for spying at a distance on data that is displayed on an arbitrary computer screen, including the currently prevalent LCD monitors.
The technique exploits reflections of the screen's optical emanations in various objects that one commonly finds in close proximity to the screen and uses those reflections to recover the original screen content. Such objects include eyeglasses, tea pots, spoons, plastic bottles, and even the eye of the user. This attack can be successfully mounted to spy on even small fonts using inexpensive, off-the-shelf equipment (less than 1500 dollars) from a distance of up to 10 meters. Relying on more expensive equipment allowed this attack to be conducted from over 30 meters away, demonstrating that similar attacks are feasible from the other side of the street or from a nearby building.
Many objects that may be found at a usual workplace can be exploited to retrieve information on a computer's display by an outsider. Particularly good results were obtained from reflections in a user's eyeglasses or a tea pot located on the desk next to the screen. Reflections that stem from the eye of the user also provide good results. However, eyes are harder to spy on at a distance because they are fast-moving objects and require high exposure times. Using more expensive equipment with lower exposure times helps to remedy this problem.
The reflections gathered from curved surfaces on nearby objects indeed pose a substantial threat to the confidentiality of data displayed on the screen. Fully invalidating this threat without at the same time hiding the screen from the legitimate user seems difficult without using curtains on the windows or similar forms of strong optical shielding. Most users, however, will not be aware of this risk and may not be willing to close the curtains on a nice day. The reflection of an object, a computer display, in a curved mirror creates a virtual image that is located behind the reflecting surface. For a flat mirror this virtual image has the same size and is located behind the mirror at the same distance as the original object. For curved mirrors, however, the situation is more complex.
Keyboard
Electromagnetic emanations
Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes. Electromagnetic emanations have turned out to constitute a security threat to computer equipment. The figure below presents how a keystroke is retrieved and what equipment is necessary.
The approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Using this method, four different kinds of compromising electromagnetic emanations have been detected, generated by wired and wireless keyboards. These emissions lead to a full or a partial recovery of the keystrokes. The best practical attack fully recovered 95% of the keystrokes of a PS/2 keyboard at a distance up to 20 meters, even through walls. Because each keyboard has a specific fingerprint based on clock frequency inconsistencies, the source keyboard of a compromising emanation can be determined, even if multiple keyboards of the same model are used at the same time.
The four different kinds of compromising electromagnetic emanations are described below.
The Falling Edge Transition Technique
When a key is pressed, released or held down, the keyboard sends a packet of information known as a scan code to the computer. The protocol used to transmit these scan codes is a bidirectional serial communication, based on four wires: Vcc (5 volts), ground, data and clock. Clock and data signals are identically generated. Hence, the compromising emanation detected is the combination of both signals. However, the edges of the data and the clock lines are not superposed. Thus, they can be easily separated to obtain independent signals.
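As an illustration of what an eavesdropper ultimately reconstructs, the following Python sketch decodes one idealised, noise-free PS/2 frame (the helper name and sample data are invented for the example). A frame consists of a start bit, eight data bits sent least-significant-bit first, an odd parity bit and a stop bit, with the data line read at each falling edge of the clock:

    # Illustrative decoder for an idealised PS/2 frame: the data level is read
    # at every falling edge of the clock, giving 11 bits per scan code.

    def decode_ps2_frame(bits):
        """bits: the 11 data-line levels captured on successive falling clock edges."""
        start, data_bits, parity, stop = bits[0], bits[1:9], bits[9], bits[10]
        assert start == 0 and stop == 1, "framing error"
        scan_code = 0
        for i, bit in enumerate(data_bits):          # least significant bit first
            scan_code |= bit << i
        assert (sum(data_bits) + parity) % 2 == 1, "odd parity error"
        return scan_code

    # Scan code 0x1C (the make code commonly documented for the "A" key) on the wire:
    frame = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1]
    print(hex(decode_ps2_frame(frame)))              # 0x1c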
The Generalized Transition Technique
The Falling Edge Transition attack is limited to a partial recovery of the keystrokes. This is a significant limitation. The GTT is an improved falling edge transition attack which recovers almost all keystrokes. Indeed, between two traces, there is exactly one data rising edge. If attackers are able to detect this transition, they can fully recover the keystrokes.
The Modulation Technique
These harmonic compromising electromagnetic emissions come from unintentional emanations such as radiation emitted by the clock, non-linear elements, crosstalk, ground pollution, etc. Theoretically determining the causes of these compromising radiations is a very complex task. The harmonics correspond to a carrier of approximately 4 MHz, which is very likely the internal clock of the micro-controller inside the keyboard. These harmonics are correlated with both clock and data signals, which describe modulated signals (in amplitude and frequency) and the full state of both clock and data signals. This means that the scan code can be completely recovered from these harmonics.
The Matrix Scan Technique
Keyboard manufacturers arrange the keys in a matrix. The keyboard controller, often an 8-bit processor, parses columns one-by-one and recovers the state of 8 keys at once. This matrix scan process can be described as 192 keys (some keys may not be used, for instance modern keyboards use 104/105 keys) arranged in 24 columns and 8 rows. These columns are continuously pulsed one-by-one for at least 3μs. Thus, these leads may act as an antenna and generate electromagnetic emanations. If an attacker is able to capture these emanations, he can easily recover the column of the pressed key. Even if this signal does not fully describe the pressed key, it still gives partial information on the transmitted scan code, i.e. the column number.
Note that the matrix scan routine loops continuously. When no key is pressed, we still have a signal composed of multiple equidistant peaks. These emanations may be used to remotely detect the presence of powered computers. Concerning wireless keyboards, the wireless data burst transmission can be used as an electromagnetic trigger to detect exactly when a key is pressed, while the matrix scan emanations are used to determine the column it belongs to.
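The partial nature of this leak can be made concrete with a small Python sketch (the matrix wiring below is invented; real keyboards differ): knowing only the column narrows a keystroke down to the handful of keys wired to that column.

    # Toy key matrix (24 columns x 8 rows in the scheme described above): a
    # detected column pulse narrows a keystroke down to at most 8 keys.
    # Hypothetical wiring for a few keys only.
    matrix = {(3, 1): "Q", (3, 2): "A", (3, 3): "Z", (7, 1): "U", (7, 2): "J"}

    def candidates_for_column(column):
        """Keys an eavesdropper cannot distinguish once only the column is known."""
        return [key for (col, _row), key in matrix.items() if col == column]

    print(candidates_for_column(3))   # ['Q', 'A', 'Z'] -- the row remains unknown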
Summary
Some techniques can only target certain keyboards. The table below sums up which techniques can be used to recover keystrokes for different kinds of keyboards.
In their paper called "Compromising Electromagnetic Emanations of Wired and Wireless Keyboards", Martin Vuagnoux and Sylvain Pasini tested 12 different keyboard models, with PS/2, USB connectors and wireless communication in different setups: a semi-anechoic chamber, a small office, an adjacent office and a flat in a building. The table below presents their results.
Acoustic emanations
Attacks against emanations caused by human typing have attracted interest in recent years. In particular, research has shown that keyboard acoustic emanations leak information that can be exploited to reconstruct the typed text.
PC keyboards and notebook keyboards are vulnerable to attacks based on differentiating the sound emanated by different keys. This attack takes as input an audio signal containing a recording of a single word typed by a single person on a keyboard, and a dictionary of words. It is assumed that the typed word is present in the dictionary. The aim of the attack is to reconstruct the original word from the signal. This kind of attack, taking as input a 10-minute sound recording of a user typing English text on a keyboard, can recover up to 96% of the typed characters. The attack is inexpensive, because the only other hardware required is a parabolic microphone, and non-invasive, because it does not require physical intrusion into the system. The attack employs a neural network to recognize the key being pressed. It combines signal processing and efficient data structures and algorithms to successfully reconstruct single words of 7–13 characters from a recording of the clicks made when typing them on a keyboard. The sound of clicks can differ slightly from key to key, because the keys are located at different positions on the keyboard plate, although the clicks of different keys sound similar to the human ear.
On average, there were only 0.5 incorrect recognitions per 20 clicks, which shows the exposure of keyboards to eavesdropping using this attack.
The attack is very efficient, taking under 20 seconds per word on a standard PC. It achieves a 90% or better success rate of finding the correct word for words of 10 or more characters, and a success rate of 73% over all the words tested. In practice, a human attacker can typically determine if text is random. An attacker can also identify occasions when the user types user names and passwords. Short audio signals containing a single word of seven or more characters were considered, meaning that the signal is only a few seconds long. Such short words are often chosen as passwords. The dominant factors affecting the attack's success are the word length and, more importantly, the number of repeated characters within the word.
This is a procedure that makes it possible to efficiently uncover a word from audio recordings of keyboard click sounds. More recently, extracting information from another type of emanation was demonstrated: acoustic emanations from mechanical devices such as dot-matrix printers.
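A drastically simplified stand-in for the classifiers described above can be sketched in Python with NumPy (the function names and the synthetic clicks are invented for the example): each isolated click is reduced to a normalised magnitude spectrum and matched against labelled training clicks by nearest neighbour, where the published attacks use neural networks and more elaborate statistical models.

    # Drastically simplified acoustic keystroke classifier: FFT features plus
    # nearest-neighbour matching against labelled training clicks.
    import numpy as np

    def click_features(samples):
        """Normalised spectral magnitude of one isolated click (fixed-length window)."""
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        return spectrum / (np.linalg.norm(spectrum) + 1e-12)

    def classify(click, training_set):
        """training_set: list of (label, feature_vector) from a labelled recording."""
        target = click_features(click)
        return min(training_set, key=lambda lf: np.linalg.norm(lf[1] - target))[0]

    # Synthetic demonstration: pretend each key rings at its own frequency.
    t = np.arange(1024) / 44100.0          # ~23 ms window at 44.1 kHz
    def fake_click(freq):
        return np.sin(2 * np.pi * freq * t) * np.exp(-t * 2000)

    training = [("a", click_features(fake_click(900))),
                ("b", click_features(fake_click(1200)))]
    print(classify(fake_click(905), training))   # "a": nearest spectrum wins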
Video Eavesdropping on Keyboard
While extracting private information by watching somebody typing on a keyboard might seem to be an easy task, it becomes extremely challenging if it has to be automated. However, an automated tool is needed in the case of long-lasting surveillance procedures or long user activity, as a human being is able to reconstruct only a few characters per minute. The paper "ClearShot: Eavesdropping on Keyboard Input from Video" presents a novel approach to automatically recovering the text being typed on a keyboard, based solely on a video of the user typing.
Automatically recognizing the keys being pressed by a user is a hard problem that requires sophisticated motion analysis. Experiments show that, for a human, reconstructing a few sentences requires lengthy hours of slow-motion analysis of the video. The attacker might install a surveillance device in the room of the victim, might take control of an existing camera by exploiting a vulnerability in the camera's control software, or might simply point a mobile phone with an integrated camera at the laptop's keyboard when the victim is working in a public space.
Balzarotti's analysis is divided into two main phases (figure below).
The first phase analyzes the video recorded by the camera using computer vision techniques. For each frame of the video, the computer vision analysis computes the set of keys that were likely pressed, the set of keys that were certainly not pressed, and the position of space characters. Because the results of this phase of the analysis are noisy, a second phase, called the text analysis, is required. The goal of this phase is to remove errors using both language and context-sensitive techniques. The result of this phase is the reconstructed text, where each word is represented by a list of possible candidates, ranked by likelihood.
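The text-analysis phase can be illustrated with a short Python sketch (the dictionary, per-keystroke candidate sets and confidence values are invented): each keystroke contributes a set of plausible keys with confidences from the vision phase, and dictionary words consistent with those sets are ranked by likelihood.

    # Sketch of the text-analysis phase: noisy per-keystroke candidate sets from
    # the vision phase are matched against a dictionary and ranked by likelihood.

    def word_score(word, candidate_sets):
        if len(word) != len(candidate_sets):
            return 0.0
        score = 1.0
        for letter, candidates in zip(word, candidate_sets):
            score *= candidates.get(letter, 0.0)   # per-frame likelihood of that key
        return score

    dictionary = ["cat", "car", "bat"]
    # One dict per keystroke: keys judged "likely pressed" with a confidence value.
    frames = [{"c": 0.6, "b": 0.4}, {"a": 0.9}, {"t": 0.7, "r": 0.3}]
    ranked = sorted(dictionary, key=lambda w: word_score(w, frames), reverse=True)
    print(ranked)   # ['cat', 'bat', 'car'] -- candidates kept in order of likelihood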
Printer
Acoustic emanations
With acoustic emanations, an attack is possible that recovers what a dot-matrix printer processing English text is printing. It is based on a recording of the sound the printer makes, if the microphone is close enough to it. This attack recovers up to 72% of printed words, and up to 95% if knowledge about the text is available, with a microphone at a distance of 10 cm from the printer.
After an upfront training phase ("a" in the picture below), the attack ("b" in the picture below) is fully automated and uses a combination of machine learning, audio processing, and speech recognition techniques, including spectrum features, Hidden Markov Models and linear classification. The fundamental reason why the reconstruction of the printed text works is that the emitted sound becomes louder if more needles strike the paper at a given time. There is a correlation between the number of needles and the intensity of the acoustic emanation.
A training phase was conducted in which words from a dictionary are printed and characteristic sound features of these words are extracted and stored in a database. The trained characteristic features were used to recognize the printed English text. But this task is not trivial. Major challenges include:
Identifying and extracting sound features that suitably capture the acoustic emanation of dot-matrix printers;
Compensating for the blurred and overlapping features that are induced by the substantial decay time of the emanations;
Identifying and eliminating wrongly recognized words to increase the overall percentage of correctly identified words (recognition rate).
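The correlation noted above (more needles striking the paper produce a louder emission) suggests short-time acoustic energy as one crude feature. The Python sketch below is illustrative only and is far simpler than the spectrum features and Hidden Markov Models actually used; the audio data is synthetic.

    # Short-time energy as a crude proxy for "how many needles struck" in each burst.
    import numpy as np

    def short_time_energy(audio, window=441, hop=441):
        """Energy per window (~10 ms at 44.1 kHz); louder bursts imply more needles."""
        return np.array([np.sum(audio[i:i + window] ** 2)
                         for i in range(0, len(audio) - window + 1, hop)])

    # A loud burst followed by a quiet one: two "characters" of differing needle counts.
    audio = np.concatenate([np.random.randn(441) * 1.0, np.random.randn(441) * 0.2])
    print(short_time_energy(audio))   # first window clearly more energetic than the second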
Computer components
Network Interface Card
Timing attack
Timing attacks enable an attacker to extract secrets maintained in a security system by observing the time it takes the system to respond to various queries.
SSH is designed to provide a secure channel between two hosts. Despite the encryption and authentication mechanisms it uses, SSH has weaknesses. In interactive mode, every individual keystroke that a user types is sent to the remote machine in a separate IP packet immediately after the key is pressed, which leaks the inter-keystroke timing information of users' typing. Below, the picture represents the command su processed through an SSH connection.
Very simple statistical techniques suffice to reveal sensitive information such as the length of users' passwords or even root passwords. By using advanced statistical techniques on timing information collected from the network, an eavesdropper can learn significant information about what users type in SSH sessions. Because the time it takes the operating system to send out the packet after the keypress is in general negligible compared to the inter-keystroke timing, this also enables an eavesdropper to learn the precise inter-keystroke timings of users' typing from the arrival times of packets.
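A minimal Python sketch shows what an eavesdropper extracts from such a session (the packet timestamps below are invented): with one packet per keystroke, the packet count alone leaks the length of what was typed, and the gaps between arrivals approximate the inter-keystroke timings fed into the statistical models.

    # Inter-keystroke timings recovered from per-keystroke packet arrival times.
    arrival_times = [0.000, 0.142, 0.251, 0.455, 0.543]   # seconds, one packet per keystroke

    password_length = len(arrival_times)   # packet count alone leaks the length
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    print(password_length)                  # 5
    print([round(g, 3) for g in gaps])      # [0.142, 0.109, 0.204, 0.088]

    # Statistical models (for example per-digraph timing distributions learned from
    # typing data) can then assign likelihoods to candidate key pairs for each gap.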
Memory
Physical chemistry
Data remanence problems not only affect obvious areas such as RAM and non-volatile memory cells but can also occur in other areas of the device through hot-carrier effects (which change the characteristics of the semiconductors in the device) and various other effects which are examined alongside the more obvious memory-cell remanence problems. It is possible to analyse and recover data from these cells and from semiconductor devices in general long after it should (in theory) have vanished.
Electromigration, in which atoms are physically moved to new locations (physically altering the device itself), is another type of attack. It involves the relocation of metal atoms due to high current densities, a phenomenon in which atoms are carried along by an "electron wind" in the opposite direction to the conventional current, producing voids at the negative electrode and hillocks and whiskers at the positive electrode. Void formation leads to a local increase in current density and Joule heating (the interaction of electrons and metal ions to produce thermal energy), producing further electromigration effects. When the external stress is removed, the disturbed system tends to relax back to its original equilibrium state, resulting in a backflow which heals some of the electromigration damage. In the long term, though, this can cause device failure, but in less extreme cases it simply serves to alter a device's operating characteristics in noticeable ways.
For example, the excavation of voids leads to increased wiring resistance, and the growth of whiskers leads to contact formation and current leakage. An example of a conductor which exhibits whisker growth due to electromigration is shown in the figure below:
One example which exhibits void formation (in this case severe enough to have led to complete failure) is shown in this figure:
Temperature
Contrary to popular assumption, DRAMs used in most modern computers retain their contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard.
Many products do cryptographic and other security-related computations using secret keys or other variables that the equipment's operator must not be able to read out or alter. The usual solution is for the secret data to be kept in volatile memory inside a tamper-sensing enclosure. Security processors typically store secret key material in static RAM, from which power is removed if the device is tampered with. At temperatures below −20 °C, the contents of SRAM can be ‘frozen’. It is interesting to know the period of time for which a static RAM device will retain data once the power has been removed. Low temperatures can increase the data retention time of SRAM to many seconds or even minutes.
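Remanence experiments of this kind are typically quantified by writing a known pattern, removing power for some interval at a given temperature, re-reading the memory and counting the bits that decayed. The Python sketch below shows only the bookkeeping; the two byte strings are invented stand-ins for real dumps.

    # Fraction of bits that decayed between a written pattern and a later re-read.
    def decayed_bit_fraction(written, reread):
        flipped = sum(bin(a ^ b).count("1") for a, b in zip(written, reread))
        return flipped / (8 * len(written))

    written = bytes([0xFF] * 1024)
    reread  = bytes([0xFF] * 1000 + [0xFD, 0x7F] + [0xFF] * 22)   # a few bits lost
    print(decayed_bit_fraction(written, reread))   # ~0.0002 -> most content survived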
Read/Write exploits thanks to FireWire
Maximillian Dornseif presented a technique in a set of presentation slides which let him take control of an Apple computer using an iPod. The attack needed a first generic phase in which the iPod software was modified so that it behaved as master on the FireWire bus. The iPod then had full read/write access to the Apple computer when the iPod was plugged into a FireWire port. FireWire is used by audio devices, printers, scanners, cameras, GPS devices, etc. Generally, a device connected by FireWire has full access (read/write). Indeed, the OHCI standard (the FireWire standard) reads:
So, any device connected by FireWire can read and write data in the computer's memory. For example, a device can:
Grab the screen contents;
Just search the memory for strings such as login, passwords;
Scan for possible key material;
Search cryptographic keys stored in RAM;
Parse the whole physical memory to understand logical memory layout.
or
Mess up the memory;
Change screen content;
Change UID/GID of a certain process;
Inject code into a process;
Inject an additional process.
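The "scan for possible key material" step can be approximated with a simple entropy scan over a captured memory image, since symmetric keys look random while most memory content does not. The Python sketch below is illustrative only; the memory image is fabricated, and real tools additionally verify candidates (for example by checking AES key-schedule structure).

    # Entropy scan over a (fake) memory dump to flag regions that could hold keys.
    import math, os

    def entropy(block):
        """Shannon entropy in bits per byte of a small block."""
        counts = [block.count(b) for b in set(block)]
        total = len(block)
        return -sum(c / total * math.log2(c / total) for c in counts)

    def candidate_key_offsets(dump, size=32, threshold=4.5):
        return [i for i in range(0, len(dump) - size, size)
                if entropy(dump[i:i + size]) > threshold]

    dump = b"\x00" * 4096 + os.urandom(32) + b"\x00" * 4096   # fake memory image
    print(candidate_key_offsets(dump))   # the offset of the random block is flagged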
Processor
Cache attack
To increase computational power, processors are generally equipped with a cache memory which decreases the memory access latency. Below, the figure shows the hierarchy between the processor and the memory. First the processor looks for data in the L1 cache, then in the L2 cache, then in main memory.
When the data is not where the processor is looking for it, this is called a cache miss. Below, pictures show how the processor fetches data when there are two cache levels.
Unfortunately, caches contain only a small portion of the application data and can introduce additional latency to the memory transaction in the case of a miss. This also involves additional power consumption, due to the activation of memory devices lower in the memory hierarchy. The miss penalty has already been used to attack symmetric encryption algorithms, such as DES. The basic idea proposed in this paper is to force a cache miss while the processor is executing the AES encryption algorithm on a known plain text. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization.
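The principle can be illustrated with a toy Python simulation in which the timing side channel is replaced by a direct "which cache line was touched" oracle (a real attack infers this from miss timings, for example with prime-and-probe measurements); with known plaintexts, observing which table line is accessed in the first AES round narrows down the corresponding key byte.

    # Toy cache-attack simulation: the attacker learns which 16-entry table line
    # the victim touched, not the exact index, and intersects the constraints.
    LINE = 16                      # table entries per cache line
    secret_key_byte = 0x53

    def victim_lookup(plaintext_byte):
        index = plaintext_byte ^ secret_key_byte      # first-round AES-style table index
        return index // LINE                          # the cache line it lands in

    # With known plaintexts and observed lines, the high nibble of the key leaks:
    candidates = set(range(256))
    for p in range(0, 256, LINE):
        line = victim_lookup(p)
        candidates &= {k for k in range(256) if (p ^ k) // LINE == line}
    print([hex(k) for k in sorted(candidates)])       # 16 candidates sharing high nibble 0x5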
Timing attack
By carefully measuring the amount of time required to perform private key operations, attackers may be able to find fixed Diffie-Hellman exponents, factor RSA keys, and break other cryptosystems. Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext.
The attack can be treated as a signal detection problem. The signal consists of the timing variation due to the target exponent bit, and noise results from measurement inaccuracies and timing variations due to unknown exponent bits. The properties of the signal and noise determine the number of timing measurements required for the attack. Timing attacks can potentially be used against other cryptosystems, including symmetric functions.
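The source of the signal can be illustrated with a toy Python cost model of square-and-multiply exponentiation (illustrative only; real attacks measure wall-clock time over many operations and apply the statistical treatment described above): every 1 bit of the secret exponent costs an extra multiplication, so the running time carries information about the exponent.

    # Toy cost model: the multiplication count of square-and-multiply depends on
    # the secret exponent's bits, which is what timing measurements pick up.
    def modexp_with_cost(base, exponent, modulus):
        result, cost = 1, 0
        for bit in bin(exponent)[2:]:
            result = (result * result) % modulus; cost += 1          # always square
            if bit == "1":
                result = (result * base) % modulus; cost += 1        # extra multiply leaks
        return result, cost

    _, cost_a = modexp_with_cost(7, 0b10000001, 1009)   # two 1 bits in the exponent
    _, cost_b = modexp_with_cost(7, 0b11111111, 1009)   # eight 1 bits in the exponent
    print(cost_a, cost_b)   # 10 vs 16 multiplications: running time reflects the exponent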
Privilege escalation
A simple and generic processor backdoor can be used by attackers as a means of privilege escalation to obtain privileges equivalent to those of any given running operating system. Also, a non-privileged process in one of the non-privileged guest domains running on top of a virtual machine monitor can obtain privileges equivalent to those of the virtual machine monitor.
Loïc Duflot studied Intel processors in the paper "CPU bugs, CPU backdoors and consequences on security"; he explains that the processor defines four different privilege rings numbered from 0 (most privileged) to 3 (least privileged). Kernel code is usually running in ring 0, whereas user-space code is generally running in ring 3. The use of some security-critical assembly language instructions is restricted to ring 0 code. In order to escalate privilege through the backdoor, the attacker must:
activate the backdoor by placing the CPU in the desired state;
inject code and run it in ring 0;
get back to ring 3 in order to return the system to a stable state. Indeed, when code is running in ring 0, system calls do not work: leaving the system in ring 0 and running a random system call (typically exit()) is likely to crash the system.
The backdoors Loïc Duflot presents are simple as they only modify the behavior of three assembly language instructions and have very simple and specific activation conditions, so that they are very unlikely to be accidentally activated. Recent inventions have begun to target these types of processor-based escalation attacks.
References
Bibliography
Acoustic
Cache attack
Chemical
Electromagnetic
FireWire
Processor bug and backdoors
Temperature
Timing attacks
Other
Computer security
Risk analysis |
45083975 | https://en.wikipedia.org/wiki/Philip%20Newcomb | Philip Newcomb | Philip H. Newcomb (born 1950s) is an American software engineer and CEO of The Software Revolution, Inc., known for his work in the field of formal methods of software engineering.
Biography
Newcomb started his studies at Indiana University in 1972, and obtained his BSc in Cognitive Psychology in 1976. In 1977 he did graduate work in computer science at the University of Washington and at Carnegie Mellon University. In 1984 he continued his studies at Ball State University, where he obtained his MA in Computer Science in 1988.
In 1983 Newcomb started as a researcher at the Boeing Artificial Intelligence Center in Seattle, working in the field of formal methods for software engineering and artificial intelligence. He became senior principal scientist, and in 1989 director of the Software Reverse, Reengineering and Reuse Program. In 1995 he founded his own company, The Software Revolution, Inc., delivering solutions for software modernization.
In 2012 he was awarded the Stevens Award in recognition of his outstanding contributions to the literature or practice of methods for software and systems development.
Selected publications
Ulrich, William M., and Philip Newcomb. Information Systems Transformation: Architecture-Driven Modernization Case Studies. Morgan Kaufmann, 2010.
Articles, a selection:
Newcomb, Philip, and Lawrence Markosian. "Automating the modularization of large COBOL programs: application of an enabling technology for reengineering." Proceedings of the Working Conference on Reverse Engineering. IEEE, 1993.
Markosian, L., Newcomb, P., Brand, R., Burson, S., & Kitzmiller, T. (1994). "Using an enabling technology to reengineer legacy systems." Communications of the ACM, 37(5), 58-70.
Newcomb, Philip, and Gordon Kotik. "Reengineering procedural into object-oriented systems." Proceedings of the Working Conference on Reverse Engineering (WCRE). IEEE Computer Society, 1995.
Newcomb, Philip. "Architecture-driven modernization (ADM)." Proceedings of the Working Conference on Reverse Engineering (WCRE). IEEE Computer Society, 2005.
References
External links
Philip H. Newcomb at tsri.com
1950s births
Living people
American computer scientists
Information systems researchers
Indiana State University alumni
University of Washington alumni
Carnegie Mellon University alumni
Ball State University alumni |
189937 | https://en.wikipedia.org/wiki/Universally%20unique%20identifier | Universally unique identifier | A universally unique identifier (UUID) is a 128-bit label used for information in computer systems. The term globally unique identifier (GUID) is also used.
When generated according to the standard methods, UUIDs are, for practical purposes, unique. Their uniqueness does not depend on a central registration authority or coordination between the parties generating them, unlike most other numbering schemes. While the probability that a UUID will be duplicated is not zero, it is close enough to zero to be negligible.
Thus, anyone can create a UUID and use it to identify something with near certainty that the identifier does not duplicate one that has already been, or will be, created to identify something else. Information labeled with UUIDs by independent parties can therefore be later combined into a single database or transmitted on the same channel, with a negligible probability of duplication.
Adoption of UUIDs is widespread, with many computing platforms providing support for generating them and for parsing their textual representation.
History
In the 1980s Apollo Computer originally used UUIDs in the Network Computing System (NCS) and later in the Open Software Foundation's (OSF) Distributed Computing Environment (DCE). The initial design of DCE UUIDs was based on the NCS UUIDs, whose design was in turn inspired by the (64-bit) unique identifiers defined and used pervasively in Domain/OS, an operating system designed by Apollo Computer. Later, the Microsoft Windows platforms adopted the DCE design as "globally unique identifiers" (GUIDs). RFC 4122 registered a URN namespace for UUIDs and recapitulated the earlier specifications, with the same technical content.
When in July 2005 RFC 4122 was published as a proposed IETF standard, the ITU had also standardized UUIDs, based on the previous standards and early versions of RFC 4122.
Standards
UUIDs are standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE).
UUIDs are documented as part of ISO/IEC 11578:1996 "Information technology – Open Systems Interconnection – Remote Procedure Call (RPC)" and more recently in ITU-T Rec. X.667 | ISO/IEC 9834-8:2005.
The Internet Engineering Task Force (IETF) published the Standards-Track RFC 4122, technically equivalent to ITU-T Rec. X.667 | ISO/IEC 9834-8.
Format
In its canonical textual representation, the 16 octets of a UUID are represented as 32 hexadecimal (base-16) digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12 for a total of 36 characters (32 hexadecimal characters and 4 hyphens). For example:
123e4567-e89b-12d3-a456-426614174000
xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx
The four-bit M and the 1 to 3 bit N fields code the format of the UUID itself.
The four bits of digit M are the UUID version, and the 1 to 3 most significant bits of digit N code the UUID variant. (See below.) In the example, M is 1, and N is a (binary 10xx), meaning that this is a version-1, variant-1 UUID; that is, a time-based DCE/RFC 4122 UUID.
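A minimal sketch of reading these two digits with Python's standard uuid module, using the example value shown above:

import uuid

u = uuid.UUID("123e4567-e89b-12d3-a456-426614174000")
s = str(u)

print(u.version)                      # 1 -> a time-based UUID
print(u.variant)                      # 'specified in RFC 4122' -> variant 1
print(int(s[14], 16))                 # digit M read by hand: 1
print(format(int(s[19], 16), "04b"))  # digit N in binary: 1010; the high bits 10 mark variant 1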
The canonical 8-4-4-4-12 format string is based on the record layout for the 16 bytes of the UUID:
These fields correspond to those in version 1 and 2 UUIDs (that is, time-based UUIDs), but the same 8-4-4-4-12 representation is used for all UUIDs, even for UUIDs constructed differently.
RFC 4122 Section 3 requires that the characters be generated in lower case, while being case-insensitive on input.
Microsoft GUIDs are sometimes represented with surrounding braces:
{123e4567-e89b-12d3-a456-426652340000}
This format should not be confused with "Windows Registry format", which refers to the format within the curly braces.
RFC 4122 defines a Uniform Resource Name (URN) namespace for UUIDs. A UUID presented as a URN appears as follows:
urn:uuid:123e4567-e89b-12d3-a456-426655440000
Encoding
The binary encoding of UUIDs varies between systems. Variant 1 UUIDs, nowadays the most common variant, are encoded in a big-endian format. For example, 00112233-4455-6677-8899-aabbccddeeff is encoded as the bytes 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff.
Variant 2 UUIDs, historically used in Microsoft's COM/OLE libraries, use a mixed-endian format, whereby the first three components of the UUID are little-endian, and the last two are big-endian. For example, 00112233-4455-6677-c899-aabbccddeeff is encoded as the bytes 33 22 11 00 55 44 77 66 c8 99 aa bb cc dd ee ff. See the section on Variants for details on why the '88' byte becomes 'c8' in Variant 2.
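The two byte orders can be inspected with Python's uuid module, which exposes the big-endian form as bytes and the mixed-endian form (as used for variant-2 GUID storage) as bytes_le. Note that bytes_le only reorders bytes; it does not change the variant bits, so the '88' of this variant-1 example stays '88'. A short sketch:

import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

print(u.bytes.hex())     # 00112233445566778899aabbccddeeff  (big-endian encoding)
print(u.bytes_le.hex())  # 33221100554477668899aabbccddeeff  (first three fields little-endian)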
Variants
The "variant" field of UUIDs, or the N position indicate their format and encoding. RFC 4122 defines four variants of lengths 1 to 3 bits:
Variant 0 (indicated by the one-bit pattern 0xxx2, N = 0..7) is for backwards compatibility with the now-obsolete Apollo Network Computing System 1.5 UUID format developed around 1988. The first 6 octets of the UUID are a 48-bit timestamp (the number of 4-microsecond units of time since 1 January 1980 UTC); the next 2 octets are reserved; the next octet is the "address family"; and the final 7 octets are a 56-bit host ID in the form specified by the address family. Though different in detail, the similarity with modern version-1 UUIDs is evident. The variant bits in the current UUID specification coincide with the high bits of the address family octet in NCS UUIDs. Though the address family could hold values in the range 0..255, only the values 0..13 were ever defined. Accordingly, the variant-0 bit pattern 0xxx avoids conflicts with historical NCS UUIDs, should any still exist in databases.
Variant 1 (10xx2, N = 8..b, 2 bits) are referred to as RFC 4122/DCE 1.1 UUIDs, or "Leach–Salz" UUIDs, after the authors of the original Internet Draft.
Variant 2 (110x2, = c..d, 3 bits) is characterized in the RFC as "reserved, Microsoft Corporation backward compatibility" and was used for early GUIDs on the Microsoft Windows platform. It differs from variant 1 only by the endianness in binary storage or transmission: variant-1 UUIDs use "network" (big-endian) byte order, while variant-2 GUIDs use "native" (little-endian) byte order for some subfields of the UUID.
Reserved is defined as the 3-bit variant bit pattern 111x2 (N = e..f).
Variants 1 and 2 are used by the current UUID specification. In their textual representations, variants 1 and 2 are the same, except for the variant bits. In the binary representation, there is an endianness difference. When byte swapping is required to convert between the big-endian byte order of variant 1 and the little-endian byte order of variant 2, the fields above define the swapping. The first three fields are unsigned 32- and 16-bit integers and are subject to swapping, while the last two fields consist of uninterpreted bytes, not subject to swapping. This byte swapping applies even for versions 3, 4, and 5, where the canonical fields do not correspond to the content of the UUID.
While some important GUIDs, such as the identifier for the Component Object Model IUnknown interface, are nominally variant-2 UUIDs, many identifiers generated and used in Microsoft Windows software and referred to as "GUIDs" are standard variant-1 RFC 4122/DCE 1.1 network-byte-order UUIDs, rather than little-endian variant-2 UUIDs. The current version of the Microsoft guidgen tool produces standard variant-1 UUIDs. Some Microsoft documentation states that "GUID" is a synonym for "UUID", as standardized in RFC 4122. RFC 4122 itself states that UUIDs "are also known as GUIDs". All this suggests that "GUID", while originally referring to a variant of UUID used by Microsoft, has become simply an alternative name for UUID, with both variant-1 and variant-2 GUIDs being extant.
Versions
For both variants 1 and 2, five "versions" are defined in the standards, and each version may be more appropriate than the others in specific use cases. Version is indicated by the M in the string representation.
Version-1 UUIDs are generated from a time and a node ID (usually the MAC address); version-2 UUIDs are generated from an identifier (usually a group or user ID), time, and a node ID; versions 3 and 5 produce deterministic UUIDs generated by hashing a namespace identifier and name; and version-4 UUIDs are generated using a random or pseudo-random number.
Nil UUID
The "nil" UUID, a special case, is the UUID 00000000-0000-0000-0000-000000000000; that is, all bits set to zero.
Version 1 (date-time and MAC address)
Version 1 concatenates the 48-bit MAC address of the "node" (that is, the computer generating the UUID), with a 60-bit timestamp, being the number of 100-nanosecond intervals since midnight 15 October 1582 Coordinated Universal Time (UTC), the date on which the Gregorian calendar was first adopted. RFC 4122 states that the time value rolls over around 3400 AD, depending on the algorithm used, which implies that the 60-bit timestamp is a signed quantity. However some software, such as the libuuid library, treats the timestamp as unsigned, putting the rollover time in 5236 AD. The rollover time as defined by ITU-T Rec. X.667 is 3603 AD.
A 13-bit or 14-bit "uniquifying" clock sequence extends the timestamp in order to handle cases where the processor clock does not advance fast enough, or where there are multiple processors and UUID generators per node. When UUIDs are generated faster than the system clock can advance, the lower bits of the timestamp field can be incremented each time a UUID is generated, to simulate a high-resolution timestamp. With each version-1 UUID corresponding to a single point in space (the node) and time (intervals and clock sequence), the chance of two properly generated version-1 UUIDs being unintentionally the same is practically nil. Since the time and clock sequence total 74 bits, 2^74 (approximately 1.8 × 10^22, or 18 sextillion) version-1 UUIDs can be generated per node ID, at a maximal average rate of 163 billion per second per node ID.
In contrast to other UUID versions, version-1 and -2 UUIDs based on MAC addresses from network cards rely for their uniqueness in part on an identifier issued by a central registration authority, namely the Organizationally Unique Identifier (OUI) part of the MAC address, which is issued by the IEEE to manufacturers of networking equipment. The uniqueness of version-1 and version-2 UUIDs based on network-card MAC addresses also depends on network-card manufacturers properly assigning unique MAC addresses to their cards, which like other manufacturing processes is subject to error. Additionally, some operating systems permit the end user to customise the MAC address, notably OpenWRT.
Usage of the node's network card MAC address for the node ID means that a version-1 UUID can be tracked back to the computer that created it. Documents can sometimes be traced to the computers where they were created or edited through UUIDs embedded into them by word processing software. This privacy hole was used when locating the creator of the Melissa virus.
RFC 4122 does allow the MAC address in a version-1 (or 2) UUID to be replaced by a random 48-bit node ID, either because the node does not have a MAC address, or because it is not desirable to expose it. In that case, the RFC requires that the least significant bit of the first octet of the node ID should be set to 1. This corresponds to the multicast bit in MAC addresses, and setting it serves to differentiate UUIDs where the node ID is randomly generated from UUIDs based on MAC addresses from network cards, which typically have unicast MAC addresses.
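A brief Python sketch of decoding these fields: it generates a version-1 UUID, recovers the embedded 100-nanosecond timestamp relative to 15 October 1582, and checks the multicast bit of the node field to see whether a random node ID was used instead of a MAC address.

import uuid
from datetime import datetime, timedelta, timezone

u = uuid.uuid1()  # may embed the host's MAC address, so treat the output as sensitive

# 60-bit timestamp: 100 ns intervals since 1582-10-15 00:00 UTC.
gregorian_epoch = datetime(1582, 10, 15, tzinfo=timezone.utc)
created = gregorian_epoch + timedelta(microseconds=u.time / 10)

# Least significant bit of the first node octet set -> randomly generated node ID
# (mirrors the multicast bit of real MAC addresses, which are normally unicast).
random_node = bool((u.node >> 40) & 1)

print(u, u.version)  # ... 1
print(created)       # approximately the current UTC time
print(random_node)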
Version 2 (date-time and MAC address, DCE security version)
RFC 4122 reserves version 2 for "DCE security" UUIDs; but it does not provide any details. For this reason, many UUID implementations omit version 2. However, the specification of version-2 UUIDs is provided by the DCE 1.1 Authentication and Security Services specification.
Version-2 UUIDs are similar to version 1, except that the least significant 8 bits of the clock sequence are replaced by a "local domain" number, and the least significant 32 bits of the timestamp are replaced by an integer identifier meaningful within the specified local domain. On POSIX systems, local-domain numbers 0 and 1 are for user ids (UIDs) and group ids (GIDs) respectively, and other local-domain numbers are site-defined. On non-POSIX systems, all local domain numbers are site-defined.
The ability to include a 40-bit domain/identifier in the UUID comes with a tradeoff. On the one hand, 40 bits allow about 1 trillion domain/identifier values per node ID. On the other hand, with the clock value truncated to the 28 most significant bits, compared to 60 bits in version 1, the clock in a version 2 UUID will "tick" only once every 429.49 seconds, a little more than 7 minutes, as opposed to every 100 nanoseconds for version 1. And with a clock sequence of only 6 bits, compared to 14 bits in version 1, only 64 unique UUIDs per node/domain/identifier can be generated per 7-minute clock tick, compared to 16,384 clock sequence values for version 1. Thus, Version 2 may not be suitable for cases where UUIDs are required, per node/domain/identifier, at a rate exceeding about one every seven minutes.
Versions 3 and 5 (namespace name-based)
Version-3 and version-5 UUIDs are generated by hashing a namespace identifier and name. Version 3 uses MD5 as the hashing algorithm, and version 5 uses SHA-1.
The namespace identifier is itself a UUID. The specification provides UUIDs to represent the namespaces for URLs, fully qualified domain names, object identifiers, and X.500 distinguished names; but any desired UUID may be used as a namespace designator.
To determine the version-3 UUID corresponding to a given namespace and name, the UUID of the namespace is transformed to a string of bytes, concatenated with the input name, then hashed with MD5, yielding 128 bits. Then 6 or 7 bits are replaced by fixed values, the 4-bit version (e.g. binary 0011 for version 3), and the 2- or 3-bit UUID "variant" (e.g. binary 10 indicating an RFC 4122 UUID, or binary 110 indicating a legacy Microsoft GUID). Since 6 or 7 bits are thus predetermined, only 121 or 122 bits contribute to the uniqueness of the UUID.
Version-5 UUIDs are similar, but SHA-1 is used instead of MD5. Since SHA-1 generates 160-bit digests, the digest is truncated to 128 bits before the version and variant bits are replaced.
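A short Python sketch that both calls the library functions and reproduces a version-5 UUID by hand from the algorithm just described (SHA-1 over the namespace bytes plus the name, truncated to 128 bits, with the version and variant bits overwritten); the name 'example.org' is an arbitrary choice:

import hashlib
import uuid

name = "example.org"

# Standard library call
v5 = uuid.uuid5(uuid.NAMESPACE_DNS, name)

# Manual construction following the algorithm in the text
digest = hashlib.sha1(uuid.NAMESPACE_DNS.bytes + name.encode("utf-8")).digest()[:16]
raw = bytearray(digest)
raw[6] = (raw[6] & 0x0F) | 0x50  # version 5 in the high nibble of octet 6
raw[8] = (raw[8] & 0x3F) | 0x80  # variant 1: high bits 10 of octet 8
manual = uuid.UUID(bytes=bytes(raw))

print(v5 == manual)                                # True
print(v5 == uuid.uuid5(uuid.NAMESPACE_DNS, name))  # True: same namespace and name give the same UUID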
Version-3 and version-5 UUIDs have the property that the same namespace and name will map to the same UUID. However, neither the namespace nor name can be determined from the UUID, even if one of them is specified, except by brute-force search. RFC 4122 recommends version 5 (SHA-1) over version 3 (MD5), and warns against use of UUIDs of either version as security credentials.
Version 4 (random)
A version 4 UUID is randomly generated. As in other UUIDs, 4 bits are used to indicate version 4, and 2 or 3 bits to indicate the variant (binary 10 or 110 for variants 1 and 2 respectively). Thus, for variant 1 (that is, most UUIDs) a random version-4 UUID will have 6 predetermined variant and version bits, leaving 122 bits for the randomly generated part, for a total of 2^122, or about 5.3 × 10^36 (5.3 undecillion) possible version-4 variant-1 UUIDs. There are half as many possible version-4 variant-2 UUIDs (legacy GUIDs) because there is one fewer random bit available, 3 bits being consumed for the variant.
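A minimal sketch of the same construction alongside Python's library call: draw 16 random bytes, then force the six version and variant bits.

import os
import uuid

# Standard library: a random version-4, variant-1 UUID.
print(uuid.uuid4())

# The same construction by hand: 128 random bits with the 6 fixed bits overwritten.
raw = bytearray(os.urandom(16))
raw[6] = (raw[6] & 0x0F) | 0x40  # version 4
raw[8] = (raw[8] & 0x3F) | 0x80  # variant 1, leaving 122 random bits
print(uuid.UUID(bytes=bytes(raw)))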
Collisions
Collision occurs when the same UUID is generated more than once and assigned to different referents. In the case of standard version-1 and version-2 UUIDs using unique MAC addresses from network cards, collisions are unlikely to occur, with an increased possibility only when an implementation varies from the standards, either inadvertently or intentionally.
In contrast to version-1 and version-2 UUIDs generated using MAC addresses, with version-1 and -2 UUIDs which use randomly generated node IDs, hash-based version-3 and version-5 UUIDs, and random version-4 UUIDs, collisions can occur even without implementation problems, albeit with a probability so small that it can normally be ignored. This probability can be computed precisely based on analysis of the birthday problem.
For example, the number of random version-4 UUIDs which need to be generated in order to have a 50% probability of at least one collision is 2.71 quintillion, computed from the birthday-problem approximation as n ≈ √(2 × 2^122 × ln 2) ≈ 2.71 × 10^18.
This number is equivalent to generating 1 billion UUIDs per second for about 85 years. A file containing this many UUIDs, at 16 bytes per UUID, would be about 45 exabytes.
The smallest number of version-4 UUIDs which must be generated for the probability of finding a collision to be p is approximated by the formula n ≈ √(2 × 2^122 × ln(1/(1 − p))).
Thus, the probability to find a duplicate within 103 trillion version-4 UUIDs is one in a billion.
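These figures can be checked directly from the approximation above; a small Python computation (the constants quoted in the text are rounded):

import math

M = 2 ** 122  # number of distinct version-4 variant-1 UUIDs

def uuids_for_collision_probability(p):
    # Birthday-problem approximation: n ~ sqrt(2 * M * ln(1 / (1 - p)))
    return math.sqrt(2 * M * math.log(1 / (1 - p)))

print(uuids_for_collision_probability(0.5))   # ~2.71e18 (about 2.71 quintillion)
print(uuids_for_collision_probability(1e-9))  # ~1.03e14 (about 103 trillion)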
Uses
Significant uses include ext2/ext3/ext4 filesystem userspace tools (e2fsprogs uses libuuid provided by util-linux), LVM, LUKS encrypted partitions, GNOME, KDE, and macOS, most of which are derived from the original implementation by Theodore Ts'o.
One of the uses of UUIDs in Solaris (using Open Software Foundation implementation) is identification of a running operating system instance for the purpose of pairing crash dump data with Fault Management Event in the case of kernel panic.
In COM
There are several flavors of GUIDs used in Microsoft's Component Object Model (COM):
IID – interface identifier; the ones that are registered on a system are stored in the Windows Registry
CLSID – class identifier; also stored in the Registry
LIBID – type library identifier; also stored in the Registry
CATID – category identifier; its presence on a class identifies it as belonging to certain class categories
As database keys
UUIDs are commonly used as a unique key in database tables. The NEWID function in Microsoft SQL Server version 4 Transact-SQL returns standard random version-4 UUIDs, while the NEWSEQUENTIALID function returns 128-bit identifiers similar to UUIDs which are committed to ascend in sequence until the next system reboot. The Oracle Database SYS_GUID function does not return a standard GUID, despite the name. Instead, it returns a 16-byte 128-bit RAW value based on a host identifier and a process or thread identifier, somewhat similar to a GUID. PostgreSQL contains a UUID datatype and can generate most versions of UUIDs through the use of functions from modules. MySQL provides a UUID function, which generates standard version-1 UUIDs.
The random nature of standard UUIDs of versions 3, 4, and 5, and the ordering of the fields within standard versions 1 and 2 may create problems with database locality or performance when UUIDs are used as primary keys. For example, in 2002 Jimmy Nilsson reported a significant improvement in performance with Microsoft SQL Server when the version-4 UUIDs being used as keys were modified to include a non-random suffix based on system time. This so-called "COMB" (combined time-GUID) approach made the UUIDs non-standard and significantly more likely to be duplicated, as Nilsson acknowledged, but Nilsson only required uniqueness within the application. By reordering and encoding version 1 and 2 UUIDs so that the timestamp comes first, insertion performance loss can be averted.
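A sketch of the "timestamp first" idea in Python: the leading 48 bits hold a millisecond Unix timestamp so that later keys sort later, the version-4 and variant-1 marker bits are kept, and the remaining bits are random. This illustrates the general COMB/ordered-UUID layout rather than any particular library's exact format.

import os
import time
import uuid

def timestamp_first_uuid():
    # 48-bit millisecond timestamp followed by 80 random bits.
    millis = int(time.time() * 1000) & ((1 << 48) - 1)
    raw = bytearray(millis.to_bytes(6, "big") + os.urandom(10))
    raw[6] = (raw[6] & 0x0F) | 0x40  # keep the version-4 marker
    raw[8] = (raw[8] & 0x3F) | 0x80  # variant 1
    return uuid.UUID(bytes=bytes(raw))

a = timestamp_first_uuid()
b = timestamp_first_uuid()
print(a, b)
print(a <= b or a.bytes[:6] == b.bytes[:6])  # later keys sort later, at millisecond resolution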
Some web frameworks, such as Laravel, have support for "timestamp first" UUIDs that may be efficiently stored in an indexed database column. This makes a COMB UUID using the version-4 format, but where the first 48 bits make up a timestamp laid out as in UUIDv1. More specific formats based on the COMB UUID idea include:
"ULID", which ditches the 4 bits used to indicate version 4, and uses a base32 encoding by default.
UUID versions 6 through 8, a formal proposal of three COMB UUID formats.
See also
Birthday attack
Object identifier (OID)
Uniform Resource Identifier (URI)
Snowflake ID
References
External links
Standards
Recommendation ITU-T X.667 (Free access)
ISO/IEC 9834-8:2014 (Paid)
ITU-T UUID Generator
Universally Unique Identifiers (UUIDs)
Technical Articles
Technical Note TN2166 - Secrets of the GPT - Apple Developer
UUID Documentation - Commons Id
Class UUID - Java Documentation
CLSID Key - Microsoft Docs
Universal Unique Identifier - The Open Group Library
Miscellaneous
UUID Decoder tool
A Brief History of the UUID
Understanding How UUIDs Are Generated
Implementation in various languages
Golang - google/uuid
PHP - ramsey/uuid
C++ - Boost UUID
Linux or C - libuuid
Python - uuid.py
Java - java.util.UUID
C# - System.Guid
Javascript - Crypto.randomUUID
Unique identifiers
Windows administration
1996 establishments |
5060742 | https://en.wikipedia.org/wiki/Pathworks | Pathworks | PATHWORKS (it was usually written in all caps) was the trade name used by Digital Equipment Corporation of Maynard, Massachusetts for a series of programs that eased the interoperation of Digital's minicomputers and servers with personal computers. It was available for both PC and Mac systems, with support for MS-DOS, OS/2 and Microsoft Windows on the PC. Before it was named PATHWORKS, it was known as PCSA (Personal Computing Systems Architecture).
The server part of Pathworks ran on OpenVMS and Ultrix (and later Digital UNIX) and enabled a system or cluster to act as a file and print server for client IBM PC compatible and Macintosh workstations. A version of Pathworks server for OS/2 was also available, allowing a PC with OS/2 to act as a server to other PCs. Pathworks server was derived from LanMan/X, the portable version of OS/2 LAN Manager.
PATHWORKS was one of DEC's most successful products ever. Analysis of sales showed that on average, each PATHWORKS license dragged at least $3,000 USD in server revenue (server HW, SW, storage, printers, networking, and services), so it was a major driver for DEC's revenue in the mid and late 1980s.
Later versions of PATHWORKS were known as Advanced Server for OpenVMS (or Advanced Server for Unix for Tru64). Advanced Server was replaced on OpenVMS by Samba at the time of the porting of VMS to Itanium. This was due to the amount of effort required to keep Advanced Server compatible with new versions of Windows and the Server Message Block (SMB) protocol.
Features
Once installed onto the PCs, the Pathworks client provided the following features:
DECnet, and later TCP/IP, end-node connectivity with the host and client systems
PowerTerm 525 Terminal emulation software from Ericom
eXcursion, an X11 server for Windows.
File-transfer software. This was a DECnet-DOS file transfer utility, although it was somewhat superfluous because the PATHWORKS server software presented VMS or UNIX files to the PC clients as if they were PC files being served by a Windows server.
The PATHWORKS server software provided access to server file storage and print services using the native Microsoft protocols. Later versions of PATHWORKS servers on VMS supported NetWare and Macintosh clients, but they never achieved the volumes of the Microsoft clients. For clients running a GUI such as Windows 3.x, additional components available included an X window system server, allowing clients to access graphical apps running on VMS or UNIX hosts, and clients for DEC's ALL-IN-1 email and groupware system. Although primitive by modern standards, PATHWORKS was very sophisticated for its time; far more than just a file and print server, it made client microcomputers into terminals and workstations on a DEC network.
Implementation
LanMan normally ran across Microsoft's basic, non-routable NetBIOS/NetBEUI NBF protocol, but Pathworks included a DECnet stack, including layers like the LAT transport used for terminal sessions. The complexity of DECnet by 1980s PC standards meant that the Pathworks client was a huge software stack to have resident in MS-DOS; configuring the Pathworks client was a complex task, made more so by the need to preserve enough conventional memory for DOS applications to run. To keep a reasonable amount of base memory free mandated the use of QEMM or a similar memory manager. This problem went away once 386-based PCs became prevalent and MS Windows provided built-in support for large amounts of memory.
References
OpenVMS software
Lawrence W. White - Pathworks Product Manager
Bob Nusbaum - PATHWORKS Product Manager |
21263 | https://en.wikipedia.org/wiki/Korean%20People%27s%20Army | Korean People's Army | The Korean People's Army (KPA; ) is the military force of North Korea and the armed wing of the Workers' Party of Korea (WPK). Under the Songun policy, it is the central institution of North Korean society. Kim Jong-un serves as Supreme Commander and the chairman of the Central Military Commission. The KPA consists of five branches: the Ground Force, the Naval Force, the Air and Anti-Air Force, the Strategic Rocket Forces, and the Special Operation Force.
The KPA considers its primary adversaries to be the Republic of Korea Armed Forces and United States Forces Korea, across the Korean Demilitarized Zone, as it has since the Armistice Agreement of July 1953. it is the second largest military organisation in the world, with of the North Korean population actively serving, in reserve or in a paramilitary capacity.
History
Korean People's Revolutionary Army 1932–1948
Kim Il-sung's anti-Japanese guerrilla army, the Korean People's Revolutionary Army, was established on 25 April 1932. This revolutionary army was transformed into the regular army on 8 February 1948. Both these dates are celebrated as army days, with decennial anniversaries treated as major celebrations, except from 1978 to 2014 when only the 1932 anniversary was celebrated.
Korean Volunteer Army 1939–1948
In 1939, the Korean Volunteer Army (KVA), was formed in Yan'an, China. The two individuals responsible for the army were Kim Tu-bong and Mu Chong. At the same time, a school was established near Yan'an for training military and political leaders for a future independent Korea. By 1945, the KVA had grown to approximately 1,000 men, mostly Korean deserters from the Imperial Japanese Army. During this period, the KVA fought alongside the Chinese communist forces from which it drew its arms and ammunition. After the defeat of the Japanese, the KVA accompanied the Chinese communist forces into eastern Jilin, intending to gain recruits from ethnic Koreans in China, particularly from Yanbian, and then enter Korea.
Soviet Korean Units
Just after World War II and during the Soviet Union's occupation of the part of Korea north of the 38th Parallel, the Soviet 25th Army headquarters in Pyongyang issued a statement ordering all armed resistance groups in the northern part of the peninsula to disband on 12 October 1945. Two thousand Koreans with previous experience in the Soviet army were sent to various locations around the country to organise constabulary forces with permission from Soviet military headquarters, and the force was created on 21 October 1945.
Formation of National Army
The headquarters felt a need for a separate unit for security around railways, and the formation of the unit was announced on 11 January 1946. That unit was activated on 15 August of the same year to supervise existing security forces and creation of the national armed forces.
Military institutes such as the Pyongyang Academy (became No. 2 KPA Officers School in Jan. 1949) and the Central Constabulary Academy (became KPA Military Academy in Dec. 1948) soon followed for the education of political and military officers for the new armed forces.
After the military was organised and facilities to educate its new recruits were constructed, the Constabulary Discipline Corps was reorganised into the Korean People's Army General Headquarters. The previously semi-official units became military regulars with the distribution of Soviet uniforms, badges, and weapons that followed the inception of the headquarters.
The State Security Department, a forerunner to the Ministry of People's Defense, was created as part of the Interim People's Committee on 4 February 1948. The formal creation of the Korean People's Army was announced four days later on 8 February, the day after the Fourth Plenary Session of the People's Assembly approved the plan to separate the roles of the military and those of the police, seven months before the government of the Democratic People's Republic of Korea was proclaimed on 9 September 1948. In addition, the Ministry of State for the People's Armed Forces was established, which controlled a central guard battalion, two divisions, and an independent mixed and combined arms brigade.
Conflicts and events
Before the outbreak of the Korean War, Joseph Stalin equipped the KPA with modern tanks, trucks, artillery, and small arms (at the time, the South Korean Army had nothing remotely comparable either in numbers of troops or equipment). During the opening phases of the Korean War in 1950, the KPA quickly drove South Korean forces south and captured Seoul, only to lose 70,000 of their 100,000-strong army in the autumn after U.S. amphibious landings at the Battle of Incheon and a subsequent drive to the Yalu River. On 4 November, China openly staged a military intervention. On 7 December, Kim Il-sung was deprived of the right of command of the KPA by China. The KPA subsequently played a secondary role to Chinese forces for the remainder of the conflict. By the time of the Armistice in 1953, the KPA had sustained 290,000 casualties and lost 90,000 men as POWs.
In 1953, the Military Armistice Commission (MAC) was able to oversee and enforce the terms of the armistice. The Neutral Nations Supervisory Commission (NNSC), made up of delegations from Czechoslovakia, Poland, Sweden and Switzerland, carried out inspections to ensure implementation of the terms of the Armistice that prevented reinforcements or new weapons being brought into Korea.
Soviet thinking on the strategic scale was replaced since December 1962 with a people's war concept. The Soviet idea of direct warfare was replaced with a Maoist war of attrition strategy. Along with the mechanisation of some infantry units, more emphasis was put on light weapons, high-angle indirect fire, night fighting, and sea denial.
Date of establishment history
Until 1977, the Korean People's Army's official date of establishment was 8 February 1948. In 1978 it was changed to 25 April 1932, the date on which Kim Il-sung's anti-Japanese guerrilla army, the Korean People's Revolutionary Army, considered the predecessor of the Korean People's Army, was formed. The date of establishment was officially changed back to 8 February 1948 by 2019, however.
Organization
Commission and leadership
The primary path for command and control of the KPA extends through the National Defence Commission which was led by its chairman Kim Jong-il until 2011, to the Ministry of Defence and its General Staff Department. From there on, command and control flows to the various bureaus and operational units. A secondary path, to ensure political control of the military establishment, extends through the Workers' Party of Korea's Central Military Commission.
Since 1990, numerous and dramatic transformations within North Korea have led to the current command and control structure. The details of the majority of these changes are simply unknown to the world. What little is known indicates that many changes were the natural result of the deaths of the aging leadership including Kim Il-sung (July 1994), Minister of People's Armed Forces O Chin-u (February 1995) and Minister of Defence Choi Kwang (February 1997).
The vast majority of changes were undertaken to secure the power and position of Kim Jong-il. The State Affairs Commission (SAC), from its founding in 1972 (originally as the National Defence Commission), was part of the Central People's Committee (CPC), while the Ministry of Defence, from 1982 onward, was under direct presidential control. At the Eighteenth session of the sixth Central People's Committee, held on 23 May 1990, the SAC became established as its own independent commission, rising to the same status as the CPC (now the Cabinet of North Korea) and no longer subordinated to it, as had been the case before. Concurrent with this, Kim Jong-il was appointed first vice-chairman of the State Affairs Commission. The following year, on 24 December 1991, Kim Jong-il was appointed Supreme Commander of the Korean People's Army. Four months later, on 20 April 1992, Kim Jong-il was awarded the rank of Marshal and his father, by virtue of being the KPA's founding commander-in-chief, became Grand Marshal as a result, and one year later he became the Chairman of the State Affairs Commission, by then under Supreme People's Assembly control under the 1992 constitution as amended.
Almost all officers of the KPA began their military careers as privates; only very few people are admitted to a military academy without prior service. The result is an egalitarian military system where officers are familiar with the life of a military private and "military nobility" is all but nonexistent.
Within the KPA, between December 1991 and December 1995, nearly 800 high officers (out of approximately 1,200) received promotions and preferential assignments. Three days after Kim Jong-il became Marshal, eight generals were appointed to the rank of Vice-Marshal. In April 1997, on the 85th anniversary of Kim Il-sung's birthday, Kim Jong-il promoted 127 general and admiral grade officers. The following April he ordered the promotions of another 22 generals and flag officers. Along with these changes, many KPA officers were appointed to influential positions within the Korean Workers' Party. These promotions continue today, simultaneous with the celebration of Kim Il-sung's birthday and the KPA anniversary celebrations every April and since recently in July to honour the end of the Korean War. Under Kim Jong-il's leadership, political officers dispatched from the party monitored every move of a general's daily life, according to analysts similar to the work of Soviet political commissars during the early and middle years of the military establishment.
Today the KPA exercises full control of both the Politburo and the Central Military Commission of the WPK, the KPA General Political and General Staff Departments and the Ministry of Defence, all having KPA representatives with a minimum general officer rank. Following changes made during the 4th session of the 13th Supreme People's Assembly on 29 June 2016, the State Affairs Commission has overseen the Ministry of Defence as part of its systemic responsibilities. All members of the State Affairs Commission have membership status (regular or alternate) on the WPK Political Bureau.
Ground force formations
I Corps (Hoeyang County, Kangwon Province)
II Corps (Pyongsan County, North Hwanghae Province)
III Corps (Nampo, South Pyongan)
IV Corps (Haeju, South Hwanghae Province)
V Corps (Sepo County, Kangwon Province)
VII Corps (Hamhung, South Hamgyong Province)
Pyongyang Defense Command
XII Corps
IX Corps (Chongjin, North Hamgyong Province)
X Corps (Hyesan, Ryanggang Province)
XI Corps (Tokchon, South Pyongan Province)
Mechanised infantry divisions:
108th Division
425th Division
806th Division
815th Division
820th Tank Corps
Conscription and terms of service
North Korea has conscription for males for 10 years. Females are conscripted up until the age of 23. Article 86 of the North Korean Constitution states: "National defence is the supreme duty and honour of citizens. Citizens shall defend the country and serve in the armed forces as required by law."
KPA soldiers serve three years of military service in the KPA, which also runs its own factories, farms and trading arms.
Paramilitary organisations
The Young Red Guards are the youth cadet corps of the KPA for secondary level and university level students. Every Saturday, they hold mandatory 4-hour military training drills, and have training activities on and off campus to prepare them for military service when they turn 18 or after graduation, as well as for contingency measures in peacetime.
Under the Ministry of Social Security and the wartime control of the Ministry of Defence, and formerly the Korean People's Security Forces, the Korean People's Internal Security Forces (KPISF) forms the national gendarmerie and civil defence force of the KPA. The KPISF has its units in various fields like civil defence, traffic management, civil disturbance control, and local security. It has its own special forces units. The service shares the ranks of the KPA (with the exception of Marshals) but wears different uniforms.
Budget and commercial interests
The KPA's annual budget is approximately US$6 billion. In 2009, the U.S. Institute for Science and International Security reported that North Korea may possess fissile material for around two to nine nuclear warheads. The North Korean Songun ("Military First") policy elevates the KPA to the primary position in the government and society.
According to North Korea's state news agency, military expenditures for 2010 made up 15.8 percent of the state budget. Most analyses of North Korea's defence sector, however, estimate that defence spending constitutes between one-quarter and one-third of all government spending. According to the International Institute for Strategic Studies, North Korea's defence budget consumed some 25 percent of central government spending. In the mid-1970s and early 1980s, according to figures released by the Polish Arms Control and Disarmament Agency, between 32 and 38 percent of central government expenditures went towards defence.
North Korea sells missiles and military equipment to many countries worldwide. In April 2009, the United Nations named the Korea Mining and Development Trading Corporation (KOMID) as North Korea's primary arms dealer and main exporter of equipment related to ballistic missiles and conventional weapons. It also named Korea Ryonbong as a supporter of North Korea's military related sales.
Historically, North Korea has assisted a vast number of revolutionary, insurgent and terrorist groups in more than 62 countries. A cumulative total of more than 5,000 foreign personnel have been trained in North Korea, and over 7,000 military advisers, primarily from the Reconnaissance General Bureau, have been dispatched to some forty-seven countries. Some of the organisations which received North Korean aid include the Polisario Front, Janatha Vimukthi Peramuna, the Communist Party of Thailand, the Palestine Liberation Organization and the Islamic Revolutionary Guard Corps. The Zimbabwean Fifth Brigade received its initial training from KPA instructors. North Korean troops allegedly saw combat during the Libyan–Egyptian War and the Angolan Civil War. Up to 200 KPAF pilots took part in the Vietnam War, scoring several kills against U.S. aircraft. Two KPA anti-aircraft artillery regiments were sent to North Vietnam as well.
North Korean instructors trained Hezbollah fighters in guerrilla warfare tactics around 2004, prior to the Second Lebanon War. During the Syrian Civil War, Arabic-speaking KPA officers may have assisted the Syrian Arab Army in military operations planning and have supervised artillery bombardments in the Aleppo area.
Service branches
Ground Force
The Korean People's Army Ground Force (KPAGF) is the main branch of the Korean People's Army responsible for land-based military operations. It is the de facto army of North Korea.
Naval Force
The Korean People's Army Naval Force (KPANF) is organized into two fleets (West Fleet and East Fleet, the latter being the larger of the two) which, owing to the limited range and general disrepair of their vessels, are not able to support each other, let alone meet for joint operations. The East Fleet is headquartered at T'oejo-dong and the West Fleet at Nampho. A number of training, shipbuilding and maintenance units and a naval air wing report directly to Naval Command Headquarters at Pyongyang.
Air and Anti-Air Force
The Korean People's Army Air and Anti-Air Force (KPAAF) is also responsible for North Korea's air defence forces through the use of anti-aircraft artillery and surface-to-air missiles (SAM). While much of the equipment is outdated, the high saturation of multilayered, overlapping, mutually supporting air defence sites provides a formidable challenge to enemy air attacks.
Strategic Rocket Force
The Korean People's Army Strategic Rocket Force (KPASRF) is a major division of the KPA that controls North Korea's nuclear and conventional strategic missiles. It is mainly equipped with surface-to-surface missiles of Soviet and Chinese design, as well as locally developed long-range missiles.
Special Operation Force
The Korean People's Army Special Operation Force (KPASOF) is an asymmetric force with a total troop size of 200,000. Since the Korean War, it has continued to play a role of concentrating infiltration of troops into the territory of South Korea and conducting sabotage.
Capabilities
After the Korean War, North Korea maintained a powerful, but smaller military force than that of South Korea. In 1967 the KPA forces of about 345,000 were much smaller than the South Korean ground forces of about 585,000. North Korea's relative isolation and economic plight starting from the 1980s has now tipped the balance of military power into the hands of the better-equipped South Korean military. In response to this predicament, North Korea relies on asymmetric warfare techniques and unconventional weaponry to achieve parity against high-tech enemy forces. North Korea is reported to have developed a wide range of technologies towards this end, such as stealth paint to conceal ground targets, midget submarines and human torpedoes, blinding laser weapons, and probably has a chemical weapons program and is likely to possess a stockpile of chemical weapons. The Korean People's Army operates ZM-87 anti-personnel lasers, which are banned under the United Nations Protocol on Blinding Laser Weapons.
Since the 1980s, North Korea has also been actively developing its own cyber warfare capabilities. The secretive Bureau 121 – the elite North Korean cyber warfare unit – comprises approximately 1,800 highly trained hackers. In December 2014, the Bureau was accused of hacking Sony Pictures and making threats, leading to the cancellation of The Interview, a political satire comedy film based on the assassination of Kim Jong-un. The Korean People's Army has also made advances in electronic warfare by developing GPS jammers. Current models include vehicle-mounted jammers with a range of -. Jammers with a range of more than 100 km are being developed, along with electromagnetic pulse bombs. The Korean People's Army has also made attempts to jam South Korean military satellites. North Korea does not have satellites capable of obtaining satellite imagery useful for military purposes, and appears to use imagery from foreign commercial platforms.
Despite the general fuel and ammunition shortages for training, it is estimated that the wartime strategic reserves of food for the army are sufficient to feed the regular troops for 500 days, while fuel and ammunition – amounting to 1.5 million and 1.7 million tonnes respectively – are sufficient to wage a full-scale war for 100 days.
The KPA does not operate aircraft carriers, but has other means of power projection. Korean People's Air Force Il-76MD aircraft provide a strategic airlift capacity of 6,000 troops, while the Navy's sea lift capacity amounts to 15,000 troops. The Strategic Rocket Forces operate more than 1,000 ballistic missiles according to South Korean officials in 2010, although the U.S. Department of Defense reported in 2012 that North Korea has fewer than 200 missile launchers. North Korea acquired 12 Foxtrot class and Golf-II class missile submarines as scrap in 1993. Some analysts suggest that these have either been refurbished with the help of Russian experts or their launch tubes have been reverse-engineered and externally fitted to regular submarines or cargo ships. However GlobalSecurity reports that the submarines were rust-eaten hulks with the launch tubes inactivated under Russian observation before delivery, and the U.S. Department of Defense does not list them as active.
A photograph of Kim Jong-un receiving a briefing from his top generals on 29 March 2013 showed a list that purported to show that the military had a minimum of 40 submarines, 13 landing ships, 6 minesweepers, 27 support vessels and 1,852 aircraft.
The Korean People's Army operates a very large amount of equipment, including 4,100 tanks, 2,100 APCs, 8,500 field artillery pieces, 5,100 multiple rocket launchers, 11,000 air defence guns and some 10,000 MANPADS and anti-tank guided missiles in the Ground force; about 500 vessels in the Navy and 730 combat aircraft in the Air Force, of which 478 are fighters and 180 are bombers. North Korea also has the largest special forces in the world, as well as the largest submarine fleet. The equipment is a mixture of World War II vintage vehicles and small arms, widely proliferated Cold War technology, and more modern Soviet or locally produced weapons.
North Korea possesses a vast array of long-range artillery in shelters just north of the Korean Demilitarized Zone. It has been a long-standing cause for concern that a preemptive strike or retaliatory strike on Seoul using this arsenal of artillery north of the Demilitarized Zone would lead to a massive loss of life in Seoul. Estimates on how many people would die in an attack on Seoul vary. When the Clinton administration mobilised forces over the reactor at Yongbyon in 1994, planners concluded that retaliation by North Korea against Seoul could kill 40,000 people. Other estimates project hundreds of thousands or possibly millions of fatalities if North Korea uses chemical munitions.
Military equipment
Weapons
The KPA possess a variety of Chinese and Soviet sourced equipment and weaponry, as well as locally produced versions and improvements of the former. Soldiers are mostly armed with indigenous Kalashnikov-type rifles as the standard issue weapon. Front line troops are issued the Type 88, while the older Type 58 assault rifle and Type 68A/B have been shifted to rear echelon or home guard units.
A rifle of unknown nomenclature was seen during the 2017 Day of the Sun military parade, appearing to consist of a grenade launcher and a standard assault rifle, similar to the U.S. OICW or the South Korean S&T Daewoo K11.
North Korea generally designates rifles as "Type XX", similar to the Chinese naming system. On 15 November 2018, North Korea successfully tested a "newly developed ultramodern tactical weapon". Leader Kim Jong-un observed the test at the Academy of Defense Science and called it a "decisive turn" in bolstering the combat power of the North Korean army.
There is a Korean People's Army Military Hardware Museum located in Pyongyang that displays a range of the equipment used.
Chemical weapons
The U.S. Department of Defense believes North Korea probably has a chemical weapons program and is likely to possess a stockpile of such weapons.
Nuclear capabilities
North Korea has tested a series of different missiles, including short-, medium-, intermediate-, and intercontinental- range, and submarine-launched ballistic missiles. Estimates of the country's nuclear stockpile vary: some experts believe Pyongyang has between fifteen and twenty nuclear weapons, while U.S. intelligence believes the number to be between thirty and sixty. The regime conducted two tests of an intercontinental ballistic missile (ICBM) capable of carrying a large nuclear warhead in July 2017. The Pentagon confirmed North Korea's ICBM tests, and analysts estimate that the new missile has a potential range of and, if fired on a flatter trajectory, could be capable of reaching mainland U.S. territory.
Nuclear tests
On 9 October 2006, the North Korean government announced that it had unsuccessfully attempted a nuclear test for the first time. Experts at the United States Geological Survey and Japanese seismological authorities detected an earthquake with a preliminary estimated magnitude of 4.3 from the site in North Korea, proving the official claims to be true.
North Korea also went on to claim that it had developed a nuclear weapon in 2009. It is widely believed to possess a stockpile of relatively simple nuclear weapons. The IAEA has met Ri Je-son, the Director General of the General Department of Atomic Energy (GDAE) of North Korea, to discuss nuclear matters. Ri Je-son was also mentioned in this role in 2002 in a United Nations article.
On 3 September 2017, the North Korean leadership announced that it had conducted a nuclear test with what it claimed to be its first hydrogen bomb detonation. The detonation took place at an underground location at the Punggye-ri nuclear test site in North Hamgyong Province at 12:00 pm local time. South Korean officials claimed the test yielded 50 kilotons of explosive force, with many international observers claiming the test likely involved some form of a thermonuclear reaction.
2006 North Korean nuclear test
2009 North Korean nuclear test
2013 North Korean nuclear test
January 2016 North Korean nuclear test
September 2016 North Korean nuclear test
September 2017 North Korean nuclear test
Other
Tonghae Satellite Launching Ground
Ryanggang explosion
Yongbyon Nuclear Scientific Research Center
Songun
Asymmetric warfare
The launching of Kwangmyŏngsŏng-3 and Kwangmyŏngsŏng-3 Unit 2 in 2012.
Uniforms
KPA officers and soldiers are most often seen wearing a mix of olive green or tan uniforms. The basic dress uniform consists of a tunic and pants (white tunics for general officers in special occasions); female soldiers wear knee length skirts but can sometimes wear pants.
Caps or peaked caps, especially for officers (and sometimes berets for women), are worn in the spring and summer months, and a Russian-style fur hat (the ushanka) in winter. Variants of Disruptive Pattern Material, the Disruptive Pattern Combat Uniform (green), the ERDL pattern, M81 Woodland and Tigerstripe have also appeared in a few rare images of North Korean army officers and service personnel. In non-dress uniforms, a steel helmet (the North Korean-produced Type 40 helmet, a copy of the Soviet SSh-40) appears to be the most common headgear, and is sometimes worn with a camouflage cover.
Standard military boots are worn for combat, women wear low heel shoes or heel boots for formal parades.
Camouflage uniforms are slowly becoming more common in the KPA. During the April 15, 2012 parade, Kevlar helmets were displayed in certain KPA units and similar helmets are currently used by KPA special operations forces.
During the parade on 10 October 2020, a range of at least five new pixelated camouflage patterns and new soldiers' combat gear such as body armor, bulletproof helmets of all branches were shown for the first time. Even though it was difficult to tell the patterns apart from each other, two different green based designs, an arid camouflage design, blue camouflage design, and a two-color pixelated camouflage pattern for mountain and winter warfare were all observed. Also, the use of Multicam pattern uniforms by North Korean military personnel was first documented in 2020 during the same parade, although uniforms in this design may well have appeared in the armed forces inventory much earlier.
See also
April 25 Sports Club
Central Military Band of the Korean People's Army
Joson Inmingun
Korean conflict
Republic of Korea Armed Forces
Songun
Worker-Peasant Red Guards
Notes
References
Homer T. Hodge, "North Korea's Military Strategy", Parameters, Spring 2003, pp. 68–81.
April 2007. Carlisle: Strategic Studies Institute.
Further reading
External links
North Korea Military-Political Background
KPA Equipment Holdings
CIA World Factbook
KPA Journal
Military units and formations established in the 1940s
Workers' Party of Korea
History of the Workers' Party of Korea
Military wings of socialist parties
bn:উত্তর কোরিয়ার সামরিক বাহিনী |
9057237 | https://en.wikipedia.org/wiki/Neil%20Wiseman | Neil Wiseman | Neil Ernest Wiseman (19 May 1934 – 13 June 1995) was a British computer scientist. Wiseman's pioneering research in computer graphics began in 1965, and resulted in a number of inventions and patents. These included a pen-following screen menu, which anticipated the pop-up menu, and one of the first systems for distributed Computer Graphics. His work brought him three patents, over 70 research publications, and more than 40 students who gained PhDs. In 1986 the Computer Laboratory appointed him to a personal Readership in computer graphics.
Education and early life
Born in Cowlinge near Newmarket, Suffolk, Wiseman joined the Pye electronics company in Cambridge as an apprentice in 1950. From 1954 to 1957 he studied for a BSc (Eng) degree in electrical engineering at Queen Mary College, University of London. During this time he started working for the Mathematical Laboratory, Cambridge, during his vacations, e.g. on the construction of a high-speed photo-electric paper tape reader. His ability recognised, arrangements were made for him to spend two years at the University of Illinois to study for a master's degree in electrical engineering (awarded 1959). Here he worked as a research assistant in the Digital Computer Laboratory on the design of circuits for the new Illinois computer. On his return to Britain his call-up for National Service was deferred to enable him to take employment with Elliott Brothers (London) Ltd at Borehamwood, Hertfordshire – on behalf of the Ministry of Aviation. He worked for two years at Elliott Brothers as research engineer in charge of the advanced circuits and logical techniques group in the Data Processing Laboratory. It was here that he started working with tunnel diodes, which showed great promise as a high-speed technology.
Career
In 1961 after ten years of intermittent contact Wiseman joined the staff of the University Mathematical Laboratory, Cambridge, now the University of Cambridge Computer Laboratory, as Chief Engineer. He continued working with tunnel diodes and constructed a prototype store capable of running at 250 megahertz, a phenomenal speed for the time. The arrival of one of the world's first mini-computers, the DEC PDP-7 and its type 340 vector display, presented new challenges. Wiseman designed a high-speed data-link to connect this to the main Titan computer, which probably counts as the world's first distributed system. It proved a valuable research tool for work on computer aided design, both for mechanical components and for his own work on electronic circuits. The Rainbow integrated CAD system combined electronic design, computer graphics, data structures and the control of change in large bodies of data. He also began work on screen editors for text and later a television camera was connected to the PDP-7.
In 1970 Wiseman was approved for a PhD through the submission of published work and was appointed to a University Lectureship. He was immediately seconded to the Cambridge University Press, where he drew on his experience with the PDP-7 display in a project to design and implement a computerised typesetting system. Returning to the Computer Laboratory in 1973, he resumed his work on the Rainbow integrated CAD system with the new PDP-11 computer and Vector General display. He attracted a great number of PhD students who went on to academic posts around the world and to research laboratories in Britain and, especially, on the West Coast of the United States. In the 1970s he collaborated with David Kindersley MBE in the exploration of the mathematics underlying the aesthetics of lettering. Towards the end of 1977 he set up a consultancy company, Fendragon Ltd, with Kindersley and J. Harradine as directors (later joined by M.J. Jordan and P. Robinson), which operated in text processing and related areas.
Research in the Computer Laboratory developed with the Rainbow display project, which combined Wiseman's interests in electronic design and computer graphics. He ran the Diploma course in computer science, looked after general graduate admissions and played a key role in the establishment of the hardware laboratory for undergraduate practical work. He declined offers of chairs at other universities, preferring to remain in Cambridge. In 1983 he became a Fellow of Wolfson College and in 1986 a personal Readership in Computer Graphics was created for him. He died of cancer on 13 June 1995 after a year's illness.
References
1934 births
1995 deaths
British computer scientists
Members of the University of Cambridge Computer Laboratory
Fellows of Wolfson College, Cambridge
Alumni of Queen Mary University of London
People from Newmarket, Suffolk
Alumni of the University of Cambridge |
24597713 | https://en.wikipedia.org/wiki/Comprehensive%20Technologies%20International%2C%20Inc.%20v.%20Software%20Artisans%2C%20Inc. | Comprehensive Technologies International, Inc. v. Software Artisans, Inc. | Comprehensive Technologies International, Inc. v. Software Artisans, Inc., 3 F.3d 730 (4th Cir. 1993) was a case in which the U.S. Court of Appeals for the Fourth Circuit discussed legal tests for software copyright infringement, and ruled that trade secret misappropriation requires more than circumstantial evidence. The case also ruled on what terms may be reasonable and enforceable in non-compete agreements.
Background
Virginia-based Comprehensive Technologies International (CTI) primarily dealt with defense-related services. In 1988, it created a software group and expanded into electronic data interchange (EDI) with Claims Express, a program targeted at the medical industry, and EDI Link, designed to create and use a range of forms.
In February 1991, with EDI Link incomplete, seven CTI employees left the company and formed Software Artisans, Inc. (SA) in April 1991. Software Artisans created a program called Transend which also used EDI transmission to send forms. Transend was developed and marketed by July 1991.
CTI sued Software Artisans and its former employees Marshall Dean Hawkes, Igor Filippides, Randall Sterba, Richard Hennig, David Bixler, Alvan Bixler, and Mark Hawkes for copyright infringement, trade secret misappropriation, breach of confidentiality, and breach of contract. CTI also alleged that Dean Hawkes violated his non-compete agreement.
The district court ruled for the defendants on all counts. CTI appealed, and the case was argued in the Fourth Circuit Court of Appeals on March 30, 1993.
Copyright infringement
The district court ruled against CTI's claims of copyright infringement, finding that Transend was not a literal copy of either of CTI's software programs, nor was it substantially similar to either in structure, sequence, and organization. CTI argued that the court should instead have applied the “abstraction-filtration-comparison” test used by the Second Circuit. The Fourth Circuit ruled that CTI did not meet its burden of proof because it did not point to evidence from the trial that would have proven its point; the district court's findings on the copyright infringement claims were therefore affirmed for the defendants.
Trade secret misappropriation
CTI's claims of trade secret misappropriation were also denied by the district court due to insufficient evidence. The district court found that CTI's claimed trade secrets did not fulfill the requirements of deriving independent economic value from not generally being known and not being readily ascertainable. In addition, the court concluded that there was no evidence that Software Artisans had copied CTI's claimed secret, which the court equated with misappropriation's requirement of "use" of the secret.
The evidence of misappropriation presented was circumstantial: a short development time and an absence of documentation of the software design. Software Artisans' programmers testified that they preferred to work on a whiteboard and annotate their code rather than produce formal documentation, and an expert witness testified that it was common for small software companies to neglect formal documentation. This testimony was sufficient for the court to discount the kind of circumstantial evidence that is common in such cases.
Non-compete agreement
The district court declined to enforce Dean Hawkes' covenant not to compete, reasoning that it was broader than necessary according to Virginia's three-part test for assessing whether such restrictive covenants are reasonable: no greater restraint than is necessary from the employer's perspective, not unduly harsh from the employee's perspective, and reasonable in terms of sound public policy. On appeal, the court cited similar restrictions that were not deemed unreasonable in scope, and noted Hawkes' thorough knowledge of CTI's confidential information. With that ruling vacated, the decision on whether Hawkes breached his agreement was remanded to the district court.
See also
Uniform Trade Secrets Act
References
External links
United States computer case law
United States copyright case law
United States Court of Appeals for the Fourth Circuit cases
1993 in United States case law
Trade secret case law |
31700347 | https://en.wikipedia.org/wiki/Bodhi%20Linux | Bodhi Linux | Bodhi Linux is a light-weight Linux distribution based on Ubuntu that uses an Enlightenment DR17-based fork called Moksha window manager. The philosophy for the distribution is to provide a minimal base system so that users can populate it with the software they want. Thus, by default it only includes software that is essential to most Linux users, including a file browser (PCManFM), a web browser (GNOME Web) and a terminal emulator (Terminology). It does not include software or features that its developers deem unnecessary. To make populating systems with software easy, Bodhi Linux developers maintain an online database of lightweight software that can be installed in one click via apturl.
Performance
System requirements include 512 MB of RAM, 5 GB of hard disk space, and a 500 MHz processor. 32-bit processors without PAE capability are supported on the same terms as PAE-enabled ones; the only difference between the Bodhi versions is that an older kernel is used.
By using an Enlightenment DR17-based fork called Moksha Desktop, Bodhi provides rich desktop effects and animations that do not require high-end computer hardware. The rationale for forking from DR17 was its established performance and functionality, while, according to Jeff Hoogland, E19 possessed "optimizations that break existing features users enjoy and use". The Enlightenment window manager, as well as the tools developed specifically for Bodhi Linux, were written in the C programming language and Python.
Support
Bodhi Linux is derived from the Ubuntu long term support releases (14.04, 16.04, 18.04...), so support follows the same pattern: Security bug fixes are released on a daily basis throughout the five-year period. As opposed to Ubuntu, Bodhi has no short-term support release. An installed Bodhi Linux can be upgraded to the latest state via command line or package manager.
Release cycle
Releases are numbered x.y.z, where
x represents a major release,
y represents an update (or point) release and
z represents a bug fix release.
The major release (x.y.z; e.g. version 2.y.z > 3.0.0) follows the Ubuntu long term support with a delay of a few months. The goal is to deliver a new major release in July every other year following the new Ubuntu LTS, which is expected in April. New functionality is not added after the release.
The update/point release (x.y.z; e.g. version 2.3.z > 2.4.0) is similar to point releases in Ubuntu (12.04.1, 12.04.2, ...). Formerly more frequent, these releases are used for delivering new software versions and other improvements which are not related to security. Between 2011 and 2013 there was also ARM support.
Beginning with version 2.4.0, the update frequency was reduced to three times a year: every four months - in January, May and September for now - a new update should come out. Bodhi Linux 2.4.0 (planned for release in August 2013) appeared a little late, in mid-September, when it was ready. A bug fix release (x.y.z; e.g. version 2.4.0 > 2.4.1) is meant for correcting errors in the default configuration.
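As a small illustration (hypothetical code, not part of Bodhi Linux itself), the x.y.z numbering scheme described above can be classified programmatically in Python:

```python
def release_type(old, new):
    """Classify a Bodhi-style x.y.z version bump (illustration only)."""
    ox, oy, oz = (int(part) for part in old.split("."))
    nx, ny, nz = (int(part) for part in new.split("."))
    if nx > ox:
        return "major release"            # e.g. 2.4.0 -> 3.0.0
    if ny > oy:
        return "update (point) release"   # e.g. 2.3.1 -> 2.4.0
    if nz > oz:
        return "bug fix release"          # e.g. 2.4.0 -> 2.4.1
    return "no change"

assert release_type("2.4.0", "3.0.0") == "major release"
assert release_type("2.4.0", "2.4.1") == "bug fix release"
```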
The Bodhi Linux 3.0.0 branch was released in February 2015 with an additional legacy version for older hardware.
R_Pi Bodhi Linux
The R_Pi Bodhi Linux build was built directly on top of Raspbian, with compilation settings adjusted to produce optimized "hard float" (armhf) code for the Raspberry Pi. The hard-float application binary interface of the ARM11, a 32-bit RISC microprocessor implementing the ARMv6 architecture, provides large performance gains for many use cases, particularly for applications that make heavy use of floating-point arithmetic, compared with the earlier, less efficient "soft float" settings in which floating-point operations are simulated in software. However, producing the build required significant effort to port elements of Debian Wheezy to ARMv6 CPUs, as official armhf builds require ARMv7. Because of the effort needed to build a working release, the ARMHF release is no longer officially supported.
Reception
Jack Germain from LinuxInsider wrote a positive review of Bodhi Linux 5.0.0, noting that Bodhi Linux is "elegant and lightweight", and that this distribution "can be a productive computing platform".
See also
Enlightenment Foundation Libraries
Enlightenment (window manager)
Minimalism (computing)
References
External links
Bodhi Linux documentation/wiki
Bodhi Linux at SourceForge.net
Ubuntu derivatives
X86-64 Linux distributions
Linux distributions |
4061679 | https://en.wikipedia.org/wiki/Pieter%20Van%20den%20Abeele | Pieter Van den Abeele | Pieter Van den Abeele is a computer programmer and the founder of the PowerPC version of Gentoo Linux, a distribution of the Linux operating system. He also founded Gentoo for OS X, for which he received a scholarship from Apple Computer. In 2004 Pieter was invited to the OpenSolaris pilot program and assisted Sun Microsystems with building a development ecosystem around Solaris. Pieter was nominated for the OpenSolaris Community Advisory Board and managed a team of developers to make Gentoo available on the Solaris operating system as well. Pieter is a co-author of the Gentoo handbook.
The teams managed by Pieter Van den Abeele have shaped the PowerPC landscape with several "firsts". Gentoo/PowerPC was the first distribution to introduce PowerPC Live CDs. Gentoo also beat Apple to releasing a full 64-bit PowerPC userland environment for the IBM PowerPC 970 (G5) processor.
His Gentoo-based Home Media and Communication System, based on a Freescale Semiconductor PowerPC 7447 processor won the Best of Show award at the inaugural 2005 Freescale Technology Forum in Orlando, Florida. Pieter is also a member of the Power.org consortium and participates in committees and workgroups focusing on disruptive business plays around the Power Architecture.
References
People in information technology
Gentoo Linux people
Living people
Year of birth missing (living people) |
3005034 | https://en.wikipedia.org/wiki/Center%20for%20Information%20Technology | Center for Information Technology | The Center for Information Technology (CIT) is one of the 27 institutes and centers that compose the National Institutes of Health (NIH), an agency of the U.S. Department of Health and Human Services (HHS), a cabinet-level department of the Executive Branch of the United States Federal Government. Originating in 1954 as a central processing facility in the NIH Office of the Director, the Division of Computer Research and Technology was established in 1964, merging in 1998 with the NIH Office of the CIO and the NIH Office of Research Services Telecommunications Branch to form a new organization, the CIT.
Mission
CIT provides and manages information technology and advances computational scientific development. CIT supports NIH and other Federal research and management programs with administrative and scientific computing. In addition to providing bioinformatics support and scientific tools and resources, CIT provides enterprise technological and computational support for the NIH community; services include networking, telecommunications, application development and hosting services, technical support, computer training, IT acquisition, and IT security.
CIT's activities include the following:
engages in collaborative research and provides collaborative support to NIH investigators in the area of computational bioscience;
provides information systems and networking services;
provides scientific and administrative computing facilities;
identifies new computing technologies with application to biomedical research;
creates, purchases, and distributes software applications;
provides NIH staff with computing information, expertise, and training;
provides data-processing and computing facilities, integrated telecommunications data networks, and services to the U.S. Department of Health and Human Services (DHHS) and other Federal agencies;
serves as a data center to HHS and other Federal agencies; and
develops, administers, and manages NIH systems and provides consulting services to NIH Institutes and Centers (ICs), in support of administrative and business applications.
History and organization
Key highlights of CIT are listed at the following link:
http://www.nih.gov/about/almanac/organization/CIT.htm#events
Past Directors
Past NIH CIOs
In January 2008, in an effort to foster efficiencies, the Office of the Chief Information Officer (OCIO) was established in the NIH Office of the Director. The functions of the CIO, formerly part of NIH Center for Information Technology (CIT), were transferred from CIT to the new OCIO. OCIO develops IT-related strategy, services, and policy to ensure that all NIH IT infrastructure is benchmarked against industry standards. CIT functions as the operating arm of the CIO, providing IT expertise for OCIO program activities and providing enterprise IT services and research and administrative support to all of NIH.
The CIT Office of the Director (OD) plans, directs, coordinates, and evaluates the Center's programs, policies, and procedures and provides analysis and guidance in the development of systems for the use of IT techniques and equipment in support of NIH programs. The OD includes the Executive Office (EO), which provides CIT administrative and business management services in support of CIT programs. The EO provides oversight of CIT's administrative policies and procedures; provides financial management, including development and oversight of the CIT budget; advises on human resources planning and management; and performs strategic planning, operational planning, and performance measurement.
Office of the Director (OD)
Andrea T. Norris, Director, CIT
Stacie Alboum, Deputy Director, CIT
Xavier Soosai, Director, IT Services
Andy Baxevanis, Director of Computational Biology for the NIH Intramural Research Program
Office of IT Services Management
Director: Xavier Soosai, MBA
The Office of IT Services Management directs service areas that provide NIH with a variety of IT services such as user support, identity and access management, high-performance computing, and network cabling. The service areas are:
Business Application Services; Facilities and Infrastructure Support Services; High Performance Computing Services; Hosting and Storage Services; Identity and Access Management Services; IT Support Services; Network Services; Operations Management Services; Service Desk Services; and Unified Communication and Collaboration Services.
Notes and references
NIH Almanac - Organization - Center for Information Technology.
CIT Organization
See also
National Institutes of Health
External links
NIH Home Page
Department of Health and Human Services
CIT Home Page
Division of Network Systems and Telecommunications
Division of Computer System Services
Division of Customer Support
Division of Enterprise and Custom Applications
National Institutes of Health |
30948822 | https://en.wikipedia.org/wiki/FreedomBox | FreedomBox | FreedomBox is a free software home server operating system based on Debian, backed by the FreedomBox Foundation.
Launched in 2010, FreedomBox has grown from a software system to an ecosystem including a DIY community as well as some commercial products.
History
The project was announced by Eben Moglen, Professor of Law at Columbia Law School, in a speech called "Freedom in the Cloud" at the New York ISOC meeting on February 2, 2010. In this speech, Moglen predicted the damage that Facebook would do to society: "Mr. Zuckerberg has attained an unenviable record: he has done more harm to the human race than anybody else his age." In direct response to the threat posed by Facebook in 2010, Moglen argued that FreedomBox should provide the foundation for an alternative Web. As Steven J. Vaughan-Nichols notes, "[Moglen] saw the mess we were heading toward almost 10 years ago ... That was before Facebook proved itself to be totally incompetent with security and sold off your data to Cambridge Analytica to scam 50 million US Facebook users with personalized anti-Clinton and pro-Trump propaganda in the 2016 election."
On February 4, 2011, Moglen formed the FreedomBox Foundation to become the organizational headquarters of the project, and on February 18, 2011, the foundation started a campaign to raise $60,000 in 30 days on the crowdfunding service, Kickstarter. The goal was met on February 22, and on March 19, 2011, the campaign ended after collecting $86,724 from 1,007 backers. The early developers aimed to create and preserve personal privacy by providing a secure platform for building decentralized digital applications. They targeted the FreedomBox software for plug computers and single-board computers that can easily be located in individual residences or offices. After 2011, the FreedomBox project continued to grow under different leadership.
In 2017, the project was so successful that "the private sector global technology company ThoughtWorks had hired two developers in India to work on FreedomBox full-time." The FreedomBox project now has a software ecosystem of its own, with contributions from over 60 developers throughout the project's history.
In 2019, the FreedomBox Foundation announced that the first commercial FreedomBox product would be sold by Olimex, a hardware manufacturer.
FreedomBox and Debian
FreedomBox is a Debian Pure Blend. All applications on FreedomBox are installed as Debian packages. The FreedomBox project itself distributes its software through Debian repositories.
Depending on Debian for software maintenance is one of the reasons why FreedomBox outlasted many similar projects that used manual installation scripts instead. FreedomBox comes with automatic software updates powered by Debian.
Hardware neutrality
FreedomBox is designed to be hardware neutral: Its developers aim for it to be installable on almost any computer hardware. One of the benefits of being a Debian Pure Blend is that FreedomBox inherits the diverse hardware compatibility of Debian.
As of April 2019, FreedomBox is packaged in custom operating system images for 11 single-board computers. The hardware currently put forward for use with the FreedomBox software is explained on the Hardware page. OSHW designs are preferred, such as the Olimex A20 OLinuXino Lime 2 or the BeagleBone Black. Closed-source boards like the DreamPlug, Cubietruck and the Raspberry Pi are possible options, while more are on the way. There is also a VirtualBox image. FreedomBox can additionally be installed over a clean Debian installation.
Commercial product
On April 22, 2019, the FreedomBox Foundation announced the launch of sales of the first commercial FreedomBox product. The "Pioneer Edition FreedomBox Home Server Kit" is being produced and sold by Olimex, a company which creates Open Source Hardware. Technology journalist Steven J. Vaughan-Nichols commented on the FreedomBox product launch.
The product is designed to make it easier for laypeople to host their own servers. Technology writer Glyn Moody noted that "The FreedomBox project is extremely valuable, not least as a proof that distributed systems can be built. The new commercial solution is particularly welcome for lowering the barriers to participation yet further."
See also
arkOS
Commotion Wireless
MaidSafe
Mesh networking
Personal data manager
PirateBox (similar project to FreedomBox)
Yunohost (also similar project to FreedomBox)
Wireless mesh network
References
Press reviews
"Eben Moglen Is Reshaping Internet With a Freedom Box — The New York Times." [Online]. Available: https://www.nytimes.com/2011/02/16/nyregion/16about.html. [accessed 2016-10-06].
"Fear of Repression Spurs Scholars and Activists to Build Alternate Internets — The Chronicle of Higher Education." [Online]. Available: http://www.chronicle.com/article/fear-of-repression-spurs/129049. [accessed 2016-10-06].
"Gigaom | When laws fail: can technology like Freedom Box shield us from PRISM?" [Online]. Available: https://gigaom.com/2013/06/17/when-laws-fail-can-technology-like-freedom-box-shield-us-from-prism/. [accessed 2016-10-06].
"Good News For Spies and Dictators: 'FreedomBox' Is in Danger of an Early Death | WIRED." [Online]. Available: https://www.wired.com/2012/06/freedombox/. [accessed 2016-10-06].
"Google Fiber Continues Awful ISP Tradition of Banning 'Servers' | Electronic Frontier Foundation." [Online]. Available: https://www.eff.org/deeplinks/2013/08/google-fiber-continues-awful-isp-tradition-banning-servers. [accessed 2016-10-06].
"Internet access and privacy with FreedomBox | Opensource.com." [Online]. Available: https://opensource.com/life/15/12/freedombox. [accessed 2016-10-06].
"Is Privacy Protection 'More Awesome Than Money'? : All Tech Considered : NPR." [Online]. Available: https://www.npr.org/sections/alltechconsidered/2014/12/06/369012826/is-privacy-protection-more-awesome-than-money. [accessed 2016-10-06].
"Jacob Appelbaum: NSA aims for absolute surveillance | ITWeb." [Online]. Available: http://www.itweb.co.za/index.php?id=134825. [accessed 2016-10=06].
"Mi connetto, lontano da Internet: la rivoluzione del mesh networking — Repubblica.it." [Online]. Available: http://www.repubblica.it/tecnologia/2014/05/12/news/mi_connetto_lontano_da_internet_la_rivoluzione_del_mesh_networking-85929965/?refresh_ce. [accessed 2016-10=06].
"This open source private server is as easy to use as a smartphone and can ease your privacy concerns | Latest News & Updates at Daily News & Analysis." [Online]. Available: http://www.dnaindia.com/scitech/report-this-open-source-private-server-is-as-easy-to-use-as-a-smartphone-and-can-ease-your-privacy-concerns-2184605. [accessed 2016-10=06].
Free software
Debian-based distributions
Social networking services
Non-profit technology
Kickstarter-funded software
Linux distributions |
638424 | https://en.wikipedia.org/wiki/Scott%20S.%20Sheppard | Scott S. Sheppard | Scott Sander Sheppard (born 1977) is an American astronomer and a discoverer of numerous moons, comets and minor planets in the outer Solar System.
He is an astronomer in the Department of Terrestrial Magnetism at the Carnegie Institution for Science in Washington, DC. He attended Oberlin College as an undergraduate, and received his bachelor's degree in physics with honors in 1998. Starting as a graduate student at the Institute for Astronomy at the University of Hawaii, he was credited with the discovery of many small moons of Jupiter, Saturn, Uranus, and Neptune. He has also discovered the first known trailing Neptune trojan, , the first named leading Neptune trojan, 385571 Otrera, and the first high-inclination Neptune trojan, . These discoveries showed that the Neptune trojan objects are mostly on highly inclined orbits and are thus likely small bodies captured from elsewhere in the Solar System.
The main-belt asteroid 17898 Scottsheppard, discovered by LONEOS at Anderson Mesa Station in 1999, was named in his honor.
Discoveries
Sheppard was the lead discoverer of the object with the most distant orbit known in the Solar System, (nicknamed Biden). In 2014, the similarity of the orbit of to other extreme Kuiper belt object orbits led Sheppard and Trujillo to propose that an unknown Super-Earth mass planet (2–15 Earth masses) in the outermost Solar System beyond 200 AU and up to 1500 AU is shepherding these smaller bodies into similar orbits (see Planet X or Planet Nine). The extreme trans-Neptunian objects and , announced in 2016 and co-discovered by Sheppard, further show a likely unknown massive planet exists beyond a few hundred AU in the Solar System, with being the first known high semi-major axis and high perihelion object anti-aligned with the other known extreme objects. In 2018, the announcement of the high perihelion inner Oort cloud object 541132 Leleākūhonua (nicknamed "The Goblin") by Sheppard et al., being only the third known after and Sedna, further demonstrated that a super-Earth planet in the distant Solar System likely exists, as Leleākūhonua has many orbital similarities to the two other known inner Oort cloud objects.
Most notable discoveries
Sheppard has been involved in the discovery of many small Solar System bodies such as trans-Neptunian objects, centaurs, comets and near-Earth objects.
Three comets are named after him which are Sheppard-Trujillo (C/2014 F3), Sheppard-Tholen (C/2015 T5) and comet Trujillo-Sheppard (P/2018 V5).
The possible dwarf planets discovered by Sheppard are 471143 Dziewanna, , , , and .
In 2018, Sheppard was the lead discoverer of the most distant observed object in our solar system and first object observed beyond 100 AU, dwarf planet (nicknamed Farout), which is around 120 AU from the Sun.
He discovered a minor-planet moon around likely dwarf planet .
He is also a co-discoverer of a minor-planet moon orbiting the binary trans-Neptunian object 341520 Mors–Somnus.
Among the numerous named irregular moons of the major planets in whose discovery he has been involved are:
Jupiter
Discovered moons of Jupiter (full list):
Themisto (2000), first seen but lost in 1975 by Charles Kowal
Harpalyke (2000)
Praxidike (2000)
Chaldene (2000)
Isonoe (2000)
Erinome (2000)
Taygete (2000)
Kalyke (2000)
Megaclite (2000)
Iocaste (2000)
Dia (2000)
Euporie (2001)
Orthosie (2001)
Euanthe (2001)
Thyone (2001)
Hermippe (2001)
Pasithee (2001)
Aitne (2001)
Eurydome (2001)
Autonoe (2001)
Sponde (2001)
Kale (2001)
Arche (2002)
Eukelade (2003)
Helike (2003)
Aoede (2003)
Hegemone (2003)
Kallichore (2003)
Cyllene (2003)
Mneme (2003)
Thelxinoe (2003)
Carpo (2003)
Kore (2003)
Herse (2003)
S/2003 J 2 (2003)
Eupheme (2003)
S/2003 J 4 (2003)
Eirene (2003)
S/2003 J 9 (2003)
S/2003 J 10 (2003)
S/2003 J 12 (2003)
Philophrosyne (2003)
S/2003 J 16 (2003)
Jupiter LV (2003)
Jupiter LXI (2003)
S/2003 J 23 (2003)
S/2003 J 24 (2003)
Jupiter LXXII (2011)
Jupiter LVI (2011)
Jupiter LIV (2016)
Valetudo (2016)
Jupiter LIX (2017)
Jupiter LXIII (2017)
Jupiter LXIV (2017)
Pandia (2017)
Jupiter LXVI (2017)
Jupiter LXVII (2017)
Jupiter LXVIII (2017)
Jupiter LXIX (2017)
Jupiter LXX (2017)
Ersa (2018)
Saturn
Discovered moons of Saturn (full list):
Narvi (2003)
Fornjot (2004)
Farbauti (2004)
Aegir (2004)
Bebhionn (2004)
Hati (2004)
Bergelmir (2004)
Fenrir (2004)
Bestla (2004)
Kari (2004)
S/2004 S 7 (2004)
S/2004 S 12 (2004)
S/2004 S 13 (2004)
S/2004 S 17 (2004)
Hyrrokkin (2006)
Loge (2006)
Surtur (2006)
Skoll (2006)
Greip (2006)
Jarnsaxa (2006)
S/2006 S 1 (2006)
S/2006 S 3 (2006)
Tarqeq (2007)
S/2007 S 2 (2007)
S/2007 S 3 (2007)
Saturn LIV (2019)
S/2004 S 21 (2019)
Saturn LV (2019)
Saturn LVI (2019)
S/2004 S 24 (2019)
Saturn LVII (2019)
Saturn LVIII (2019)
Saturn LIX (2019)
S/2004 S 28 (2019)
Saturn LX (2019)
Saturn LXI (2019)
S/2004 S 31 (2019)
Saturn LXII (2019)
Saturn LXIII (2019)
Saturn LXIV (2019)
Saturn LXV (2019)
S/2004 S 36 (2019)
S/2004 S 37 (2019)
Saturn LXVI (2019)
S/2004 S 39 (2019)
Uranus
Discovered moons of Uranus (full list):
Margaret (2003)
Ferdinand (2003), first seen but lost in 2001 by Holman et al.
Neptune
Discovered moons of Neptune (full list):
Psamathe (2003)
See also
References
External links
Scott Sheppard's web site, Carnegie Institution for Science
Scott S. Sheppard – Curriculum Vitae, Carnegie Institution for Science
1977 births
American astronomers
Planetary scientists
Discoverers of moons
Discoverers of trans-Neptunian objects
Living people |
2148947 | https://en.wikipedia.org/wiki/Elva%20%28car%20manufacturer%29 | Elva (car manufacturer) | Elva was a sports and racing car manufacturing company based in Bexhill, then Hastings and Rye, East Sussex, United Kingdom. The company was founded in 1955 by Frank G. Nichols. The name comes from the French phrase elle va ("she goes").
Racing cars
Frank Nichols's intention was to build a low-cost sports/racing car, and a series of models were produced between 1954 and 1959.
The original model, based on the CSM car built nearby in Hastings by Mike Chapman, used Standard Ten front suspension rather than Ford swing axles, and a Ford Anglia rear axle with an overhead-valve-conversion of a Ford 10 engine. About 25 were made. While awaiting delivery of the CSM, Nichols finished second in a handicap race at Goodwood on 27 March 1954, driving a Lotus. "From racing a Ford-engined CSM sports car in 1954, just for fun but nevertheless with great success, Frank Nichols has become a component manufacturer. The intermediate stage was concerned with the design of a special head, tried in the CSM and the introduction of the Elva car which was raced with success in 1955." The cylinder head for the 1,172 cc Ford engine, devised by Malcolm Witts and Harry Weslake, featured overhead inlet valves.
Mk I to III
On 22 May 1955 Robbie Mackenzie-Low climbed Prescott in the sports Elva Mk I to set the class record at 51.14 sec. Mackenzie-Low also won the Bodiam Hill Climb outright at the end of the season.
The 1956 Elva MK II works prototype, registered KDY 68, was fitted with a Falcon all-enveloping fibreglass bodyshell. Nichols developed the Elva Mk II from lessons learnt in racing the prototype: "That car was driven in 1956 races by Archie Scott Brown, Stuart Lewis-Evans and others." The Elva Mk II appeared in 1957: "Main differences from the Mark I are in the use of a De Dion rear axle as on the prototype, but with new location, inboard rear brakes, lengthened wheelbase, and lighter chassis frame."
The Elva cars were offered and raced with the 1,100 cc Coventry-Climax FWA engine as standard but went through various bodywork and suspension changes up to the Mark III of 1958.
Mk IV and V
Carl Haas, from Chicago, was an Elva agent serving the midwest of the United States from the mid-1950s through the 1960s. Haas was invited to England to drive an Elva Mk III in the Tourist Trophy at Goodwood on 13 September 1958, where he finished twelfth overall. Also in that 23rd Tourist Trophy race was the new Mark IV model driven by Ian Burgess and Robbie Mackenzie-Low. Stuart Lewis-Evans drove the same works car, registered MBW 616, to fastest time of the day at the Bodiam Hill Climb in East Sussex on 11 October 1958. Tragically, Lewis-Evans lost his life just two weeks later through injuries sustained at the Moroccan Grand Prix.
As far as the design of the new Mark IV was concerned, in the words of Carl Haas "The major change is an all-new independent rear suspension utilizing low-pivot swing axles. The body is entirely new with close attention to aerodynamics and a reduced frontal area. It's a big step from the Mk III. Finally, Elva has an 1100cc car potentially better than the Lotus. They've moved a lot of weight off the front wheels by moving the engine back." The Mark IV was also the first Elva with a tubular spaceframe chassis and had an aluminium under tray riveted to the chassis providing rigidity and strength.
At the Sebring 12 Hours sports car race in March 1959 the No. 48 Elva Mark IV driven by Frank Baptista, Art Tweedale and Charley Wallace finished first in Class G, and 19th overall. Another works Mark IV, No. 49 driven by Burdette Martin, Chuck Dietrich and Bill Jordan, took second place in Class G completing an excellent outing for the new 1959 season model.
A week or so later saw the first UK outing in 1959 for the Elva works team. Three Mark IV cars took part in the Chichester Cup at the Goodwood Easter Meeting with Scots racer Tom Dickson starting on the front row alongside three of the new Lola Mk1s. The Lolas dominated the race taking a 1-2-3 but the Elva Mark IV of Les Leston finished a decent 7th with Dickson just behind and John Peters, an American amateur racer, a few places further back in 11th place. Peters later exported his Mark IV to his home in Los Angeles and continued to race the car in Californian events, including attempting to qualify for the 200-mile Los Angeles Times Grand Prix for sports cars held at Riverside International Raceway in late 1959.
At the 11th International Trophy meeting at Silverstone on 2 May 1959 Tom Dickson finished a creditable 3rd in the 1,100cc sports racing event, sandwiched between the works Lola Mk1s in first and second and the Lotus Eleven of Peter Arundell in fourth. A second Mark IV, that of experienced amateur racer Cedric Brierley, came in 5th.
Further success came on 21 June 1959 when Arthur Tweedale and Bob Davis won the Marlboro Six Hour Endurance Race in Maryland driving the No. 37 Elva Mark IV. Art Tweedale repeated the win in the Marlboro Six Hours in 1960. Teamed with Ed Costley he covered 337.75 mile, this time in an Elva Mark V sports car. Introduced mid-way through the 1959 season, the Mark V was the final iteration of the Elva front-engined sports racing car and differed from the Mark IV only through some minor tweaks to the rear suspension and revised bodywork.
Elva sports racers featured again at Goodwood in the 24th Tourist Trophy race held on 5 September 1959. A Mark IV driven by John Brown and Chris Steele finished the race in overall 13th place. However, marque honours were taken by the Mk V driven to 3rd in the 1,100 cc class and 9th overall by Mike McKee and Cedric Brierley.
Although ultimately outclassed by the similarly-engined Lola Mk1, the Elva Mark IV and Mark V models were short-lived but relatively successful models in the highly competitive late 1950s 1,100 cc sports/racing class. In the period up to the end of 1960, aside from one notable event detailed below, they were only ever raced in serious events with the modestly-powered but efficient 1,098cc Coventry-Climax FWA engine fitted with SU carburettors. However, their lightweight construction, innovative suspension and good aerodynamics made them serious competition.
The majority of Mark IV and Mark V cars built were exported to the USA and raced successfully by both amateur and professional racers alike. One notable occasion saw Burdette Martin's Elva MkIV fitted with a 1,475cc Coventry-Climax FPF F2 engine driven with some success by Ed Crawford in Round 8 of the 1959 USAC Road Racing Championship at Meadowdale. Crawford was a renowned Porsche and Briggs Cunningham pilot and took the Elva Mk IV to an emphatic win, lapping the field in the 1,500 cc qualifying heat against some impressive opposition.
On a slightly less serious note, one of the US-domiciled Mk IV cars ended up featuring in the supporting cast of the Elvis Presley movie 'Viva Las Vegas' although a later Mark VI sports racer played a more prominent role as Elvis' race car.
The last Mk V chassis won a number of important races in the US midwest driven by Dick Buedingen, including the 1961 Elkhart Lake 500 teamed with Carl Haas. At this time Elva Cars Limited was operating from premises at Sedlescombe Road North, Hastings, Sussex, England.
Mk VI, VII and VIII/VIIIS
After financial problems caused by the failure of the US distributor, Frank Nichols started a new company in Rye, Sussex in 1961 to continue building racing cars. The Elva Mk VI rear-engined sports car, still sticking with 1,100 cc Coventry Climax power, made its competition debut at Brands Hatch on Boxing Day, 1961, driven by Chris Ashmore, finishing second to the three-litre Ferrari of Graham Hill. The car was designed by Keith Marsden.
On 8 September 1963, Bill Wuesthoff and Augie Pabst won the Road America 500, round seven of the United States Road Racing Championship, at Elkhart Lake, Wisconsin driving an Elva Mk.7-Porsche. "The Elva-Porsche is based on the Mark VII Elva, but redesigned aft of the front section to take the 1,700 cc Porsche air-cooled flat-four unit and its horizontal cooling fan."
Edgar Barth won the opening round of the European Hill Climb Championship on 7 June 1964, at Rossfeld in southern Germany in an Elva-Porsche flat-eight sports car. The cars were placed throughout the seven-round series with Herbert Muller winning at the final round at Sierre Montana Crans in Switzerland on 30 August 1964.
Around 1964-1966 Elva made a very successful series of Mk8 sports racers mostly with 1.8 litre BMW engines (modified from the 1.6 litre by Nerus) and some with 1.15 litre Holbay-Ford engines. The Mk8 had a longer wheelbase and wider track compared to the Mk7, which was known for difficult handling due to a 70-30 weight bias to the rear. Following the success of the McLaren in sportscar racing, Elva became involved in producing cars for sale to customers: "Later a tie-up with Elva and the Trojan Group was arranged and they took over the manufacture of the McLaren sports/racer, under the name McLaren-Elva-Oldsmobile." At the 1966 Racing Car Show, held in London in January, Elva exhibited two sports racing cars – the McLaren-Elva Mk.II V8 and the Elva-BMW Mk. VIIIS. The McLaren-Elva was offered with the option of Oldsmobile, Chevrolet or Ford V8 engines. The Elva-BMW Mk. VIIIS was fitted with a rear-mounted BMW two-litre four-cylinder OHC engine.
Luki Botha campaigned an Elva-Porsche in southern Africa from 1966.
Single Seater
Elva produced a single-seater car for Formula Junior events, the FJ 100, initially supplied with a front-mounted B.M.C. 'A' series engine in a tubular steel chassis. "Elva Cars, Ltd., new Formula Junior powered by an untuned BMC 'A' Series 948cc engine. The price of this 970 lb. car is $2,725 in England. Wheelbase: 84", tread: 48", brake lining area: 163" sq. The 15" wheels are cast magnesium. Independent suspension front and rear with transverse wishbones, coil springs, and telescopic shock absorbers. The car is 12 feet, four inches long." Bill de Selincourt won a race at Cadours, France, in an Elva-B.M.C. FJ on September 6, 1959. Nichols switched to a two-stroke DKW engine supplied by Gerhard Mitter. In 1959 Peter Arundell won the John Davy Trophy at the Boxing Day Brands Hatch meeting driving an Elva-D.K.W. "Orders poured in for the Elva but when the 1960 season commenced Lotus and Cooper had things under control and disillusioned Elva owners watched the rear-engined car disappearing round corners, knowing they had backed the wrong horse." Sporadic success continued for Elva in the early part of that year, with Jim Hall winning at Sebring and Loyer at Montlhéry.
Elva produced a rear-engined FJ car, with B.M.C. engine, at the end of the 1960 season. Chuck Dietrich finished third at Silverstone in the BRDC British Empire Trophy race on 1 October. In 1961 "an entirely new and rather experimental Elva-Ford" FJ-car debuted at Goodwood, making fastest lap, driven by Chris Meek.
Elva Courier
The main road car, introduced in 1958, was called the Courier and went through a series of developments throughout the existence of the company. Initially all the cars were exported, home market sales not starting until 1960. Mark Donohue had his first racing successes in an Elva Courier winning the SCCA F Prod Championship in 1960 and the SCCA E Prod Championship in 1961.
The Mk 1 used a 1500 cc MGA or Riley 1.5 litre engine in a ladder chassis with Elva designed independent front suspension. The engine was set well back in the chassis to help weight distribution, which produced good handling but encroached on the cockpit making the car a little cramped. The chassis carried lightweight 2-seater open glassfibre bodywork. It was produced as a complete car for the US and European market and available in kit form for the UK market. After about 50 cars were made it was upgraded to the Mk II which was the same car but fitted with a proprietary curved glass windscreen, replacing the original flat-glass split type, and the larger 1600 cc MGA engine. Approximately 400 of the Mk I and II were made.
The rights to the Elva Courier were acquired by Trojan in 1962, and production moved to the main Trojan factory in Purley Way, Croydon, Surrey. Competition Press announced: "Elva Courier manufacturing rights have been sold to Lambretta-Trojan in England. F-Jr Elva and Mark IV sports cars will continue to be built by Frank Nichols as in the past."
With the Trojan takeover the Mk III was introduced in 1962 and was sold as a complete car. On the home market a complete car cost £965 or the kit version £716. The chassis was now a box frame moulded into the body. Triumph rack and pinion steering and front suspension was standardised. A closed coupé body was also available with either a reverse slope Ford Anglia-type rear window or a fastback. In autumn 1962: "Elva Courier Mk IV was shown at London Show. New coupe has all-independent suspension, fibreglass body, MG engine. Mk III Couriers were also shown. Though previously equipped with MG-A engines, new versions will be equipped with 1800cc MG-B engine." Later the Ford Cortina GT unit was available. The final version, the fixed head coupé Mk IV T type used Lotus twin-cam engines with the body modified to give more interior room. It could be had with all independent suspension and four wheel disc brakes. 210 were made.
Ken Sheppard Customised Sports Cars of Shenley, Hertfordshire acquired the Elva Courier from Trojan in 1965 but production ended in 1968.
GT160
There was also a GT160, which never got beyond production of three prototypes. It used a BMW dry-sump engine of 2-litre capacity with bodywork styled by Englishman Trevor Frost (also known as Trevor Fiore, who also designed the Trident) and made by Carrozzeria Fissore of Turin. Its low weight and power output would have given it very impressive performance, but it was deemed too costly to put into series production. The car was shown at the London Motor Show in 1964. One of the cars was purchased by Richard Wrottesley and entered in the 1965 24 Hours of Le Mans. Co-driven by Tony Lanfranchi, the car retired early in the race.
Other companies
There was another Elva car company that lasted for one year, 1907, and was based in Paris, France.
See also
Archie Butterworth, supplied engines for the Elva-Butterworth car.
References
External links
Elva.com
Defunct motor vehicle manufacturers of England
Companies based in East Sussex
Kit car manufacturers
British racecar constructors
Hastings |
8629315 | https://en.wikipedia.org/wiki/Imagineer%20Systems | Imagineer Systems | Imagineer Systems Limited is a software company that specializes in the development and maintenance of several visual effects software applications. They are the "maker of the Academy Award-winning planar-tracking software Mocha." The company was founded in June 2000 by Allan Jaenicke and Philip McLauchlan. The applications produced by Imagineer have been widely adopted within the entertainment industry, and can be seen in films such as X-Men: First Class, Alice in Wonderland and Black Swan.
History
Imagineer Systems was established in June 2000 by Allan Jaenicke and Philip McLauchlan. The pair had been carrying out a joint research project at the University of Surrey in Guildford, United Kingdom, with the aim of applying the latest computer vision research to methods of editing moving images. Upon finding that this research could be used to remove wires and harnesses from stunt scenes, the duo decided to found the company. Mokey, their first commercial software, was released shortly afterwards. Over the subsequent years, they released the programs Monet and Mocha, the latter of which was later adapted and included with future versions of the editing software Adobe After Effects. In 2012, Imagineer received an Academy Award for Technical Achievement for its Imagineer System Planner, along with its tracking tools and rotoscoping software.
In 2014, Imagineer Systems came under the ownership of motion graphics and VFX development company Boris FX.
References
External links
Visual effects companies
Software companies of England
Companies based in Surrey
Software companies established in 2000
British companies established in 2000
Academy Award for Technical Achievement winners |
234034 | https://en.wikipedia.org/wiki/Code%20smell | Code smell | In computer programming, a code smell is any characteristic in the source code of a program that possibly indicates a deeper problem. Determining what is and is not a code smell is subjective, and varies by language, developer, and development methodology.
The term was popularised by Kent Beck on WardsWiki in the late 1990s. Usage of the term increased after it was featured in the 1999 book Refactoring: Improving the Design of Existing Code by Martin Fowler. It is also a term used by agile programmers.
Definition
One way to look at smells is with respect to principles and quality: "Smells are certain structures in the code that indicate violation of fundamental design principles and negatively impact design quality". Code smells are usually not bugs; they are not technically incorrect and do not prevent the program from functioning. Instead, they indicate weaknesses in design that may slow down development or increase the risk of bugs or failures in the future. Bad code smells can be an indicator of factors that contribute to technical debt. Robert C. Martin calls a list of code smells a "value system" for software craftsmanship.
Often the deeper problem hinted at by a code smell can be uncovered when the code is subjected to a short feedback cycle, where it is refactored in small, controlled steps, and the resulting design is examined to see if there are any further code smells that in turn indicate the need for more refactoring. From the point of view of a programmer charged with performing refactoring, code smells are heuristics to indicate when to refactor, and what specific refactoring techniques to use. Thus, a code smell is a driver for refactoring.
A 2015 study, using automated analysis of half a million source code commits and the manual examination of 9,164 commits determined to exhibit "code smells", found that:
There exists empirical evidence for the consequences of "technical debt", but there exists only anecdotal evidence as to how, when, or why this occurs.
Common wisdom suggests that urgent maintenance activities and pressure to deliver features while prioritizing time-to-market over code quality are often the causes of such smells.
Tools such as Checkstyle, PMD, FindBugs, and SonarQube can automatically identify code smells.
Common code smells
Application-level smells
Mysterious Name: functions, modules, variables or classes that are named in a way that does not communicate what they do or how to use them.
Duplicated code: identical or very similar code that exists in more than one location (see the sketch following this list).
Contrived complexity: forced usage of overcomplicated design patterns where simpler design patterns would suffice.
Shotgun surgery: a single change that needs to be applied to multiple classes at the same time.
Uncontrolled side effects: side effects of coding that commonly cause runtime exceptions, with unit tests unable to capture the exact cause of the problem.
Variable mutations: mutations that vary widely enough that refactoring the code becomes increasingly difficult, because the actual value is unpredictable and hard to reason about.
Boolean blindness: use of bare Boolean values where it is easy to pass or assert the opposite value while the program still type-checks.
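As a rough illustration of the duplicated-code smell above, the following Python sketch (hypothetical code with invented function names, not drawn from any real codebase) shows the same calculation repeated in two functions and one way of refactoring it into a single shared helper:

```python
# Smell: the same discount calculation is repeated in two places.
def member_price(base_price):
    return round(base_price - base_price * 0.10, 2)

def student_price(base_price):
    return round(base_price - base_price * 0.10, 2)


# After refactoring: the duplicated logic lives in one shared function.
def discounted_price(base_price, rate=0.10):
    """Return the price after subtracting a percentage discount."""
    return round(base_price - base_price * rate, 2)

def member_price_refactored(base_price):
    return discounted_price(base_price)

def student_price_refactored(base_price):
    return discounted_price(base_price)
```

A change to the discount rule now needs to be made in only one place, which is the usual motivation for removing this smell.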
Class-level smells
Large class: a class that has grown too large. See God object.
Feature envy: a class that uses methods of another class excessively.
Inappropriate intimacy: a class that has dependencies on implementation details of another class. See Object orgy.
Refused bequest: a class that overrides a method of a base class in such a way that the contract of the base class is not honored by the derived class. See Liskov substitution principle.
Lazy class/freeloader: a class that does too little.
Excessive use of literals: these should be coded as named constants, to improve readability and to avoid programming errors. Additionally, literals can and should be externalized into resource files/scripts, or other data stores such as databases where possible, to facilitate localization of software if it is intended to be deployed in different regions.
Cyclomatic complexity: too many branches or loops; this may indicate a function needs to be broken up into smaller functions, or that it has potential for simplification/refactoring.
Downcasting: a type cast which breaks the abstraction model; the abstraction may have to be refactored or eliminated.
Orphan variable or constant class: a class that typically holds a collection of constants which belong elsewhere, where they should be owned by one of the classes that use them.
Data clump: occurs when a group of variables are passed around together in various parts of the program. In general, this suggests that it would be more appropriate to formally group the different variables together into a single object, and pass around only the new object instead (see the sketch below).
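The data clump smell can be sketched in Python as follows (a hypothetical example with invented names); the three address fields travel together through every call until they are grouped into a single object:

```python
from dataclasses import dataclass

# Smell: street, city and postal_code are always passed around together.
def format_address(street, city, postal_code):
    return f"{street}, {city} {postal_code}"

def shipping_label(name, street, city, postal_code):
    return f"{name}\n{format_address(street, city, postal_code)}"


# After refactoring: the clump becomes one object that is passed instead.
@dataclass
class Address:
    street: str
    city: str
    postal_code: str

    def formatted(self) -> str:
        return f"{self.street}, {self.city} {self.postal_code}"

def shipping_label_refactored(name, address: Address) -> str:
    return f"{name}\n{address.formatted()}"
```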
Method-level smells
Too many parameters: a long list of parameters is hard to read, and makes calling and testing the function complicated. It may indicate that the purpose of the function is ill-conceived and that the code should be refactored so responsibility is assigned in a more clean-cut way (see the sketch following this list).
Long method: a method, function, or procedure that has grown too large.
Excessively long identifiers: in particular, the use of naming conventions to provide disambiguation that should be implicit in the software architecture.
Excessively short identifiers: the name of a variable should reflect its function unless the function is obvious.
Excessive return of data: a function or method that returns more than what each of its callers needs.
Excessive comments: a class, function or method has irrelevant or trivial comments. A comment on an attribute setter/getter is a good example.
Excessively long line of code (or God Line): A line of code which is too long, making the code difficult to read, understand, debug, refactor, or even identify possibilities of software reuse.
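A minimal Python sketch of the long parameter list smell (hypothetical names, for illustration only), together with one common refactoring that groups related settings into a parameter object:

```python
from dataclasses import dataclass

# Smell: a long parameter list that is hard to read, call and test.
def create_report(title, author, start_date, end_date, currency,
                  include_summary, include_charts, output_format):
    return f"{title} by {author} ({start_date}..{end_date}, {output_format})"


# After refactoring: related settings are grouped into an options object.
@dataclass
class ReportOptions:
    start_date: str
    end_date: str
    currency: str = "USD"
    include_summary: bool = True
    include_charts: bool = False
    output_format: str = "pdf"

def create_report_refactored(title, author, options: ReportOptions):
    return (f"{title} by {author} "
            f"({options.start_date}..{options.end_date}, {options.output_format})")

# Example call: only the settings that differ from the defaults are spelled out.
report = create_report_refactored(
    "Quarterly sales", "A. Author",
    ReportOptions(start_date="2024-01-01", end_date="2024-03-31"))
```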
See also
Anti-pattern
Design smell
List of tools for static code analysis
Software rot
References
Further reading
External links
CodeSmell at c2.com
Taxonomy of code smells
Overview of many code smells
CodeSmell
Boundy, David, Software cancer: the seven early warning signs or here, ACM SIGSOFT Software Engineering Notes, Vol. 18 No. 2 (April 1993), Association for Computing Machinery, New York, NY, USA
Anti-patterns
Computer programming folklore
Software engineering folklore |
19371634 | https://en.wikipedia.org/wiki/Parsix | Parsix | Parsix GNU/Linux is a live and installation DVD based on Debian. The Parsix project's goal is to provide a ready-to-use, easy-to-install, desktop- and laptop-optimized operating system based on Debian's testing branch and the latest stable release of the GNOME desktop environment. It is possible to install extra software packages from the project's own APT repositories.
In 2017, the official website announced that the project would shut down later that year and suggested that users move to Debian stretch.
Logo
The Parsix logo is inspired by stone flower carvings found in Persepolis.
Usage
Parsix Linux was designed to be used as a Live CD, Live USB, or installed operating system on a hard disk drive. Live mode is useful for operations such as data recovery or hard drive partitioning.
Versions
History
The first version of Parsix GNU/Linux was announced in February 2005 by Alan Baghumian. Seeking a more stable platform, the project started using Debian's testing branch as of version 0.85. Starting with version 0.90, Parsix used characters from the movie Happy Feet to name its releases. The project's own APT repositories were launched in February 2008. The multimedia repository, Wonderland, was launched in September 2010. The Parsix project started to offer security updates for its stable and testing branches as of December 2010.
Reception
DistroWatch Weekly reviewed Parsix 1.5r1 in 2008.
LinuxBSDos reviewed Parsix 3.0r2.
References
External links
Community User Forums
Issue Tracker
Wiki
Mailing Lists
Debian-based distributions
Live USB
LiveDistro
X86-64 Linux distributions
Discontinued Linux distributions
Linux distributions |
1818594 | https://en.wikipedia.org/wiki/WDC%2065C02 | WDC 65C02 | The Western Design Center (WDC) 65C02 microprocessor is an enhanced CMOS version of the popular nMOS-based 8-bit MOS Technology 6502. The 65C02 fixed several problems in the original 6502 and added some new instructions, but its main feature was greatly lowered power usage, on the order of 10 to 20 times less than the original 6502 running at the same speed. The reduced power consumption made the 65C02 useful in portable computer roles and microcontroller systems in industrial settings. It has been used in some home computers, as well as in embedded applications, including medical-grade implanted devices.
Development began in 1981 and samples were released in early 1983. WDC licensed the design to Synertek, NCR, GTE, and Rockwell Semiconductor. Rockwell's primary interest was in the embedded market and asked for several new commands to be added to aid in this role. These were later copied back into the baseline version, at which point WDC added two new commands of their own to create the W65C02. Sanyo later licensed the design as well, and Seiko Epson produced a further modified version as the HuC6280.
Early versions used 40-pin DIP packaging, and were available in 1, 2 and 4 MHz versions, matching the speeds of the original nMOS versions. Later versions were produced in PLCC and QFP packages, as well as PDIP, and with much higher clock speed ratings. The current version from WDC, the W65C02S-14, has a fully static core and officially runs at speeds up to 14 MHz when powered at 5 volts.
Introduction and features
The 65C02 is a low cost, general-purpose 8-bit microprocessor (8-bit registers and data bus) with a 16-bit program counter and address bus. The register set is small, with a single 8-bit accumulator (A), two 8-bit index registers (X and Y), an 8-bit status register (P), and a 16-bit program counter (PC). In addition to the single accumulator, the first 256 bytes of RAM, the "zero page" ($0000 to $00FF), allow faster access through addressing modes that use an 8-bit memory address instead of a 16-bit address. The stack lies in the next 256 bytes, page one ($0100 to $01FF), and cannot be moved or extended. The stack grows backwards with the stack pointer (S) starting at $01FF and decrementing as the stack grows. It has a variable-length instruction set, varying between one and three bytes per instruction.
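The page-one stack behaviour described above can be modelled with a short Python sketch (an illustrative model only, not emulator code): a push stores a byte at $0100 plus the 8-bit stack pointer and then decrements the pointer, so the stack grows downward from $01FF.

```python
# Illustrative model of the 65C02's fixed page-one stack described above.
class PageOneStack:
    def __init__(self):
        self.memory = bytearray(0x10000)  # 64 kB address space
        self.s = 0xFF                     # stack pointer starts at $01FF

    def push(self, value):
        self.memory[0x0100 + self.s] = value & 0xFF
        self.s = (self.s - 1) & 0xFF      # 8-bit pointer wraps within page one

    def pull(self):
        self.s = (self.s + 1) & 0xFF
        return self.memory[0x0100 + self.s]

stack = PageOneStack()
stack.push(0x42)
assert stack.pull() == 0x42
```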
The basic architecture of the 65C02 is identical to the original 6502, and can be considered a low-power implementation of that design. At 1 MHz, the most popular speed for the original 6502, the 65C02 requires only 20 mW, while the original uses 450 mW, a reduction of over twenty times. The manually optimized core and low power use is intended to make the 65C02 well suited for low power system-on-chip (SoC) designs.
A Verilog hardware description model is available for designing the W65C02S core into an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). As is common in the semiconductor industry, WDC offers a development system, which includes a developer board, an in-circuit emulator (ICE) and a software development system.
The W65C02S-14 is the production version, and is available in PDIP, PLCC and QFP packages. The maximum officially supported Ø2 (primary) clock speed is 14 MHz when operated at 5 volts, indicated by the –14 part number suffix (hobbyists have developed 65C02 homebrew systems that run faster than the official rating). The "S" designation indicates that the part has a fully static core, a feature that allows Ø2 to be slowed down or fully stopped in either the high or low state with no loss of data. Typical microprocessors not implemented in CMOS have dynamic cores and will lose their internal register contents (and thus crash) if they are not continuously clocked at a rate between some minimum and maximum specified values.
General logic features
8-bit data bus
16-bit address bus (providing an address space of 64 kB)
8-bit arithmetic logic unit (ALU)
8-bit processor registers:
accumulator
stack pointer
index registers
status register
16-bit program counter
69 instructions, implemented by 212 operation codes
16 addressing modes, including zero page addressing
Logic features
Vector pull (VPB) output indicates when interrupt vectors are being addressed
Memory lock (MLB) output indicates to other bus masters when a read-modify-write instruction is being processed
WAit-for-Interrupt (WAI) and SToP (STP) instructions reduce power consumption, decrease interrupt latency and enable synchronization with external events
Electrical features
Supply voltage specified at 1.71 V to 5.25 V
Current consumption (core) of 0.15 and 1.5 mA per MHz at 1.89 V and 5.25 V respectively
Variable length instruction set, enabling code size optimization over fixed length instruction set processors, results in power savings
Fully static circuitry allows stopping the clock to conserve power
Clocking features
The W65C02S may be operated at any convenient supply voltage (VDD) between 1.8 and 5 volts (±5%). The data sheet AC characteristics table lists operational characteristics at 5 V at 14 MHz, 3.3 V or 3 V at 8 MHz, 2.5 V at 4 MHz, and 1.8 V at 2 MHz. This information may be an artifact of an earlier data sheet, as a graph indicates that typical devices are capable of operation at higher speeds than suggested by the AC characteristics table, and that reliable operation at 20 MHz should be readily attainable with VDD at 5 volts, assuming the supporting hardware will allow it.
The W65C02S support for arbitrary clock rates allows it to use a clock that runs at a rate ideal for some other part of the system, such as 13.5 MHz (digital SDTV luma sampling rate), 14.31818 MHz (NTSC colour carrier frequency × 4), 14.75 MHz (PAL square pixels), 14.7456 (baud rate crystal), etc., as long as VDD is sufficient to support the frequency. Designer Bill Mensch has pointed out that FMAX is affected by off-chip factors, such as the capacitive load on the microprocessor's pins. Minimizing load by using short signal tracks and fewest devices helps raise FMAX. The PLCC and QFP packages have less pin-to-pin capacitance than the PDIP package, and are more economical in the use of printed circuit board space.
WDC has reported that FPGA realizations of the W65C02S have been successfully operated at 200 MHz.
Comparison with the NMOS 6502
Basic architecture
Although the 65C02 can mostly be thought of as a low-power 6502, it also fixes several bugs found in the original and adds new instructions, addressing modes and features that can assist the programmer in writing smaller and faster-executing programs. It is estimated that the average 6502 assembly language program can be made 10 to 15 percent smaller on the 65C02 and see a similar improvement in performance, largely because fewer instructions, and therefore fewer memory accesses, are needed to accomplish a given task.
Undocumented instructions removed
The original 6502 had 56 instructions, which, when combined with different addressing modes, produced a total of 151 opcodes out of the 256 possible 8-bit opcode patterns. The remaining 105 opcodes were left undefined; those whose low-order four bits were 3, 7, B or F went entirely unused, while the column with a low-order nibble of 2 contained only a single opcode.
The 6502 was famous for the way that some of these leftover codes actually performed actions. Due to the way the 6502's instruction decoder worked, simply setting certain bits in the opcode would cause parts of the instruction processing to take place. Some of these opcodes would immediately crash the processor, while others performed useful functions and were even given unofficial assembler mnemonics by users.
The 65C02 added a number of new opcodes that occupied some of these previously "undocumented instruction" slots; for instance, $FF was now used for the new BBS instruction (see below). Those that remained truly unused were set to perform NOPs. Programs that took advantage of these codes will not work on the 65C02.
Bug fixes
The original 6502 had several errata when initially launched. Early versions of the processor had a defective ROR (rotate right) instruction, an issue MOS Technology addressed by simply not documenting the instruction. ROR was fixed very early in the production run and was not an issue for the vast majority of machines using the processor.
In contrast, a notorious bug that is present in all NMOS variants of the 6502 involves the jump instruction (JMP) when using indirect addressing. In this addressing mode, the target address of the JMP instruction is fetched from memory (the jump vector), rather than being an operand to the JMP instruction. For example, JMP ($1234) would fetch the value in memory locations $1234 (least significant byte) and $1235 (most significant byte) and load those values into the program counter, which would then cause the processor to continue execution at the address stored in the jump vector.
The bug appears when the vector address ends in $FF, which is the boundary of a memory page. In this case, JMP will fetch the most significant byte of the target address from $00 of the original page rather than $00 of the new page. Hence JMP ($12FF) would get the least significant byte of the target address at $12FF and the most significant byte of the target address from $1200 rather than $1300. The 65C02 corrected this issue.
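For illustration, a minimal sketch of the difference (the addresses and stored values are hypothetical):
LDA #$00
STA $12FF      ; low byte of the intended target address
LDA #$13
STA $1300      ; high byte, stored on the next page
JMP ($12FF)    ; NMOS 6502: the high byte is wrongly fetched from $1200
               ; 65C02: the high byte is fetched from $1300, so execution continues at $1300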
More of an oversight than a bug, the state of the (D)ecimal flag in the NMOS 6502's status register is undefined after a reset or interrupt. This means programmers have to set the flag to a known value in order to avoid any bugs related to arithmetic operations. As a result, one finds a CLD instruction (CLear Decimal) in almost all 6502 interrupt handlers, as well as early in the reset code. The 65C02 automatically clears this flag after pushing the status register onto the stack in response to any interrupt or hardware reset, thus placing the processor back into binary arithmetic mode.
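A minimal sketch of the resulting NMOS-era convention (the label, handler body and addresses are hypothetical):
IRQ:
CLD            ; the decimal flag is undefined here on NMOS parts, so force binary mode
PHA            ; preserve the accumulator
LDA $C000      ; read a hypothetical device status register
STA $80        ; record it for the main program
PLA            ; restore the accumulator
RTI            ; return from the interrupt
On the 65C02 the leading CLD is redundant, since the flag is already cleared when the interrupt is taken.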
During decimal mode arithmetic, the NMOS 6502 will update the (N)egative, o(V)erflow and (Z)ero flags to reflect the result of underlying binary arithmetic, that is, the flags are reflecting a result computed prior to the processor performing decimal correction. In contrast, the 65C02 sets these flags according to the result of decimal arithmetic, at the cost of an extra clock cycle per arithmetic instruction.
When executing a read-modify-write (R-M-W) instruction, such as INC addr, all NMOS variants will do a double write on addr, first rewriting the current value found at addr and then writing the modified value. This behavior can result in difficult-to-resolve bugs if addr is a hardware register. The 65C02 instead performs a double read of addr, followed by a single write.
When performing indexed addressing, if indexing crosses a page boundary all NMOS variants will read from an invalid address before accessing the correct address. As with a R-M-W instruction, this behavior can cause problems when accessing hardware registers via indexing. The 65C02 fixed this problem by performing a dummy read of the instruction opcode when indexing crosses a page boundary. However, this fix introduced a new bug that occurs when the base address is on an even page boundary (which means indexing will never cross into the next page). With the new bug, a dummy read is performed on the base address prior to indexing, such that LDA $1200,X will do a dummy read on $1200 prior to the value of X being added to $1200. Again, if indexing on hardware register addresses, this bug can result in undefined behavior.
If an NMOS 6502 is fetching a BRK (software interrupt) opcode at the same time a hardware interrupt occurs BRK will be ignored as the processor reacts to the interrupt. The 65C02 correctly handles this situation by servicing the interrupt and then executing BRK.
New addressing modes
The 6502 has two indirect addressing modes which dereference through 16-bit addresses stored in page zero:
Indexed indirect, e.g. LDA ($10,X), adds the X register to the given page zero address before reading the 16-bit vector. For instance, if X is 5, it reads the 16-bit address from location $15/$16. This is useful when there is an array of pointers in page zero.
Indirect indexed, e.g. LDA ($10),Y, adds the Y register to the 16-bit vector read from the given page zero address. For instance, if Y is 5, and $10/$11 contains the vector $1000, this reads the value from $1005. This performs pointer-offset addressing.
A downside of this model is that if indexing is not needed, one of the index registers must still be set to zero and used in one of these instructions. The 65C02 added a non-indexed indirect addressing mode LDA ($10) to all instructions that used indexed indirect and indirect indexed modes, freeing up the index registers.
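A brief sketch of the difference, assuming a 16-bit pointer has already been stored at zero-page locations $10/$11:
LDY #$00       ; NMOS 6502: an index register must be zeroed even though no offset is wanted
LDA ($10),Y    ; load the byte the pointer at $10/$11 points to
LDA ($10)      ; 65C02 only: the same load, leaving Y free for other work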
The 6502's JMP instruction had a unique (among 6502 instructions) addressing mode known as "absolute indirect" that read a 16-bit value from a given memory address and then jumped to the address in that 16-bit value. For instance, if memory location $A000 holds $34 and $A001 holds $12, JMP ($A000) would read those two bytes, construct the value $1234, and then jump to that location.
One common use for indirect addressing is to build branch tables, a list of entry points for subroutines that can be accessed using an index. For instance, a device driver might list the entry points of its routines in a table at $A000. If the desired routine is the third entry, zero indexed, and each address requires 16 bits, one would call it with something similar to JMP ($A004). If the driver is updated and the subroutine code moves in memory, any existing code will still work as long as the table of pointers remains at $A000.
The 65C02 added the new "indexed absolute indirect" mode which eased the use of branch tables. This mode added the value of the X register to the absolute address and took the 16-bit address from the resulting location. For instance, to access the same routine from the table above, one would store 4 in X, then JMP ($A000,X). This style of access makes branch tables simpler to use, as a single base address is used in conjunction with an 8-bit offset.
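A sketch of such a table call (the layout of the hypothetical table at $A000 follows the description above):
; entry 0 at $A000/$A001, entry 1 at $A002/$A003, entry 2 at $A004/$A005, ...
LDX #$04       ; byte offset of the third entry (two bytes per entry)
JMP ($A000,X)  ; 65C02 indexed absolute indirect: jump through the selected entry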
New and modified instructions
In addition to the new addressing modes, the "base model" 65C02 also added a set of new instructions.
INC and DEC with no parameters now increment or decrement the accumulator. This was an odd oversight in the original instruction set, which only included INX/DEX, INY/DEY and INC addr/DEC addr. Some assemblers use the alternate forms INA/DEA or INC A/DEC A.
STZ addr, STore Zero in addr. Replaces the need for an LDA #0;STA addr pair and doesn't require changing the value of the accumulator. As this task is common in most programs, using STZ can reduce code size, both by eliminating the LDA as well as any code needed to save the value of the accumulator, typically a PHA PLA pair (see the sketch after this list).
PHX, PLX, PHY, PLY, push and pull the X and Y registers to/from the stack. Previously, only the accumulator and status register had push and pull instructions. X and Y could only be stacked by moving them to the accumulator first with TXA or TYA, thereby changing the accumulator contents, then using PHA.
BRA, branch always. Operates like a JMP but uses a 1-byte relative address like other branches, saving a byte. The speed is often the same as the 3 cycle absolute JMP unless a page is crossed which would make the BRA version 1 cycle longer (4 cycles). As the address is relative, it is also useful when writing relocatable code, a common task in the era before memory management units.
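A short sketch of how two of these additions shorten common 6502 idioms (the addresses and the called subroutine are hypothetical):
LDA #$00       ; NMOS 6502: clearing a memory location clobbers the accumulator
STA $0200
STZ $0200      ; 65C02: one instruction, and the accumulator is left untouched
TXA            ; NMOS 6502: X and Y reach the stack only via the accumulator
PHA
TYA
PHA
JSR $C000      ; call some hypothetical subroutine
PLA
TAY
PLA
TAX
PHX            ; 65C02: push and pull the index registers directly
PHY
JSR $C000
PLY
PLX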
Bit manipulation instructions
Both WDC and Rockwell contributed improvements to the bit testing and manipulation functions in the 65C02. WDC added new addressing modes to the BIT instruction that was present in the 6502, as well as two new instructions for convenient manipulation of bit fields, a common activity in device drivers.
BIT in the 65C02 adds immediate mode, zero page indexed by X and absolute indexed by X addressing. Immediate mode addressing is particularly convenient in that it is completely non-destructive. For example:
LDA <register>
BIT #%00010000
may be used in place of:
LDA <register>
AND #%00010000
The AND operation changes the value in the accumulator, so the original value of <register> is lost. Using BIT leaves the value in the accumulator unchanged, so subsequent code can make additional tests against the original value, avoiding having to re-load the value from <register>.
In addition to the enhancements of the BIT instruction, WDC added two instructions designed to conveniently manipulate bit fields:
TSB addr and TRB addr, Test and Set Bits and Test and Reset Bits.
A mask in the accumulator (.A) is logically ANDed with memory at addr, which location may be zero page or absolute. The Z flag in the status register is conditioned according to the result of the logical AND—no other status register flags are affected. Furthermore, bits in addr are set (TSB) or cleared (TRB) according to the mask in .A. Succinctly, TSB performs a logical OR after the logical AND and stores the result of the logical OR at addr, whereas TRB stores the results of the logical AND at addr. In both cases, the Z flag in the status register indicates the result of .A AND addr before the content of addr is changed. TRB and TSB thus replace a sequence of instructions, essentially combining the BIT instruction with additional steps to save the computational changes, but in a way that reports the status of the affected value before it is changed.
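A brief sketch of both instructions at work on a hypothetical flag byte at zero-page location $10:
LDA #%00010000
TSB $10        ; 65C02: stores $10 OR %00010000; Z is set if bit 4 was clear beforehand
LDA $10        ; NMOS 6502 equivalent of the set, which does not report the prior state
ORA #%00010000
STA $10
LDA #%00010000
TRB $10        ; 65C02: stores $10 AND %11101111; Z is set if bit 4 was clear beforehand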
Rockwell's changes added more bit manipulation instructions for directly setting and testing any bit, and combining the test, clear and branch into a single opcode. The new instructions were available from the start in Rockwell's R65C00 family, but were not part of the original 65C02 specification and were not found in versions made by WDC or its other licensees. These were later copied back into the baseline design, and were available in later WDC versions.
Rockwell-specific instructions are:
SMBbit# zp/RMBbit# zp. Set or Reset (clear) bit number bit# in zero page byte zp.
RMB and SMB are used to clear (RMB) or set (SMB) individual bits in a bit field, each replacing a sequence of three instructions. As RMB and SMB are zero page addressing only, these instructions are limited in usefulness and are primarily of value in systems in which device registers are present in zero page. The bit# component of the instruction is often written as part of the mnemonic, such as SMB1 $12 which sets bit 1 in zero-page address $12. Some assemblers treat bit# as part of the instruction's operand, e.g., SMB 1,$12, which has the advantage of allowing it to be replaced by a variable name or calculated number (see the sketch after this list).
BBR bit#,offset,addr and BBS bit#,offset,addr, Branch on Bit Set/Reset.
Same zero-page addressing and limitations as RMB and SMB, but branches to addr if the selected bit is clear (BBR) or set (BBS). As is the case with RMB and SMB, BBR and BBS replace a sequence of three instructions.
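A short usage sketch (the zero-page location $12 and the polling loop are hypothetical):
SMB1 $12         ; set bit 1 of zero-page location $12
RMB7 $12         ; clear bit 7 of the same location
WAIT:
BBR3 $12,WAIT    ; loop back to WAIT while bit 3 of $12 remains clear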
Low-power modes
In addition to the new commands above, WDC also added the STP and WAI instructions for supporting low-power modes.
STP, STop the Processor, halted all processing until a hardware reset was issued. This could be used to put a system to "sleep" and then rapidly wake it with a reset. Normally this would require some external system to maintain main memory, and it was not widely used.
WAI, WAit for Interrupt, had a similar effect, entering low-power mode, but this instruction woke the processor up again on the reception of an interrupt. Previously, handling an interrupt generally involved running a loop to check whether an interrupt had been received, sometimes known as "spinning", checking the type when one arrived, and then jumping to the processing code. This meant the processor was running during the entire process.
In contrast, on the 65C02, interrupt code could be written by having a WAI followed immediately by a jump to the handler. When the WAI was encountered, processing stopped and the processor went into low-power mode. When the interrupt was received, the processor immediately executed the following jump and handled the request.
This had the added advantage of slightly improving performance. In the spinning case, the interrupt might arrive in the middle of one of the loop's instructions, and to allow it to restart after returning from the handler, the processor spends one cycle to save its location. With WAI, the processor enters the low-power state at a known location where all instructions are guaranteed to be complete, so when the interrupt arrives it cannot possibly interrupt an instruction and the processor can safely continue without spending a cycle saving state.
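A minimal sketch of the resulting idle loop (the service routine and its addresses are hypothetical); with interrupts masked, the WDC part simply resumes at the instruction after WAI when the interrupt line is asserted:
SEI            ; mask normal interrupt vectoring so WAI falls through instead
LOOP:
WAI            ; sleep in low-power mode until an interrupt line is asserted
JSR SERVICE    ; the interrupt has arrived: handle it directly
BRA LOOP       ; and go back to sleep
SERVICE:
LDA $C000      ; read a hypothetical device register
STA $80        ; pass the value to the rest of the program
RTS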
65SC02
The 65SC02 is a variant of the WDC 65C02 without bit instructions.
Notable uses of the 65C02
Home computers
Apple IIc portable by Apple Computer (NCR 1.023 MHz)
Enhanced Apple IIe by Apple Computer (1.023 MHz)
BBC Master home/educational computer, by Acorn Computers Ltd (2 MHz 65SC12 plus optional 4 MHz 65C102 second processor)
Replica 1 by Briel Computers, a replica of the Apple I hobbyist computer (1 MHz)
Laser 128 series clones of Apple II
KIM-1 Modern Replica of the MOS/CBM KIM-1 by Briel Computing
Video game consoles
Atari Lynx handheld (65SC02 @ ~4 MHz)
NEC PC Engine aka TurboGrafx-16 (HuC6280 @ 7.16 MHz)
GameKing handhelds (6 MHz) by Timetop
Watara Supervision handhelds (65SC02 @ 4 MHz)
Other products
TurboMaster accelerator cartridge for the Commodore 64 home computer (65C02 @ 4.09 MHz)
Tube-connected second processor for the Acorn BBC Micro home computer (65C02 @ 3 MHz)
Many dedicated chess computers, e.g. Mephisto MMV, Novag Super Constellation, Fidelity Elite, and many more (4–20 MHz)
See also
Interrupts in 65xx processors
CSG 65CE02, a further enhanced version of the 65C02
Notes
References
Citations
Bibliography
Further reading
65C02 Datasheet; Western Design Center; 32 pages; 2018.
Programming the 65816 - including the 6502, 65C02, 65802; 1st Ed; David Eyes and Ron Lichty; Prentice Hall; 636 pages; 1986. (archive)
External links
65C02 webpage - Western Design Center
65xx/65Cxx/65SCxx Differences - CPU World
6502/65C02/65C816 Instruction Set Decoded – From Neil Parker's Apple II page
65xx microprocessors
8-bit microprocessors |
46989674 | https://en.wikipedia.org/wiki/Dark%20Souls%20III | Dark Souls III | is a 2016 action role-playing video game developed by FromSoftware and published by Bandai Namco Entertainment for PlayStation 4, Xbox One, and Microsoft Windows. It is the fourth overall entry of the Souls series and the final installment of the Dark Souls trilogy.
It is an action role-playing game played in a third-person perspective. Players have access to various weapons, armour, magic, and consumables that they can use to fight their enemies. Bonfires serve as checkpoints. The Estus Flask is the consumable used for healing in Dark Souls III. Ashen Estus Flasks restore focus points (FP), which can be used for magic or weapon arts. Hidetaka Miyazaki, the creator of the series, returned to direct the game after handing the development duties of Dark Souls II to others in FromSoftware.
Dark Souls III was critically and commercially successful, with critics calling it a worthy and fitting conclusion to the series. It was the fastest-selling game in Bandai Namco's history, shipping over three million copies within its first two months and over 10 million by 2020. Two downloadable content (DLC) expansions, Ashes of Ariandel and The Ringed City, were also made. A complete version containing the base game and both expansions, Dark Souls III: The Fire Fades, was released in April 2017.
Gameplay
Dark Souls III is an action role-playing game played in a third-person perspective, similar to previous games in the series. According to lead director and series creator Hidetaka Miyazaki, the game's gameplay design followed "closely from Dark Souls II". Players are equipped with various weapons to fight against enemies, such as bows, throwable projectiles, and swords. Shields can act as secondary weapons, but they are mainly used to deflect enemies' attacks and protect the player from suffering damage. Each weapon has two basic types of attack: a standard attack and a slightly more powerful one that can be charged up, similar to FromSoftware's previous game, Bloodborne. In addition, attacks can be evaded through dodge-rolling. Bonfires, which serve as checkpoints, return from previous instalments. Ashes, according to Miyazaki, play an important role in the game.
Magic is featured in the game, with a returning magic system from Demon's Souls, now known as "focus points" (FP). When performing spells, the player's focus points are consumed. There are two types of Estus Flasks in the game, which can be allotted to fit a player's particular play style. One refills hit points as in previous games in the series, while the other refills focus points, a feature new to the game. Combat and movement were made faster and more fluid than in Dark Souls II. Several player movements are performed more rapidly, allowing more damage to be done in a shorter period.
Throughout the game, players encounter different types of enemies, each with different behaviours. Some of them change their combat pattern during battles. New combat features are introduced in Dark Souls III, including weapon and shield "Skills", which are special abilities that vary from weapon to weapon and enable special attacks and features at the cost of focus points. The game focuses more on role-playing; the expanded character builder and improved weapons provide more tactical options. The game features fewer overall maps than its predecessor Dark Souls II, but they are larger and more detailed, encouraging exploration. The adaptability stat from Dark Souls II was removed in Dark Souls III, with other stats being adjusted, alongside the introduction of the luck stat. The game features multiplayer elements like the previous games in the series.
Plot
Set in the Kingdom of Lothric, a bell has rung to signal that the First Flame, responsible for maintaining the Age of Fire, is dying out. As has happened many times before, the coming of the Age of Dark produces the undead: cursed beings that rise after death. The Age of Fire can be prolonged with the linking of the fire, a ritual in which great lords and heroes sacrifice their souls to rekindle the First Flame. However, Prince Lothric, the chosen linker for this age, abandoned his duty and watched the flame die from afar. The bell is the last hope for the Age of Fire, resurrecting previous Lords of Cinder (heroes who linked the flame in past ages) to attempt to link the fire again; however, all but one Lord shirk their duty. Meanwhile, Sulyvahn, a sorcerer from the Painted World of Ariandel, wrongfully proclaims himself Pontiff and seizes power over Irithyll of the Boreal Valley and the returning Anor Londo cathedral from Dark Souls as a tyrant.
The Ashen One, an Undead who failed to become a Lord of Cinder and is thus called an Unkindled, rises and must link the fire by returning Prince Lothric and the defiant Lords of Cinder to their thrones in Firelink Shrine. The Lords include the Abyss Watchers, a legion of warriors sworn by the Old Wolf's Blood, which linked their souls into one, to protect the land from the Abyss, who were ultimately locked in an endless battle with each other; Yhorm the Giant, who was once a conqueror of the very people for whom he then sacrificed his life; and Aldrich, who became a Lord of Cinder despite his ravenous appetite for both men and gods. Lothric himself was raised to link the First Flame, but shirked his duties and chose instead to watch the fire fade.
Once the Ashen One succeeds in returning Lothric and the Lords of Cinder to their thrones, they travel to the ruins of the Kiln of the First Flame. There, they encounter the Soul of Cinder, an amalgamation of all the previous Lords of Cinder who had linked the flame in the past. Once the Soul of Cinder is defeated, four endings are made possible based on the player's actions during the game. The player can attempt to link the fire, summon the Fire Keeper to extinguish the flame and begin an age of Dark, or kill her. A fourth ending consists of the Ashen One taking the flame for their own and becoming the Lord of Hollows.
Ashes of Ariandel
Ashes of Ariandel introduces a new area, the Painted World of Ariandel. On arriving at the Cathedral of the Deep in the base game, the Ashen One meets a wandering knight, Gael, who implores them to enter the Painted World and fulfil a prophecy to bring "Fire for Ariandel." Inhabitants of this world variously beg the Ashen One to burn the Painted World per the prophecy or leave it to its slow rot. A painter girl tells the Ashen One of "Uncle Gael"'s promise to find her dyes to paint a new world. The player's decision to proceed elicits first coldness from the world's self-appointed guardian and then a boss fight, in which Ariandel is ultimately set on fire. The painter thanks the player for showing her flame and awaits Gael for the Dark Soul, which she can use to paint a new world for humanity.
In keeping with previous franchise DLC, Ashes of Ariandel introduces a substantial new area, two boss fights and several new weapons, spells, and armour pieces.
The Ringed City
In The Ringed City, the Ashen One begins their journey to an area known as "The Dreg Heap", a region where ruined kingdoms of different eras are piled upon each other as the world draws to a close. From the Dreg Heap, after battling through the ruins of Lothric Castle, the Ashen One encounters the amnesiac knight Lapp, who cannot remember his past. Throughout the Dreg Heap, messages from Gael from Ashes of Ariandel guide the player. The Ashen One traverses the remnants of Earthen Peak, an area encountered in Dark Souls II, before fighting the last remnant of the demon race, the Demon Prince, in the base of an Archtree that contains the ruins of Firelink Shrine from Dark Souls. Victorious, the player travels to the Ringed City, an ancient city of Pygmies, the ancestors of humanity, which has fallen into the Abyss. After defeating the guardian of the Church of Filianore, the player awakens Filianore, the daughter of Lord Gwyn who was entrusted to the Ringed City as a token of peace between Gwyn and the Pygmy Lords. This transports them to a ruined wasteland of ash, which can be interpreted as either a skip forward in time or the lifting of an illusion cast by Filianore. There, the Ashen One meets a disheveled Gael, who has begun killing the Pygmy Lords in order to gain the blood of the Dark Soul from the Pygmies for the painter girl in Ariandel to use as ink. After consuming the Dark Soul, Gael has been fully corrupted by its power and demands the Ashen One's portion of it. He is finally struck down, allowing the Ashen One to obtain his blood (which contains the Dark Soul), which the painter girl in Ariandel uses to paint a new world for humanity.
Development
The game's development began in mid-2013, before the release of Dark Souls II, whose development was handled by Tomohiro Shibuya and Yui Tanimura instead of the series creator, Hidetaka Miyazaki. The game was developed alongside Bloodborne but was handled by two mainly separate teams. Miyazaki also returned to direct Dark Souls III. Isamu Okano and Tanimura, the directors of Steel Battalion: Heavy Armor and Dark Souls II, respectively, served as co-directors for the game. Despite Miyazaki initially believing that the series would not have many sequels, Dark Souls III would serve as the fourth instalment in the Souls series. Miyazaki later added that the game would not be the last in the series. Instead, it would serve as a "turning point" for both the franchise and the studio, as it was the last project by FromSoftware before Miyazaki became the company's president. Multiple screenshots of the game were leaked before its initial reveal at Electronic Entertainment Expo 2015. The game's gameplay was then first shown at Gamescom 2015 in August.
Miyazaki said that Bloodborne's limitations made him want to return to the Souls series. The game's level design was created to become more of another "enemy" the player must face. However, just as the former Souls games narrate their stories, Dark Souls III unfolds its plot with strong vagueness: players can learn the storyline only through conversations with non-player characters (NPCs), art design, and item flavour text. Because of this, Miyazaki states that there is no official, unique story. His intention in designing the game was not to impose his own viewpoint, and he stated that any attempt to discover and understand the plot and that world is encouraged. The improvement to archery, specifically draw speed, was inspired by Legolas from The Lord of the Rings franchise. The game's visual design focuses on "withered beauty", with ember and ash scattered throughout the game's world. The game's original score was primarily written by Dark Souls II and Bloodborne composer Yuka Kitamura and performed by the Tokyo Philharmonic Orchestra. Additional music was written by Dark Souls composer Motoi Sakuraba, with a single boss theme each by Tsukasa Saitoh and Nobuyoshi Suzuki.
Dark Souls III was released in Japan for PlayStation 4 and Xbox One on March 24, 2016, and released worldwide, along with the Microsoft Windows version, on April 12, 2016. A stress test for the game, which allowed players selected by Bandai Namco to test the game's network functionality before release, was available for three days in October 2015. The game has three different special editions for players to purchase, which cost more than the base game. Players who pre-ordered the game had their game automatically upgraded to the Apocalypse Edition, which has a special case and the game's original soundtrack. The Collector's Edition contains physical items such as the Red Knight figurine, an artbook, a new map, and special packaging. The Prestige Edition features all the content in The Collector's Edition, but has an additional Lord of Cinder resin figurine, which can form a pair with the Red Knight figurine.
The game's first downloadable content (DLC) expansion, titled Ashes of Ariandel, was released on October 24, 2016. The second and final DLC, titled The Ringed City, was released on March 28, 2017. Both DLCs added new locations, bosses, armours, and weapons to the game. A complete version containing the base game and both DLCs, titled Dark Souls III: The Fire Fades Edition, was released on April 21, 2017.
Reception
Dark Souls III received "generally favorable" reviews according to review aggregator Metacritic, with praise given to the game's visuals and combat mechanics, reminding reviewers of its faster-paced similarity to Bloodborne.
Chloi Rad of IGN awarded the game a 9.5 out of 10, stating she thought that "If Dark Souls 3 truly is the last in the series as we know it, then it's a worthy send-off." Rich Stanton of Eurogamer rated the game as "essential", calling it "fabulous" and that it was "a fitting conclusion" to the series. Steven Strom of Ars Technica wrote that he thought the title still had the "smooth and impressive rendering of the series' signature style" and some of "the best boss fights in any Souls game". Simon Parkin of The Guardian gave the game 5 out of 5 stars and wrote that while Dark Souls III "may not have the novelty of the first Dark Souls", it was "the more pristine and rounded work" of the series.
However, criticism was directed at issues with the game's frame rate and performance, linear map design, and Bandai Namco's handling of the Western launch. Philip Kollar of Polygon rated the game a 7 out of 10, bluntly stating disappointment at the lack of surprises and the arbitrary nature of the game's design, writing that "in so many important ways – its world design, its pacing, the technology powering it – Dark Souls III falls short of the mark." A later patch, released on April 9, fixed some of the technical issues reviewers had with the game.
Reception to Ashes of Ariandel, the game's first downloadable content (DLC) expansion, was generally positive. Brendan Graeber of IGN enjoyed what the DLC offered, enjoying the introduction of a dedicated player versus player (PvP) arena, as well as the new enemies and bosses, but criticised the length, stating that Ashes of Ariandel served more as "an appetizer than a full course meal". Kollar of Polygon considered the content of the DLC to be "great", but agreed with Graeber's criticism of the length, saying that there was not much of it.
Reception to The Ringed City, the game's second and final DLC expansion, was also generally positive. Chloi Rad of IGN praised the overall level design and boss fights, adding that the DLC was a "satisfying" conclusion to the trilogy. In contrast, James Davenport of PC Gamer was less positive, calling the DLC "gorgeous but empty", adding that it was a "weak reflection" on the series' best traits.
Sales
In Japan, the PlayStation 4 version sold over 200,000 copies in its first two weeks of release. It became the fastest-selling video game published by Bandai Namco Entertainment America, becoming its most successful day-one launch. On May 10, 2016, Bandai Namco announced that Dark Souls III had reached three million total copies shipped worldwide, with 500,000 in Japan and Asia, 1.5 million in North America, and one million in Europe. It was also reported that Dark Souls III was the best selling software in North America in the month of release. By May 2020, the game had sold over 10 million copies.
Awards
Notes
References
External links
2016 video games
Action role-playing video games
Bandai Namco games
Dark fantasy role-playing video games
Death in fiction
FromSoftware games
Multiplayer and single-player video games
PlayStation 4 games
Souls (series)
Role-playing video games
Video games scored by Motoi Sakuraba
Video games developed in Japan
Video game sequels
Video games featuring protagonists of selectable gender
Video games using Havok
Video games with alternate endings
Video games with downloadable content
Windows games
Xbox One games
PlayStation 4 Pro enhanced games
Video games directed by Hidetaka Miyazaki
Soulslike video games |
29019482 | https://en.wikipedia.org/wiki/Gogmagog%20%28giant%29 | Gogmagog (giant) | Gogmagog (also Goemagot, Goemagog, Goëmagot and Gogmagoc) was a legendary giant in Welsh and later English mythology. According to Geoffrey of Monmouth's Historia Regum Britanniae ("The History of The Kings of Britain", 12th century), he was a giant inhabitant of Albion, thrown off a cliff during a wrestling match with Corineus (a companion of Brutus of Troy). Gogmagog was the last of the Giants found by Brutus and his men inhabiting the land of Albion.
The effigies of Gogmagog and Corineus, used in English pageantry and later instituted as guardian statues at Guildhall in London eventually earned the familiar names "Gog and Magog".
Etymology
The name "Gogmagog" is often connected to the biblical characters Gog and Magog; however Manley Pope, author of an 1862 English translation of the Welsh chronicle Brut y Brenhinedd (itself a translation of Monmouth's "Historia Regum Britanniae") argued that it was a corruption of Gawr Madoc (Madoc the Great).
Geoffrey of Monmouth
Gogmagog ("Goemagot", "Goemagog") in the legend of the founding of Britain as written by Geoffrey of Monmouth in Historia Regum Britanniae (1136). Gogmagog was a giant of Albion who was slain by Corineus, a member of the invading Trojan colonizers headed by Brutus. Corineus was subsequently granted a piece of land that was named "Cornwall" after him.
The Historia details the encounter as follows: Gogmagog, accompanied by twenty fellow giants, attacked the Trojan settlement and caused great slaughter. The Trojans rallied back and killed all giants, except for "one detestable monster named Gogmagog, in stature twelve cubits, and of such prodigious strength that at one shake he pulled up an oak as if it had been a hazel wand". He is captured so that Corineus can wrestle with him. The giant breaks three of Corineus's ribs, which so enrages him that he picks up the giant and carries him on his shoulders to the top of a high rock, from which he throws the giant down into the sea. The place where he fell was known as "Gogmagog's Leap" to posterity.
Later versions
Gogmagog's combat with Corineus according to Geoffrey was repeated in Wace's Anglo-Norman Brut and Layamon's Middle-English Brut. Because Geoffrey's work was regarded as fact until the late 17th century, the story continued to appear in most early histories of Britain.
The tale of Gogmagog's ancestry was composed later, in the 14th century. Known as the "Albina story" (or Des Grantz Geanz), it claimed Gogmagog to be a giant descended from Albina and her sisters, thirty daughters of the king of Greece exiled to the land later to be known as "Albion". This story was added as a prologue to later versions of the Brut pseudo-history.
Thus according to the Middle English prose version of the Brut, known as the Chronicles of England, Albina was the daughter of a Syrian king named Diodicias, from whom Gogmagog and Laugherigan and the other giants of Albion are descended. These giants lived in caves and hills until being conquered by Brutus' party arriving in "Tottenesse" (Totnes, Devon). A later chapter describes Gogmagog's combat with Corineus (Middle English: Coryn) "at Totttenes", more or less as according to Geoffrey. Gogmagog was the tallest of these giants; Coryn in comparison was at least the largest man from the waist upward among Brutus's crew. Caxton's printed edition, The Cronycles of Englond (1482), closely matches this content.
Raphael Holinshed also localizes the event of the "leape of Gogmagog" at Dover, but William Camden in his 1586 work Brittannia locates it on Plymouth Hoe, perhaps following Richard Carew's Survey of Cornwall. Carew describes "the portraiture of two men, one bigger, the other lesser ... (whom they term "Gogmagog") which was cut upon the ground at the Hawe (i.e. The Hoe) in Plymouth ...". These figures were first recorded in 1495 and were destroyed by the construction of the Royal Citadel in 1665.
Michael Drayton's Poly-Olbion preserves the tale as well:
Guardians of London
The Lord Mayor's account of Gogmagog says that the Roman Emperor Diocletian had thirty-three wicked daughters. He found thirty-three husbands for them to curb their wicked ways; they chafed at this, and under the leadership of the eldest sister, Alba, they murdered their husbands. For this crime they were set adrift at sea; they washed ashore on a windswept island, which they named "Albion"—after Alba. Here they coupled with demons and gave birth to a race of giants, whose descendants included Gog and Magog. The effigies of two giants were recorded in 1558 at the coronation of Elizabeth I and were described as "Gogmagot the Albion" and "Corineus the Britain". These, or similar figures, made of "wickerwork and pasteboard" made regular appearances in the Lord Mayor's Show thereafter, although they became known as Gog and Magog over the years. New figures were carved from pine in 1709 by Captain Richard Saunders and displayed in the Guildhall until 1940 when they were destroyed in an air-raid; they were replaced by David Evans in 1953.
Images of Gog and Magog (depicted as giants) are carried by Lord Mayors of the City of London in a traditional procession in the Lord Mayor's Show each year on the second Saturday of November.
In French literature
Under the influence of Geoffrey's Gogmagog (Goemagot), Gos et Magos, the French rendition of "Gog and Magog", were recast in the role of enemies defeated by the giant Gargantua, and taken prisoner to King Arthur who held court in London in Rabelais's Gargantua (1534). Gargantua's father Pantagruel also had an ancestor named Gemmagog, whose name was also a corruption of "Gog and Magog", influenced by the British legend.
In Irish folklore
Works of Irish mythology, including the Lebor Gabála Érenn (the Book of Invasions), expand on the Genesis account of Magog as the son of Japheth and make him the ancestor to the Irish through Partholón, leader of the first group to colonize Ireland after the Deluge, and a descendant of Magog, as also were the Milesians, the people of the 5th invasion of Ireland. Magog was also the progenitor of the Scythians, as well as of numerous other races across Europe and Central Asia. His three sons were Baath, Jobhath, and Fathochta.
Explanatory notes
References
Bibliography
British folklore
Characters in works by Geoffrey of Monmouth
English giants
English folklore
London folklore
Gog and Magog |
2737938 | https://en.wikipedia.org/wiki/Univel | Univel | Univel, Inc. was a joint venture of Novell and AT&T's Unix System Laboratories (USL) that was formed in October 1991 to develop and market the Destiny desktop Unix operating system, which was released in 1992 as UnixWare 1.0. Univel existed only briefly in the period between AT&T initially divesting parts of USL in 1991, and its eventual outright purchase by Novell, which completed in 1993, thereby acquiring rights to the Unix operating system. Novell merged USL and Univel into their new Unix Systems Group (USG).
See also
DOS Merge 3.0
DR DOS 6.0
References
1991 establishments in Utah
1993 disestablishments in Utah
American companies established in 1991
American companies disestablished in 1993
AT&T subsidiaries
Companies established in 1991
Computer companies established in 1991
Computer companies disestablished in 1993
Defunct computer companies of the United States
Novell
Unix history |
53009271 | https://en.wikipedia.org/wiki/TVPaint%20Animation | TVPaint Animation | TVPaint Animation (also known as TVPaint, TVP, Bauhaus Mirage or NewTek Aura) is a 2D paint and digital animation software package developed by TVPaint Developpement SARL, based in Lorraine (France). Originally released for the Amiga in 1991, the software gained support for other platforms with version 3.0 (1994). In 1999, the last Amiga version, 3.59, was released as a free download.
Notable uses
Feature films
My Dog Tulip, a 2009 American animated feature film by Paul Fierlinger and Sandra Fierlinger made with TVPaint Animation
Song of the Sea, a 2014 Irish Oscar-nominated animated feature film from Cartoon Saloon, directed by Tomm Moore
Mune: Guardian of the Moon, a 2015 French 3D computer-animated adventure fantasy film directed by Benoît Philippon and Alexandre Heboyan; 2D animated clips were made with TVPaint to coincide with the dominant computer-animation technique for the film.
The Peanuts Movie, a 2015 American 3D computer-animated comedy film (with 2D animation sequences animated with TVPaint) produced by Blue Sky Studios
The Breadwinner, a 2017 Oscar-nominated animated film by Cartoon Saloon, directed by Nora Twomey and executive produced by Angelina Jolie.
Kurt Cobain: Montage of Heck, a 2015 Emmy-nominated documentary directed by Brett Morgen with large animated sequences by a team led by Hisko Hulsing and animation by Stefan Nadelman
The Red Turtle, a 2016 Oscar-nominated animated feature film directed by Michael Dudok de Wit coproduced by numerous European studios and studio Ghibli, all the hand drawn animation was done in TVPaint.
Ethel & Ernest, a 2016 animated feature directed by Roger Mainwood, based on the book by Raymond Briggs, produced by Lupus Films.
Short films
Adam and Dog, a 2011 American Oscar-nominated animated short
How To Eat Your Apple, a 2011 animated short film made with TVPaint Animation, by Erick Oh.
Late Afternoon, a 2017 Irish Oscar-nominated animated short by Cartoon Saloon
We're Going On A Bear Hunt, a 2016 British animated television special (30 minutes) by Lupus Films.
Bird Karma, a 2018 American animated short by DreamWorks Animation
The Tiger Who Came To Tea, a 2019 British animated television special (23 minutes) by Lupus Films.
Kitbull, a 2019 Oscar-nominated American animated short produced by Pixar Animation Studios
Burrow, a 2020 American Oscar-nominated animated short produced by Pixar Animation Studios
TV and Web series
C'est Bon, a French animated series produced by Folimage
Simon's Cat, a cartoon and book series by British animator Simon Tofield. It was created using Adobe Flash, and TVPaint was used in the episodes Scaredy Cat, Snow Cat and in the Off to the vet special.
Gigglebug, a Finnish animated series (originally an iPad app made by Anima Boutique) that first aired in April 2016
PIG: The Dam Keeper Poems, a 2017 series based on the 2014 Oscar-nominated short The Dam Keeper directed by Erick Oh for Tonko House, which debuted on Hulu Japan on October 6, 2017
Samurai Jack, Season 5 (2017), the fifth and final season of Samurai Jack, an American animated series directed by Genndy Tartakovsky, which premiered on Adult Swim's Toonami.
Undone, animated series for Amazon Prime directed by Hisko Hulsing
Primal, animated series created and directed by Genndy Tartakovsky for Adult Swim.
See also
Flash animation
References
External links
TVPaint 3.59 for Amiga
Animation software
2D animation software
C++ software
Graphics software
Raster graphics editors
MacOS graphics software
Windows graphics-related software
Raster graphics editors for Linux
Software companies of France
Amiga software
Proprietary software |
1657742 | https://en.wikipedia.org/wiki/List%20of%20acronyms%3A%20I | List of acronyms: I | (Main list of acronyms)
I – (s) Iodine – One (in Roman numerals)
I0–9
I2WD or I2WD – (i) U.S. Intelligence and Information Warfare Directorate (CERDEC)
IA
ia – (s) Interlingua language (ISO 639-1 code)
IA – (s) Iowa (postal symbol)
IAAF – (i) International Association of Athletics Federations ("International Amateur Athletics Federation" from 1912 to 2001; renamed "World Athletics" in 2019)
IAAL – (i) I Am A Lawyer
IAAS – (p) Infrastructure-as-a-Service
IAB – (i) International Association for Biologicals
IACREOT – (i) International Association of Clerks, Recorders, Election Officials, and Treasurers
IAD – (i) Ion Assisted Deposition
IADS – (i) Integrated Air Defence System
IAEA – (i) International Atomic Energy Agency
IAF – (i) Industrial Areas Foundation
IAFG – (i) Information Assurance Focus Group
IAI – (i) International African Institute – Israel Aircraft Industries
IANA – (i) Internet Assigned Numbers Authority
IANAL – (i) I Am Not A Lawyer
IAPD
International Associates of Paediatric(sic) Dentistry
(i) Investment Adviser Public Disclosure
iShares Asia Pacific Dividend
IAQ – (i) Indoor air quality – Infrequently Asked Questions
IAS – (i) Image Assessment System
IASIP – (i) It's Always Sunny in Philadelphia
IATA – (a/i) International Air Transport Association
IATF – International Automotive Task Force
IATSE – International Alliance of Theatrical Stage Employees
IAU – (i) International Association of Universities – International Astronomical Union
IAUC – (i) International Astronomical Union Circular
IAW – (i) In Accordance With
IB
IB – (i) International Baccalaureate
IBA – (i) Important Bird Area
IBD – (i) Inflammatory Bowel Disease
IBDM – (i) Information Based Decision Making
IBDS – (i) Integrated Biological Detection System – International Business Development Specialist
IBM – (i) International Business Machines
ibo – (s) Igbo language (ISO 639-2 code)
IBRD – (i) International Bank for Reconstruction and Development (part of the World Bank)
IBS – (i) Irritable Bowel Syndrome
IC
IC
(s) Ice Crystals (METAR Code)
Iceland (FIPS 10-4 country code)
(i) Integrated Circuit – Inner Circle
ICANN – (a) Internet Corporation for Assigned Names and Numbers
ICAO – (i) International Civil Aviation Organization
ICBM – (i) Intercontinental Ballistic Missile
ICBN – (i) International Code of Botanical Nomenclature
ICC
(i) Integrated Command and Control
International Criminal Court
International Cricket Council
Interstate Commerce Commission
International Chamber of Commerce
ICD
(i) Implantable Cardioverter-Defibrillator
Initial Capabilities Document
International Classification of Diseases
ICE – (i/a) In case of emergency
ICES – (a/i) International Council for the Exploration of the Sea
ICF – (i) Intelligent Community Forum
ICHTHYS – (a) Iesous Christos, Theou Yios, Soter (Greek, "Jesus Christ, Son of God, Savior")
ICM
(i) Improved Conventional Munition
Integrated Collection Management
International Congress of Mathematicians
Iraq Campaign Medal
ICMP – (i) Internet Control Message Protocol
ICNAF – (i) International Commission for the Northwest Atlantic Fisheries (became NAFO in 1978)
ICNI – (i) Integrated Communication, Navigation, and Identification
ICO – (i) Intermediate Circular Orbit
ICP
(i) Inductively coupled plasma
Insane Clown Posse
ICPE – (a/i) International Conference on Pharmacoepidemiology, the annual conference of the International Society for Pharmacoepidemiology
ICRAF – (i) International Centre for Research in Agroforestry (a.k.a. World Agroforestry Centre since 2002)
ICRISAT – (i/a) International Crops Research Institute for the Semi-Arid Tropics
ICRL – (i) Individual Component Repair List
ICRS – (i) International Celestial Reference System
ICS
(i) In-Country Support
Intelligence and Communications Systems
Interim Contractor Support
ICSI – (p) Intracytoplasmic Sperm Injection
ICT – (i) Information and communication technologies
ICTH – (i) Island Closest to Heaven/Hell Final Fantasy VIII
ICV – (i) Infantry Carrier Vehicle
ICW
(i) In Co-ordination With
Indonesia Corruption Watch
Infectious and Chemotherapeutic Waste
Insulating Concrete Wall
Integral Crystalline Waterproofing
(p) Interactive Courseware
(i) International Championship Wrestling (defunct US professional wrestling promotion)
ICYMI – (i) In Case You Missed It
ICZN
(i) International Code of Zoological Nomenclature
International Commission on Zoological Nomenclature
ID
id – (s) Indonesian language (ISO 639-1 code)
ID
(s) Idaho (postal symbol)
(p) Identity
(s) Indonesia (FIPS 10-4 country code; ISO 3166 digram)
(i) Infantry Division
Intelligent Design
ID10T Error – (i) Idiot User Error (IT help desk inside joke)
IDA
(i) U.S. Institute for Defense Analyses
International Development Association (part of the World Bank)
IDC – (i) International Data Corporation
IDDI – (i) I don't do initialisms (see irony)
IDE
(i) Integrated Development Environment
Integrated Drive Electronics
IDeA – (a) Improvement and Development Agency, the former name of Local Government Improvement and Development in the United Kingdom
IDEA
(a) Individuals with Disabilities Education Act (U.S.)
International Data Encryption Algorithm
International Design Excellence Awards
IDF
(i) Israel Defense Forces
Intel Development Forum
International Dairy Federation
IDGAF – (i) I don't give a f**k
IDK – (i) I don't know
IDL – (i) Interface Definition Language
IDN – (s) Indonesia (ISO 3166 trigram)
ido – (s) Ido language (ISO 639-2 code)
IDR – (s) Indonesian rupiah (ISO 4217 currency code)
IDTS – (i) I don't think so
IE
i.e. – (i) id est (Latin, roughly "that is")
ie – (s) Interlingue language (ISO 639-1 code)
IE – (i) Indo-European languages – Internet Explorer – (s) Ireland (ISO 3166 digram) – Iced Earth
IEA – (i) International Energy Agency
IEC – (i) International Electrotechnical Commission
IED – (i) Improvised Explosive Device
IEEE – (p) Institute of Electrical and Electronics Engineers ("I triple-E")
IELTS – (a) International English Language Testing System
IER – (i) Information Exchange Requirement
IET – (i) Institution of Engineering and Technology
IET – (i) Initial Entry Training
IETF – (i) Internet Engineering Task Force
IEW – (i) Intelligence and Electronic Warfare
IF
IFAB – (i) International Football Association Board
IFAH – (i) International Federation for Animal Health
IFAP – (i) International Federation of Agricultural Producers, (i) International Fashion Academy Pakistan
IFC – (i) International Finance Corporation (part of the World Bank) – (a) Independent Film Channel
IFF – (i) Identification, Friend or Foe – Individually Fresh Frozen – International Flavors and Fragrances
IFFN – (i) Identification, Friend, Foe, or Neutral
IFO – (i) Identified Flying Object (see also UFO)
IFOR – (p) UN Implementation Force – (i) Institute for Operations Research – International Fellowship of Reconciliation
IFOV – (i) Instantaneous Field of View
IFPI – (i) International Federation of Phonographic Industries
IFR – (i) Instrument Flight Rules
IFRB – (i) International Frequency Registration Board
IFV – (i) Infantry Fighting Vehicle
IG
ig – (s) Igbo language (ISO 639-1 code)
IG – (i) Inspector General
IGFA – International Game Fish Association
IGNOU – (a) Indira Gandhi National Open University
IGS – (i) International Ground Station
IGY – (i) Israeli Gay Youth – International Geophysical Year
IH
IHÉS – (i) Institut des Hautes Études Scientifiques (French, Institute of Advanced Scientific Studies)
IHH – (i) Idiopathic Hypogonadotropic Hypogonadism
IHO – (i/a) International Hydrographic Organization
IHOP – (i/a) International House of Pancakes, the original name of this American restaurant chain ("EYE-hop")
IHR – (i) Institute for Historical Review
IHSI – Institutum Historicum Societatis Iesu (Jesuit Historical Institute)
II
ii – (s) Sichuan Yi language (ISO 639-1 code)
II – (s) Two (in Roman numerals)
IICA – (i) Inter-American Institute for Cooperation on Agriculture
IIE – (i) Innovative Interstellar Explorer
iii – (s) Sichuan Yi language (ISO 639-2 code)
III – (s) Three (in Roman numerals)
IIMSS – If I May Say So
IIRC – If I Remember Correctly, If I Recall Correctly
IISS – (i) International Institute for Strategic Studies
IIT – (i) Illinois Institute of Technology – Indian Institutes of Technology – Indiana Institute of Technology
IITM – (i) Image Institute of Technology & Management – Indian Institute of Technology Madras – Indian Institute of Tropical Meteorology
I/ITSEC – (a) Interservice/Industry Training, Simulation and Education Conference ("eye-it-sec" or "it-sec")
IJ
IJN – (i) Imperial Japanese Navy (World War II)
IJWP – (i) Interim Joint Warfare Publication
IK
ik – (s) Inupiaq language (ISO 639-1 code)
IKEA – (a) Ingvar Kamprad (the company's founder), Elmtaryd (the farm where he was born and raised), Agunnaryd (a village and parish near the farm)
iku – (s) Inuktitut language (ISO 639-2 code)
IM
IM
(s) Isle of Man (FIPS 10-4 territory code)
(i) Instant Message (Internet speech)
IMA
(i) Intermediate Maintenance Activity
Institute of Mathematics and Applications
IMBD – (i) International Migratory Bird Day
IMDb – Internet Movie Database
IME - (i) In My Experience
IMF
(i) id music file/id's music format (audio file format)
Impossible Missions Force, the protagonist organization in the Mission: Impossible media franchise
International Monetary Fund (the most common use for the initialism)
IMHO – (i) In My Humble/Honest Opinion, cf. IMO
IMIST AMBO – used to facilitate an informed clinical handover between paramedics, emergency department staff or other healthcare staff, similar to SBAR: Identification, Method of Injury, Index of concern, Signs, Treatment, Allergies, Medication, Background, and Other
IML – (i) International Mister Leather
IMLAST – (i) International Movement for Leisure Activities in Science and Technology – see MILSET
IMNM – (i) Immune-mediated necrotizing myopathy, a rare but serious ADR of statins
IMNSHO – (i) In My Not So Humble Opinion
IMO (in my opinion) – (i) In My Opinion, also "imo", cf. IMHO
IMPAC – (a) International Merchant Purchase Authorization Card
IMPACT – (a) International Medical Products Anti-Counterfeiting Taskforce
IMPATT – (p) IMPact Avalanche Transit Time diode
IMPT – (i) Institute of Maxillofacial Prosthetists and Technologists
IMR – (i) Inzhenernaya Mashina Razgrazhdeniya (Russian Инженерные Машины Разграждения, "Engineer Vehicle Obstacle-Clearing") †
IMRI
(i) Industrial Membrane Research Institute
Information Management Research Institute
Institut pour le management de la recherche et de l'innovation (French, "Institute for the Management of Research and Innovation")
International Market Research Information
Intraoperative Magnetic Resonance Imaging
IMRO
Internal Macedonian Revolutionary Organization
Irish Music Rights Organisation
IMRL – (i) Individual Material Readiness List
IMS
(i) Information Management System
International Meat Secretariat
Ion Mobility Spectrometer
IMSA
(i) Illinois Mathematics and Science Academy
International Mind Sports Association
International Motor Sports Association
IMsL – (a) International Ms. Leather
IMT – (i) Integrated Management Tool
IMU – (i) International Mathematical Union
IN
In – (s) Indium
IN – (s) India (FIPS 10-4 country code; ISO 3166 digram) – Indiana (postal symbol) – Infantry
ina – (s) Interlingua language (ISO 639-2 code)
INA – (p) Immigration and Nationality Act
INAS – (a) International Near-Earth Asteroid Survey
INC –
(i) Iglesia ni Cristo (Filipino, "Church of Christ")
Indian National Congress
International Network of Crackers
Iraqi National Congress
INCITS – (p) InterNational Committee for Information Technology Standards
ind – (s) Indonesian language (ISO 639-2 code)
IND – (s) India (ISO 3166 trigram)
inet – (a) the Internet
INDIGO – (p) (U.S.) INtelligence DIvision Gaming Operation
INFORMS – (a) (U.S.) Institute for Operations Research and the Management Sciences
INFOSEC – (p) Information Security
INLA – (i) Irish National Liberation Army
INMARSAT – (p) International Maritime Satellite organization
INR – (s) Indian rupee (ISO 4217 currency code)
INRI – (a/i) Iesus Nazarenus Rex Iudæorum (Latin, "Jesus of Nazareth, King of the Jews")
INRIA – (p) Institut national de recherche en informatique et en automatique
INS – (i) Immigration and Naturalization Service
InSAR – (p) Interferometric synthetic aperture radar
INST – (p) Information Standards and Technology
in trans. – (p) in transitu (Latin, "in transit")
INTSUM – (p) Intelligence Summary
INTERFET – (p) International Force for East Timor
IO
io – (s) Ido language (ISO 639-1 code)
IO – (s) British Indian Ocean Territory (ISO 3166 digram; FIPS 10-4 territory code)
IOC
(i) Initial Operational Capability
Intergovernmental Oceanic Commission
International Olympic Committee
International Ornithological Congress
IOHK – (i) Input Output Hong Kong
IOLTA – (a) Interest on Lawyer Trust Accounts (charitable funding mechanism, especially for legal aid)
IOM
(i) Iowa, Ohio, Michigan (soybean origin)
Institute of Medicine
International Organisation for Migration
Isle of Man
IONA – (i) Islands of the North Atlantic (alternate name for Great Britain, Ireland, the Isle of Man, and related islands)
IOT – (s) British Indian Ocean Territory (ISO 3166 trigram)
IOT&E – (i) Initial Operational Test and Evaluation
IOU – (p) "I Owe You" (Promissory note)
IP
IP –
(s) Clipperton Island (FIPS 10-4 territory code)
(i) Initial Point – Intellectual Property – Internet Protocol
IPA – (i) India Pale Ale – International Phonetic Alphabet – Isopropyl alcohol
IPB – (i) Intelligence Preparation of the Battlefield
IPCC –
(i) Intergovernmental Panel on Climate Change
Independent Police Complaints Commission
ipk – (s) Inupiaq language (ISO 639-2 code)
IPL – (i) Inferior Parietal Lobule
IPN – (i) Instytut Pamieci Narodowej (Polish, "Institute for National Memory")
IPO – (i) Initial public offering
IPR –
(i) In Progress Review
Intellectual Property Rights
Intelligence Production Requirement
IPT – (i) Integrated Product/Project Team
IPTS – (i) Institute for Prospective Technological Studies
IPTV – (p) Internet Protocol TeleVision
IPW – (i) Interrogation of Prisoners of War
IQ
iq – (i) idem quod (Latin, "the same as")
IQ –
(i) Intelligence Quotient
(s) Iraq (ISO 3166 digram)
IQD – (s) Iraqi dinar (ISO 4217 currency code)
IR
Ir – (s) Iridium
IR
(i) Infrared
Intelligence requirement (military)
(s) Iran (FIPS 10-4 country code; ISO 3166 digram)
Iran Air (IATA alpha code) see entry for more
IRA
(i) Individual retirement account
(i) Irish Republican Army (any of several armed groups dedicated to Irish republicanism)
Internet Research Agency (used in the Mueller report)
IRAD – (a) Internal Research and Development
IRAN – (i) Inspect and Repair As Necessary
IRAS – (p) Infrared Astronomical Satellite
IRB – (i) International Rugby Board, a former name of World Rugby
IRBM – (i) Intermediate Range Ballistic Missile
IRC
(i) International Red Cross
International reply coupon
(i) Internet Relay Chat
IREA – (i) Intermountain Rural Electric Association
IRL
(s) Ireland (ISO 3166 trigram)
Industrial Research Limited
Indy Racing League, former name of the motorsport governing body now known as INDYCAR
(i) In real life
IRM
(i) Illinois Railway Museum
Inzhenernaya Razvedyvatelnaya Mashina (Russian Инженерная Разведывательная Машина, "Engineer Reconnaissance Vehicle") †
IRN – (s) Iran (ISO 3166 trigram)
IRINN – (i) Islamic Republic of Iran News Network
IRO – (i) International Refugee Organisation
IRQ – (s) Iraq (ISO 3166 trigram)
IRR – (s) Iranian rial (ISO 4217 currency code)
IRS – (i) U.S. Internal Revenue Service
IRSN – (i) Institut de radioprotection et de sûreté nucléaire (French, "Institute for Radiation Protection and Nuclear Safety")
IRST – (i) Infra-red Search and Track
IS
is – (s) Icelandic language (ISO 639-1 code)
IS
(s) Iceland (ISO 3166 digram)
Israel (FIPS 10-4 country code)
ISA
(i) Individual Savings Account
Industry Standard Architecture
Instruction Set Architecture
International Seabed Authority
ISAAA – (i) International Service for the Acquisition of Agri-biotech Applications
ISAF
(a) International Sailing Federation
International Security Assistance Force
ISAR – (a) Inverse SAR (Synthetic Aperture Radar)
ISB – (i) Intermediate Staging Base
ISBN – (i) International Standard Book Number (ISO 2108)
ISCCP – (i) International Satellite Cloud Climatology Project
ISCII – (a) Indian Standard Code for Information Interchange
ISDN – Integrated Services Digital Network
ISEF – (a) Intel International Science and Engineering Fair
ISEN – (i) Internet Search Environment Number
ISF – (i) Internal Security Force
ISI
Indian Standards Institute, former name of the Bureau of Indian Standards
Inter-Services Intelligence (Pakistan)
Islamic State of Iraq, an umbrella organization for several Iraqi insurgent groups
ISIL – (i/a) Islamic State of Iraq and the Levant
ISIS
(i/a) Institute for Science and International Security
Islamic State of Iraq and Syria
ISK – (s) Icelandic krona (ISO 4217 currency code)
isl – (s) Icelandic language (ISO 639-2 code)
ISL – (s) Iceland (ISO 3166 trigram)
ISLN –
(i) International Standard Lawyer Number (used initially by LexisNexis Martindale-Hubbell)
(i) International School Libraries Network (HQ in Singapore)
(s) Isilon Systems, the NASDAQ symbol
ISM
(i) Industrial, scientific or medical
(p) Interstellar medium
ISMB – (i) International Society of Matrix Biologists
ISMC – (i) Intelligent Sounding Meaningless Conversation
ISO
(i) In Search Of
(s) International Organization for Standardization (from the Greek ίσος, isos, meaning "equal")
ISOGP – (i) Indian Society of Orthodontics for General Practitioners
ISP
(i) International Standardized Profile
Internet Service Provider
Information Systems Professional
ISPE – (a/i) International Society for Pharmacoepidemiology
ISR
(i) Intelligence, Surveillance and Reconnaissance
(s) Israel (ISO 3166 trigram)
ISRO – (i) Indian Space Research Organisation
ISS
(i) International Shorebird Survey (North America)
International Space Station
ISSCR – (i) International Society for Stem Cell Research
ISSN – (i) International standard serial number (ISO 3297)
IST
(i) Information System Technology
UCF Institute for Simulation and Training
ISTAR – (i) Intelligence, Surveillance, Target Acquisition, and Reconnaissance
ISTC – (i) Institute of Scientific and Technical Communications (UK)
ISTS – (i) Intel Science Talent Search
ISU
(i) Integrated Sight Unit
International Skating Union
IT
it – (s) Italian language (ISO 639-1 code)
IT
(i) InferoTemporal (neurophysiology)
Information Technology
(s) Italy (FIPS 10-4 country code; ISO 3166 digram)
ita – (s) Italian language (ISO 639-2 code)
ITA – (s) Italy (ISO 3166 trigram)
ITAG – (i) Inertial Terrain-Aided Guidance
ITALY – (a) I'll Truly Always Love You
ITAS – (i) Improved Target Acquisition Sight
ITC – (i) InferoTemporal Cortex (neurophysiology)
ITCZ – (i/p) Intertropical Convergence Zone
ITEC – (a) Information Technology Exposition & Conference
ITEMS – (a) Interactive Tactical Environment Management System
ITER – (a) International Thermonuclear Experimental Reactor
ITF – (a) International Tennis Federation
ITK – (a) In The Know
ITN – (i) Independent Television News (British)
ITS
(i) Incompatible Time-sharing System
Individual Training Standards
ITT – (i) International Telephone and Telegraph (U.S.)
ITTO – (i) International Tropical Timber Organization
ITU
(i) Intent-to-Use
International Telecommunication Union (International Telegraph Union 1865–1932)
IU
iu – (s) Inuktitut language (ISO 639-1 code)
IUCN – (i) International Union for the Conservation of Nature and Natural Resources (World Conservation Union)
IUD – (i) Intrauterine device
IUI – (i) Intrauterine insemination
IUPAC – (a) International Union of Pure and Applied Chemistry (pronounced "eye-yoo-pac")
IUPUI – (a) Indiana University – Purdue University Indianapolis
IUSS – (i) Integrated Undersea Surveillance System – Integrated Unit Simulation System – International Union of Soil Sciences
IV
IV – (s) Côte d'Ivoire (FIPS 10-4 country code; from Ivory Coast) – Four (in Roman numerals) – (i) IntraVenous (as in intravenous drip or intravenous therapy)
IVD – (i) Internet Video Device, intra vas device
Iveco – (p) Industrial Vehicles Corporation
IVF – (i) In Vitro Fertilisation
IVI – (i) Interchangeable Virtual Instrument
IVIS – (i/a) Inter-Vehicle Information System
IVL – (i) Inter-Visibility Line
IVM – (i) In Vitro [Oocyte] Maturation
IVO – (i) In Vicinity Of
IW
IW – (i) Impact Wrestling
IW – (i) Information Warfare
IWARS – (p) Infantry Warrior Simulation
IWC – (i) International Whaling Commission
IWT – (i) Illegal Wildlife Trade
IWW – (i) Industrial Workers of the World
IX
IX – (s) Nine (in Roman numerals)
IXT – (s) IntraText digital library; IntraText lexical hypertext
IY
IY – (s) Saudi–Iraqi neutral zone (FIPS 10-4 territory code; obsolete 1993)
IYKYK – (i) If You Know You Know
IZ
IZ – (s) Iraq (FIPS 10-4 country code)
References
Acronyms I |
1858505 | https://en.wikipedia.org/wiki/ARexx | ARexx | ARexx is an implementation of the Rexx language for the Amiga, written in 1987 by William S. Hawes, with a number of Amiga-specific features beyond standard REXX facilities. Like most REXX implementations, ARexx is an interpreted language. Programs written for ARexx are called "scripts", or "macros"; several programs offer the ability to run ARexx scripts in their main interface as macros.
ARexx can easily communicate with third-party software that implements an "ARexx port". Any Amiga application or script can define a set of commands and functions for ARexx to address, thus making the capabilities of the software available to the scripts written in ARexx.
ARexx can direct commands and functions to several applications from the same script, thus offering the opportunity to mix and match functions from the different programs. For example, an ARexx script could extract data from a database, insert the data into a spreadsheet to perform calculations on it, then insert tables and charts based on the results into a word processor document.
History
ARexx was first created in 1987, developed for the Amiga by William S. Hawes. It is based on the REXX language described by Mike Cowlishaw in the book The REXX Language: A Practical Approach to Programming. ARexx was included by Commodore with AmigaOS 2.0 in 1990, and has been included with all subsequent AmigaOS releases. This later version of ARexx follows the official REXX language closely; Hawes was later involved in drafting the ANSI standard for REXX.
ARexx is written in 68000 assembly language and therefore cannot run at full speed on newer PowerPC CPUs; it has never been rewritten for them and is still missing from MorphOS 3.0. William Hawes is no longer involved in development of Amiga programs and no other Amiga-related firm is financing new versions of ARexx. Notwithstanding this fact, the existing version of ARexx continues to be used, although it is not distributed with MorphOS.
From the ARexx manual: "ARexx was developed on an Amiga 1000 computer with 512k bytes of memory and two floppy disk drives. The language prototype was developed in C using Lattice C, and the production version was written in assembly-language using the Metacomco assembler. The documentation was created using the TxEd editor, and was set in TeX using AmigaTeX. This is a 100% Amiga product."
Characteristics
ARexx is a programming language that can communicate with other applications. Using ARexx, for example, one could request data from a database application and send it to a spreadsheet application. To support this facility, an application must be "ARexx compatible" by being able to receive commands from ARexx and execute them. A database program might have commands to search for, retrieve, and save data — the MicroFiche Filer database has an extensive ARexx command set. A text editor might have ARexx commands corresponding to its editing command set — the Textra editor supplied with JForth can be used to provide an integrated programming environment. The AmigaVision multimedia presentation program also has an ARexx port built in and can control other programs using ARexx.
ARexx can increase the power of a computer by combining the capabilities of various programs. Because of the popularity of a stand-alone ARexx package, Commodore included it with Release 2 of AmigaDOS.
Like all REXX implementations, ARexx uses typeless data representation. Other programming languages make distinctions between integers, floating point numbers, strings, characters, vectors, etc. In contrast, REXX systems treat all data as strings of characters, making it simpler to write expressions and algorithms.
As is often the case in dynamically scoped languages, variables are not declared before use; they come into being the first time they are used.
ARexx scripts benefit from an error handling system which monitors execution and responds accordingly. The programmer can choose to suspend and resume the execution of the program as needed.
The ARexx command set is simple, but in addition to the commands there are the functions of its Amiga reference library (rexxsyslib.library). It is also easy to add other libraries or individual functions. ARexx scripts can also be invoked as functions from other ARexx scripts. Any Amiga program which has an ARexx port built in can share its functions with ARexx scripts.
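As a brief illustration (the port name 'MYEDITOR' and its SAVE command are assumed for the example, since every application defines its own command set), a script switches its command destination to a host's ARexx port with the ADDRESS instruction and can inspect the return code afterwards:
/* SendSave.rexx -- illustrative only; port name and command are assumptions */
ADDRESS 'MYEDITOR'
'SAVE'
IF RC ~= 0 THEN SAY "The host application reported an error."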
Examples of ARexx solutions to common problems
Implementing new features and capabilities via scripts
If the end user is using a program which builds animations by joining various bitmap image files but which lacks image processing capabilities, they could write an ARexx script which performs these actions (a simplified example script follows the list):
ARexx locates the image files in their directories
ARexx loads the first image
ARexx loads the paint program
The image is loaded into the paint program, which performs modifications to the file
The modified image is stored in another directory
ARexx repeats the procedure for every image in the directory
The paint program is closed and the animation program is loaded
The animation is built
The animation is saved in its directory
The animation program is closed
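A simplified sketch of such a script is shown below; the port name 'PAINTPROG', its LOAD, SHARPEN and SAVE commands, and the file naming are assumptions for illustration rather than the command set of any particular paint program:
/* ProcessFrames.rexx -- illustrative sketch only */
ADDRESS 'PAINTPROG'                       /* hypothetical ARexx port of the paint program */
DO i = 1 TO 30
   frame = 'Pics/frame.' || RIGHT(i, 3, '0')
   'LOAD' frame                           /* hypothetical host commands */
   'SHARPEN'
   'SAVE' 'Work/frame.' || RIGHT(i, 3, '0')
END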
Avoiding repetitive procedures
EqFiles.rexx is a well-known example of a simple ARexx script written to automate repetitive and boring procedures. This script uses the ALeXcompare program to compare files, and then finds all duplicates in a set of files and returns output by highlighting any results in a different color.
Expand AmigaOS capabilities
One of the main features of ARexx is that it can expand the capabilities of AmigaOS by adding procedures the operating system lacks. For example, a simple ARexx program could be written to print a warning message on the screen, or play an audio alert signal, if a certain Amiga program stops, faults or has finished its scheduled job.
The following script is a minimal ARexx script that displays warnings depending on events that take place.
/* Alarm.rexx */
ARG event
IF event = 0 THEN EXIT
IF event = 1 THEN SAY "Program has ended unexpectedly"
IF event = 2 THEN SAY "Program has finished its job"
IF event = 3 THEN SAY "Cannot find data in selected directory"
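Assuming the script is saved as Alarm.rexx in the current directory, it could be started from an AmigaDOS Shell with the RX command, passing one of the event codes the script defines as its argument:
RX Alarm.rexx 2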
See also
REXX
References
Notes
External links
Beginning ARexx Tutorial
Command and Function Reference
Design Tool
Amiga APIs
Amiga development software
AmigaOS 4 software
AmigaOS
MorphOS
Scripting languages
CBM software
Assembly language software
Rexx
Inter-process communication |
43003632 | https://en.wikipedia.org/wiki/Anaconda%20%28Python%20distribution%29 | Anaconda (Python distribution) | Anaconda is a distribution of the Python and R programming languages for scientific computing (data science, machine learning applications, large-scale data processing, predictive analytics, etc.), that aims to simplify package management and deployment. The distribution includes data-science packages suitable for Windows, Linux, and macOS. It is developed and maintained by Anaconda, Inc., which was founded by Peter Wang and Travis Oliphant in 2012. As an Anaconda, Inc. product, it is also known as Anaconda Distribution or Anaconda Individual Edition, while other products from the company are Anaconda Team Edition and Anaconda Enterprise Edition, both of which are not free.
Package versions in Anaconda are managed by the package management system conda. This package manager was spun out as a separate open-source package as it ended up being useful on its own and for things other than Python. There is also a small, bootstrap version of Anaconda called Miniconda, which includes only conda, Python, the packages they depend on, and a small number of other packages.
Overview
Anaconda distribution comes with over 250 packages automatically installed, and over 7,500 additional open-source packages can be installed from PyPI as well as the conda package and virtual environment manager. It also includes a GUI, Anaconda Navigator, as a graphical alternative to the command-line interface (CLI).
The big difference between conda and the pip package manager is in how package dependencies are managed, which is a significant challenge for Python data science and the reason conda exists.
Before version 20.3, when pip installed a package, it automatically installed any dependent Python packages without checking if these conflict with previously installed packages. It would install a package and any of its dependencies regardless of the state of the existing installation. Because of this, a user with a working installation of, for example, TensorFlow, could find that it stopped working having used pip to install a different package that requires a different version of the dependent numpy library than the one used by TensorFlow. In some cases, the package would appear to work but produce different results in detail. While pip has since implemented consistent dependency resolution, this difference accounts for a historical differentiation of the conda package manager.
In contrast, conda analyses the current environment including everything currently installed, and, together with any version limitations specified (e.g. the user may wish to have TensorFlow version 2.0 or higher), works out how to install a compatible set of dependencies, and shows a warning if this cannot be done.
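For example (the package names and version bound here are illustrative), an environment can be created with a constrained TensorFlow and another package added to it later, with conda solving for a mutually compatible set each time:
conda create -n tf-env "tensorflow>=2.0"
conda install -n tf-env pandas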
Open source packages can be individually installed from the Anaconda repository, Anaconda Cloud (anaconda.org), or the user's own private repository or mirror, using the conda install command. Anaconda, Inc. compiles and builds the packages available in the Anaconda repository itself, and provides binaries for Windows 32/64-bit, Linux 64-bit and macOS 64-bit. Anything available on PyPI may be installed into a conda environment using pip, and conda will keep track of what it has installed itself and what pip has installed.
Custom packages can be made using the conda build command, and can be shared with others by uploading them to Anaconda Cloud, PyPI or other repositories.
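A sketch of that workflow, assuming a recipe directory named my-package/ and an Anaconda Cloud account, might look like the following (the package file path is illustrative and depends on the local build configuration):
conda build my-package/
anaconda login
anaconda upload ~/conda-bld/noarch/my-package-1.0-0.tar.bz2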
The default installation of Anaconda2 includes Python 2.7 and Anaconda3 includes Python 3.7. However, it is possible to create new environments that include any version of Python packaged with conda.
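For instance, a separate environment pinned to an older interpreter can be created and activated (the environment name is arbitrary):
conda create -n legacy-py python=2.7
conda activate legacy-py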
Anaconda Navigator
Anaconda Navigator is a desktop graphical user interface (GUI) included in Anaconda distribution that allows users to launch applications and manage conda packages, environments and channels without using command-line commands. Navigator can search for packages on Anaconda Cloud or in a local Anaconda Repository, install them in an environment, run the packages and update them. It is available for Windows, macOS and Linux.
The following applications are available by default in Navigator:
JupyterLab
Jupyter Notebook
QtConsole
Spyder
Glue
Orange
RStudio
Visual Studio Code
Conda
Conda is an open source, cross-platform, language-agnostic package manager and environment management system that installs, runs, and updates packages and their dependencies. It was created for Python programs, but it can package and distribute software for any language (e.g., R), including multi-language projects.
The conda package and environment manager is included in all versions of Anaconda, Miniconda, and Anaconda Repository.
Anaconda Cloud
Anaconda Cloud is a package management service by Anaconda where users can find, access, store and share public and private notebooks, environments, and conda and PyPI packages. Cloud hosts useful Python packages, notebooks and environments for a wide variety of applications. Users do not need to log in, or even to have a Cloud account, to search for public packages or to download and install them.
Users can build new packages using the Anaconda Client command line interface (CLI), then manually or automatically upload the packages to Cloud.
See also
List of software package management systems
Package manager
Pip (package manager)
Setuptools
References
External links
Anaconda Cloud
Package management systems
Python (programming language) software |
8868396 | https://en.wikipedia.org/wiki/IGUANA%20Computing | IGUANA Computing | Independent Group of Unix-Alikes and Networking Activists (IGUANA) are developers of the Wombat system.
IGUANA is also an operating system (OS) personality that provides a set of services for memory management and process protection. Iguana is designed as a base for the provision of operating system services for embedded systems. Among others, it provides the underlying OS for Wombat, a version of paravirtualised Linux designed to provide legacy support for embedded systems.
Wombat works together with Pistachio, Kenge and Iguana.
It is also used, along with Pistachio, to create Qualcomm's REX OS designed for cell phones.
External links
Iguana L4 Development using Iguana
Iguana Linux Information
Kernel: NICTA::Pistachio-embedded
L4 Based Operating Systems
Project:Iguana
Virtualised os: wombat
Wombat
Embedded operating systems
Unix variants |
8213797 | https://en.wikipedia.org/wiki/Brennan%20Carroll | Brennan Carroll | Brennan Carroll (born March 20, 1979) is the offensive coordinator for the Arizona Wildcats. His nickname is BC. His father is NFL coach Pete Carroll.
High school career
Carroll played high school football at Saratoga High School in Saratoga, California.
College career
Carroll played college football as a tight end at the University of Pittsburgh (1999–2001) after transferring from the University of Delaware (1997).
Coaching career
In 2002, Carroll joined the USC Trojans football team staff as a graduate assistant under his father, Pete Carroll, who was then head coach. During his first season he worked with offense and special teams. During his second season on staff, he worked with the tight ends. In 2004, he became the full-time assistant coach in charge of tight ends. In 2007, in addition to his work as an assistant coach, he became the team's Recruiting Coordinator. Also in 2007, Trojans tight end Fred Davis, who Brennan coached, won the John Mackey Award, which goes to the nation's top tight end. In February 2010, it was announced that the recently hired head coach Lane Kiffin would not retain Brennan Carroll.
Brennan Carroll did not coach during the 2010 college or NFL seasons.
On December 22, 2010, it was announced that Carroll would join Al Golden's staff at the University of Miami where he assumed the role of TE coach and recruiting coordinator.
On January 10, 2013, Carroll moved to WR coach, while retaining the recruiting coordinator title, after the Hurricanes hired Mario Cristobal as their Associate HC and TE Coach.
On February 9, 2015, Carroll joined an NFL staff for the first time in his career after spending 13 years in the college ranks, reuniting with his father Pete. Carroll assumed the role of assistant offensive line coach.
Prior to the 2020 NFL season, Carroll received a promotion and was named run game coordinator.
On January 1, 2021, Carroll joined the Arizona Wildcats team staff as offensive coordinator and O-line coach, reuniting with head coach Jedd Fisch, who had worked with Carroll at Miami during Carroll's stint as recruiting coordinator and tight ends coach.
Personal life
Carroll's father is Pete Carroll and his brother Nate is currently the wide receivers coach for the Seattle Seahawks.
Brennan Carroll and his wife Amber have one son, Dillon Brennan Carroll.
References
External links
Brennan Carroll – USC Athletic Department bio
Brennan Carroll – HurricaneSports Bio
1979 births
Living people
People from Saratoga, California
American football tight ends
Delaware Fightin' Blue Hens football players
Pittsburgh Panthers football players
USC Trojans football coaches
Miami Hurricanes football coaches
Players of American football from California
Seattle Seahawks coaches
Sportspeople from Santa Clara County, California |
31217397 | https://en.wikipedia.org/wiki/Lists%20of%20computers | Lists of computers | Lists of computers cover computers, or programmable machines, by period, type, vendor and region.
Early computers
List of vacuum tube computers
List of transistorized computers
List of early microcomputers
List of computers with on-board BASIC
List of computers running CP/M
More recent computers
List of home computers
List of home computers by video hardware
List of fastest computers
Lists of microcomputers
Lists of mobile computers
List of fictional computers
Vendor-specific
HP business desktops
List of Macintosh models grouped by CPU type
List of Macintosh models by case type
List of TRS-80 and Tandy-branded computers
List of VAX computers
Regional
List of British computers
List of computer systems from Croatia
List of computer systems from Serbia
List of computer systems from Slovenia
List of computer systems from the Socialist Federal Republic of Yugoslavia
List of Soviet computer systems
See also |
46556301 | https://en.wikipedia.org/wiki/Dark%20media | Dark media | Dark media are a type of media outlined by American philosopher Eugene Thacker to describe technologies that mediate between the natural and supernatural, most commonly found in the horror genre.
Overview
Discussed at length in the essay of the same name, Eugene Thacker writes that dark media are media that function too well. Thacker writes that "dark media have, as their aim, the mediation of that which is unavailable or inaccessible to the senses, and thus that which we are normally "in the dark" about." Typically in works of horror, dark media are relatively commonplace media that show more of the world than is expected, with the dark medium showing what lies beyond the possibility of human sense. Dark media are significant in their ability to breach the typically unbridgeable gap between objects being mediated. Thacker's examples include the films of Georges Méliès and Peter Tscherkassky, J-horror film directors like Kiyoshi Kurosawa, the horror films of Kenneth Anger and Dario Argento, The Twilight Zone TV series, and the "occult detective" writing linked to the Society for Psychical Research and authors such as William Hope Hodgson, Algernon Blackwood, and Sheridan Le Fanu. Referencing the philosophies of religion in Augustine, Immanuel Kant, William James, and Georges Bataille, Thacker shows how dark media blur the boundary between the natural and supernatural, and bear comparison to accounts of mystical experience. Thacker references François Laruelle's "non-philosophy" and Siegfried Zielinski's media archaeology to show how dark media point to the limits of human perception and knowledge. As shown in the J-horror film Ring, dark media can create a point of contact between the natural and the supernatural. In Ring, the dark medium of the VHS cassette makes it possible for antagonist Sadako Yamamura to cross the threshold of a TV, and subsequently kill those who have viewed the videotape. Further examples of dark media are found in Thacker's book In the Dust of This Planet.
Examples of dark media
Machine from Long Distance Wireless Photography and other films of Georges Méliès.
The "electric pentagram" in Carnacki the Ghost-Finder, by William Hope Hodgson (1913).
Machine in H.P. Lovecraft's short story "From Beyond" (1934).
Fortune-telling machine from The Twilight Zone episode "Nick of Time".
Video & audio recording equipment in Poltergeist (1982).
The movie theater in Lamberto Bava & Dario Argento's film Demons (1985)
Videotape in Hideo Nakata's film Ring (1998).
Webcam in Kiyoshi Kurosawa's film Pulse (2001).
References
Theories
Supernatural |
3199237 | https://en.wikipedia.org/wiki/OpenArena | OpenArena | OpenArena is a free and open-source video game. It is a first-person shooter (FPS), and a video game clone of Quake III Arena.
Development
The OpenArena project was established on August 19, 2005, one day after the id Tech 3 source code was released under the GNU GPL-2.0-or-later license.
Its official website includes downloads for Microsoft Windows, Linux, and macOS operating systems. Thanks to third-party efforts, it is also available from the default repositories of a number of open-source operating systems, including Debian, Fedora, FreeBSD, OpenBSD, Gentoo, Mandriva, Arch and Ubuntu. It is also in development for the Maemo mobile operating system. Ports for Raspberry Pi, Android and iOS are available, too.
An assets "reboot" named "OA3" is planned, with the aim of steering the art style away from the classic space and gothic themes to "something more manga inspired", while also raising its quality and performances standards.
Gameplay
OpenArena's gameplay attempts to follow Quake III Arena: score frags to win the game using a balanced set of weapons each designed for different situations, with just minor changes to the rules enabled by default (like awarding a character for "pushing" another character to their death).
Each match happens in an "arena": a map where players try to kill each other; some arenas are designed for Capture the Flag and similar gametypes, so are built with two bases (usually identical, apart from the colors), for the two teams.
The Quake III style of play is very fast and requires skill to be played successfully online. The gameplay is arcade-style, allowing players to move quickly through maps thanks to "bouncepads", "accelerator pads", "teleporters" and advanced techniques such as "strafe jumping" and "rocket jumping". Some arenas include traps.
The game can be played online (against other human players) or offline (against computer-controlled characters known as bots). "Singleplayer" mode allows players to play a predefined series of deathmatches, unlocking a new "tier" of four maps after completing the previous one, or to create custom matches in any game type through the "skirmish" mode.
Game modes
As of OpenArena 0.8.8, maps can be played in at least one of these gametypes: Deathmatch (called as Free For All in the game), Team Deathmatch, Tournament, Capture The Flag, One Flag CTF, Harvester, Overload, Elimination, CTF Elimination, Last Man Standing, Double Domination and Domination:
"Free For All" is classic Deathmatch, where players are all pitted against each other, and wins the player with the highest score at the end of the match, or the one with the highest number of frags when the time limit is reached.
"Team Deathmatch" is a team-based variation of Deathmatch, with two teams of players being pitted against the other.
"Tournament" chooses two players and makes them duel, in a classic "winner stays, loser gets out" setting.
"Capture The Flag" is a team-based mode where each team spawns in a base which contains a flag. They must capture the enemy team's flag while keeping their own flag from being captured.
"One Flag CTF" is a variation of Capture The Flag where a white flag spawns in the middle of the map, and the teams must bring it to the enemy base, instead of taking the enemy's flag.
"Harvester" is another team-based mode played in some Capture The Flag scenarios. Each team spawns with a Skull Receptacle, and there's a Skull Generator at the middle of the map. By fragging enemies, skulls appear in this generator. The players must collect their enemies' skulls and bring them to the enemy base in order to score.
"Overload" has both teams' bases spawn a crystal. The players of each team must travel to the enemy base and destroy this crystal in order to win.
"Elimination" is a team based mode where both players must frag all of their enemies in a "Last Man Standing" match of sorts. The team with the highest number of points win the match.
"CTF Elimination", as its name implies, is a mix of Capture The Flag and Elimination. Not only do the teams score by fragging all of the enemy team's players, but they also can win rounds by capturing their flags.
"Last Man Standing" is a non-team variation of Elimination where all of the players start with a finite number of lives and frag each other until only one of them remains.
"Double Domination" is a team-based game which features two control points, and the players must hold them during some seconds in order to score points.
"Domination" is also team-based, and has control points scattered throughout the maps; the players must secure these points in order to rack up points for their teams.
Reception and impact
The game is one of the most popular open-source first-person shooters, particularly among fans of the original Quake III. Negative criticism has mostly been limited to the game feeling somewhat incomplete, which some say detracts from long-term play. OpenArena has also been praised for its portability and ability to run on old hardware. Internet play has also been praised, as well as the number of players found on the average OpenArena server. The game has also been credited for its creativity in bot design, rather than sticking to more traditional tropes. OpenArena is also available on macOS, with one reviewer praising it as one of the best free games for the Mac, noting that it is only slightly behind contemporary commercially funded games for the PC and consoles in terms of graphics and artificial intelligence.
OpenArena has been used as a platform for scholarly work in computer science. Some examples include streaming graphics from a central server, and visualizing large amounts of network data.
See also
List of open-source games
List of open-source first-person shooters
Freedoom, a video game clone of Doom (1993 video game)
Linux gaming
References
External links
OpenArena at Indie DB
OpenArena at the Linux Game Database
OpenArena at Linux Links
2005 video games
AmigaOS 4 games
Fangames
First-person shooters
Linux games
Multiplayer online games
Shooter video games
Open-source video games
MacOS games
Quake III Arena mods
Upcoming video games
Video game clones
Windows games
Free and open-source Android software
Ouya games |
72032 | https://en.wikipedia.org/wiki/Commodore%20128 | Commodore 128 | The Commodore 128, also known as the C128, C-128, C= 128, is the last 8-bit home computer that was commercially released by Commodore Business Machines (CBM). Introduced in January 1985 at the CES in Las Vegas, it appeared three years after its predecessor, the bestselling Commodore 64.
The C128 is a significantly expanded successor to the C64, with nearly full compatibility. The newer machine has 128 KB of RAM in two 64 KB banks, and an 80-column color video output. It has a redesigned case and keyboard. Also included is a Zilog Z80 CPU which allows the C128 to run CP/M, as an alternative to the usual Commodore BASIC environment. The presence of the Z80 and the huge CP/M software library it brings, coupled with the C64's software library, gave the C128 one of the broadest ranges of available software among its competitors.
The primary hardware designer of the C128 was Bil Herd, who had worked on the Plus/4. Other hardware engineers were Dave Haynie and Frank Palaia, while the IC design work was done by Dave DiOrio. The main Commodore system software was developed by Fred Bowen and Terry Ryan, while the CP/M subsystem was developed by Von Ertwine.
Technical overview
The C128's keyboard includes four cursor keys, an Alt key, Help key, Esc key, Tab key and a numeric keypad. None of these were present on the C64, which had only two cursor keys, requiring the use of the Shift key to move the cursor up or left. This alternate arrangement was retained on the 128 for use in C64 mode. The lack of a numeric keypad, Alt key, and Esc key on the C64 was an issue with some CP/M productivity software when used with the C64's Z80 cartridge. A keypad was requested by many C64 owners who spent long hours entering machine language type-in programs. Many of the added keys matched counterparts present on the IBM PC's keyboard and made the new computer more attractive to business software developers.
While the 128's 40-column mode closely duplicates that of the C64, an extra 1K of color RAM is made available to the programmer, as it is multiplexed through memory address 1. The C128's power supply is improved over the C64's unreliable design, being much larger and equipped with cooling vents and a replaceable fuse. The C128 does not perform a system RAM test on power-up like previous Commodore machines.
Instead of the single 6510 microprocessor of the C64, the C128 incorporates a two-CPU design. The primary CPU, the 8502, is a slightly improved version of the 6510, capable of being clocked at 2 MHz. The second CPU is a Zilog Z80, which is used to run CP/M software as well as to initiate operating-mode selection at boot time. The two processors cannot run concurrently, thus the C128 is not a multiprocessing system.
The C128's complex architecture includes four differently accessed kinds of RAM (128 KB main RAM, 16–64 KB VDC video RAM, 2 kNibbles VIC-II Color RAM, 2-kilobyte floppy-drive RAM on C128Ds, 0, 128 or 512 KB REU RAM), two or three CPUs (main: 8502, Z80 for CP/M; the 128D also incorporates a 6502 in the disk drive), and two different video chips (VIC-IIe and VDC) for its various operational modes.
Early versions of the C128 occasionally experience temperature-related reliability issues due to the use of an electromagnetic shield over the main circuit board. The shield was equipped with fingers that contacted the tops of the major chips, ostensibly causing the shield to act as a large heat sink. A combination of poor contact between the shield and the chips, the inherently limited heat conductivity of plastic chip packages, as well as the relatively poor thermal conductivity of the shield itself, resulted in overheating and failure in some cases. The SID sound chip is particularly vulnerable in this respect. The most common remedy is to remove the shield, which Commodore had added late in development to comply with FCC radio-frequency regulations.
The C128 has three operating modes. C128 Mode (native mode) runs at 1 or 2 MHz with the 8502 CPU and has both 40- and 80-column text modes available. CP/M Mode uses both the Z80 and the 8502, and is able to function in both 40- or 80-column text mode. C64 Mode is nearly 100 percent compatible with the earlier computer. Selection of these modes is implemented via the Z80 chip. The Z80 controls the bus on initial boot-up and checks to see if there is a CP/M disk in the drive, if there are any C64/C128 cartridges present, or if the Commodore key (which serves as the C64-mode selector) is being depressed on boot-up. Based on these conditions, it will switch to the appropriate mode of operation.
Modes
C128
In 1984, a year before the release of the Commodore 128, Commodore released the Plus/4. Although targeted at a low-end business market that could not afford the relatively high cost and training requirements of early IBM PC compatibles, it was perceived by the Commodore press as a follow-up to the 64 and would be expected to improve upon that model's capabilities. While the C64's graphics and sound capabilities were generally considered excellent, the response to the Plus/4 was one of disappointment. Upon the Plus/4's introduction, repeated recommendations were made in the Commodore press for a new computer called the "C-128" with increased RAM capacity, an 80-column display as was standard in business computers, a new BASIC programming language that made it easy for programmers to use the computer's graphics and sound without resorting to PEEK and POKEs, a new disk drive that improved upon the 1541's abysmal transfer rate, as well as total C64 compatibility.
The designers of the C128 succeeded in addressing most of these concerns. A new chip, the VDC, provides the C128 with an 80-column color CGA-compatible display (also called RGBI for red-green-blue plus intensity). The then-new 8502 microprocessor is completely backward-compatible with the C64's 6510, but can run at double the speed if desired. The C64's BASIC 2.0 was replaced with BASIC 7.0, which includes structured programming commands from the Plus/4's BASIC 3.5, as well as keywords designed specifically to take advantage of the machine's capabilities. A sprite editor and machine language monitor were added. The screen-editor part of the Kernal was further improved to support an insert mode and other features accessed through ESC-key combinations, as well as a rudimentary windowing feature, and was relocated to a separate ROM. The VIC-II chip which controls the 40-column display can only operate at 1 MHz, so the 40-column display appears jumbled in FAST mode. In 80-column mode the editor takes advantage of VDC features to provide blinking and underlined text, activated through escape codes, in addition to the standard Commodore reverse text. The C128's 80-column mode can display lowercase characters along with PETSCII graphics characters; 40-column mode is subject to the same "upper- and lowercase" or "uppercase-plus-graphics" restriction as earlier Commodores. The 40- and 80-column modes are independent and both can be active at the same time. A programmer with both a composite and RGB display can use one of the screens as a "scratchpad" or for rudimentary multiple buffer support. The active display can be switched with ESC-X. A hardware reset button was added to the system. The keyboard, however, was not switched to the Selectric layout as had become standard, instead retaining the same ADM-3A-derived design as on Commodore's prior models.
The VDC chip is largely useless for gaming since it has no sprites or raster interrupts. NTSC C128s will work with any CGA-type monitor (TTL RGB @ 15 kHz/60 Hz) such as the IBM 5153. However, PAL models of the C128 operate at 50 Hz and aren't compatible with most CGA monitors, which expect a 60 Hz refresh rate. Pin 7 of the VDC output (normally unused on CGA monitors) produces a monochrome NTSC/PAL signal, but no cable was provided for it and interested users had to make their own or purchase one on the aftermarket.
Two new disk drives were introduced in conjunction with the C128: the short-lived single-sided 1570 and the double-sided 1571. A dual-disk 1572 model was announced but never produced. Later on, the 3.5-inch 1581 was introduced. All of these drives are more reliable than the 1541 and promise much better performance via a new "burst mode" feature. The 1581 drive also has more on-board RAM than its predecessors, making it possible to open a larger number of files at one time. BASIC 7.0 includes DLOAD and DSAVE commands to support loading and saving to disk without using the ,8 or other device number, and also a DIRECTORY command that reads a disk's catalog information directly to screen memory without overwriting BASIC memory as in BASIC 2.0. In addition, the C128 introduces auto-booting of disk software, a feature standard on most personal computers, but absent from Commodore machines up to that point. Users no longer have to type LOAD"*",8,1. BASIC also added a COLLECT command for removing "splat" files (files that were not closed properly and truncated to zero length).
All 1571 drives will normally start up in native mode on the C128. If the user switches to C64 mode by typing "GO 64", the drive remains in native mode. But if C64 mode is activated by holding the Commodore key down when powering-up, the 1571 then goes into 1541 mode. This routine is necessary for software that performs low-level drive access.
The C128 has twice the RAM of the C64, a far higher proportion of which is available for BASIC programming, due to the new MMU bank-switching chip. This allows BASIC program code to be stored separately from variables, greatly enhancing the machine's ability to handle complex programs, speeding garbage collection and easing debugging for the programmer. An executing program can be STOPped, its code edited, variable values inspected or altered in direct mode, and program execution resumed with the variable table intact using BASIC's GOTO command. Although other BASICs support the CONT command to restart execution without clearing variables, editing any code causes them to be cleared. Different memory configurations can be loaded using BASIC's BANK command.
BASIC 7.0 has a full complement of graphics and sound-handling commands, as well as BASIC 4.0's disk commands and improved garbage cleanup, and support for structured programming via IF...THEN...ELSE, DO...WHILE, and WHILE...WEND loops. Programmable characters are still however not supported, so the programmer will have to manipulate them with PEEK and POKE as on the VIC-20 and C64.
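A minimal sketch (not drawn from the original documentation) of a BASIC 7.0 program using a few of these commands illustrates how graphics and sound no longer require PEEK and POKE:
10 GRAPHIC 1,1 : REM ENTER BITMAP MODE AND CLEAR THE SCREEN
20 FOR R=10 TO 60 STEP 10
30 CIRCLE 1,160,100,R : REM CONCENTRIC CIRCLES AT THE SCREEN CENTER
40 NEXT R
50 PLAY "CDEFGAB" : REM PLAY A SHORT SCALE
60 GRAPHIC 0 : REM RETURN TO TEXT MODE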
On the downside, BASIC 7.0 ran considerably slower than BASIC 2.0 unless 2 MHz mode was used due to its 28 KB size (a 250% increase over BASIC 2.0) and having to bank switch to access program variables and BASIC program text (if greater than 16k in length).
The 128's ROM contains an easter egg: Entering the command SYS 32800,123,45,6 in native mode reveals a screen with a listing of the machine's main developers, followed by the message "Link arms, don't make them." Also, entering the keywords QUIT or OFF will produce an ?UNIMPLEMENTED COMMAND ERROR. These commands are holdovers from the BASIC interpreter intended for a planned but never-produced LCD portable computer and had been intended to exit from the BASIC interpreter and to ignore keyboard input during sensitive program execution, respectively.
The C128's greater hardware capabilities, especially the increased RAM, screen display resolution, and serial bus speed, made it a more capable platform than the C64 for running the GEOS graphical operating system.
CP/M
The second of the C128's two CPUs is the Zilog Z80, which allows the C128 to run CP/M. The C128 was shipped with CP/M 3.0 (a.k.a. CP/M Plus, which is backward-compatible with CP/M 2.2) and ADM31/3A terminal emulation. A CP/M cartridge had been available for the C64, but it was expensive and of limited use since the 1541 drive cannot read the MFM-formatted disks that CP/M software was distributed on. Software had to be made available on Commodore-specific disks formatted using the GCR encoding scheme. Commodore made versions of PerfectCalc and the EMACS-derived PerfectWriter available, and Commodore user groups sometimes had a selection of CP/M diskettes, but the limited software availability negated one of CP/M's chief attractions—its huge software library. In addition, the cartridges only work on early model C64s from 1982 and are incompatible with later units. Since they were also incompatible with the C128, the design team decided to support CP/M by putting the Z80 on the main system board.
The C128 runs CP/M noticeably slower than most dedicated CP/M systems, as the Z80 processor runs at an effective speed of only 2 MHz. This was because the C128's system bus was designed around the 65xx CPUs. These CPUs handle data and memory addressing very differently from the Z80. CP/M also ran more slowly due to the reasons mentioned below, such as needing to pass control to the 8502 for any I/O or interrupt processing. For these reasons, few users actually ran CP/M software on the C128.
When the C128 is powered on, the Z80 is active first and executes a small boot loader ROM at $0-$FFF to check for the presence of a CP/M disk. If one is not detected, control is passed to the 8502 and C128 native mode is started.
CP/M mode in practice requires a 1571 or 1581 drive to be useful, since a 1541 cannot read MFM disks and will run much slower due to not supporting the C128's burst mode. CP/M boot disks nonetheless must be in the drive's native GCR format; MFM disks cannot be booted from, only read once the user is already in CP/M. This is because the code necessary to operate the drive in MFM mode is loaded as part of the boot process. In addition, 80-column mode is generally required since most CP/M software expects an 80-column screen. The C128 emulates an ADM-3A terminal in CP/M mode, so software will have to be set up for that. Aside from the standard ADM-3A terminal commands, a number of extra ones are available to use the VIC-II and VDC's features, including setting the text and background color. The CP/M command interpreter (although not application software) includes a safeguard to prevent the user from issuing a control code to make the text and background the same color, which would render text invisible and force the user to reset the computer. If this happens, it will default to a gray background with brown text.
In CP/M mode, it is possible to run MBASIC, Microsoft's release of BASIC-80 for CP/M. Compared with the native mode BASIC 7.0, MBASIC is terse and limited in its capabilities, requiring the use of terminal-style key combinations to edit program lines or move the text cursor and lacking any sound or graphics features. Although MBASIC has mathematical and calculation features that BASIC 7.0 lacks such as integer and double precision variable support, any speed advantage gained by the use of integer variables is rendered moot by the extremely slow performance of the computer in CP/M mode. Moreover, Commodore BASIC has 40-bit floating point which serves as a middle ground between MBASIC's 32-bit floating point and 64-bit double precision variables. MBASIC also offers only 34k of free program space against BASIC 7.0's approximately 90k.
Other CP/M software such as Wordstar and Supercalc will also be significantly outperformed by native mode C128 equivalents like PaperClip, which also have an easier to use interface.
The CP/M CBIOS (the part of CP/M that interfaces with the hardware) does not interface with the hardware directly as on most CP/M implementations; rather, it calls the kernal routines for interrupt handling and I/O. When the kernal needs to be used, the Z80 uses routines at $FFD0-$FFEF to pass parameter data to the 8502, which is then activated and the Z80 deactivated. After the kernal routine is finished executing, control is passed back to the Z80. It was reported that the programmer in charge of porting CP/M to the C128 had intended to have the CBIOS interface with the hardware directly in Z80 machine language, but had great difficulty with the VDC chips, as they were prone to overheating and self-destructing. The VDC also underwent numerous hardware revisions while the C128 was in development, and the CP/M programmer was unable to get his code working properly, so the C128 engineering team requested instead that he simply rewrite the CBIOS to pass function calls to the 8502.
CP/M mode is very different from the operating environments familiar to Commodore users. While Commodore DOS is built into the ROM of Commodore disk drives and is usually accessed through BASIC, CP/M requires the use of a boot diskette and requires entry of terse commands inherited from minicomputer platforms. CP/M programs tend to lack the user-friendly nature of most Commodore applications. Intended to give the new computer a large library of professional-grade business software that Commodore lacked, CP/M was long past its prime by the mid-1980s, and so it was seldom used on the C128.
C64
By incorporating the original C64 BASIC and Kernal ROMs in their entirety (16 KB total), the C128 achieves almost 100 percent compatibility with the Commodore 64. The C64 mode can be accessed in three ways:
Holding down the Commodore-logo key when booting.
Entering the GO 64 command, then responding Y to the ARE YOU SURE? prompt, in BASIC 7.0.
Booting with a C64 cartridge plugged in.
Grounding the cartridge port's /EXROM and/or /GAME lines will cause the computer to automatically start up in C64 mode. This feature faithfully duplicates the C64's behavior when a cartridge (such as Simons' BASIC) is plugged into the port and asserts either of these lines but, unlike an actual C64, where the memory-map-changing action of these lines is implemented directly in hardware, the C128's Z80 firmware startup code polls these lines on power-up and then switches modes as necessary. C128 native-mode cartridges are recognized and started by the kernal polling defined locations in the memory map.
C64 mode almost exactly duplicates the features of a hardware C64. The MMU, Z80, and IEC burst mode are disabled in C64 mode; however, all other C128 hardware features, including the VDC and 2 MHz mode, are still accessible. The extended keys of the C128 keyboard may be read from machine language, although the kernal routines only recognize the keys that exist on the C64. A few games are capable of detecting if a C128 is running and switching to 2 MHz mode during the vertical retrace for faster performance.
On North American C128s, when in C64 mode, even the character (font) ROM changes from that of C128 mode. Early C128 prototypes had a single ROM, with a slightly improved character set over that of the C64. But some C64 programs read the character ROM as data, and will fail in various ways on a C128. Thus, the C128 was given a double-sized character ROM, which delivers the C128 font in C128 mode, and the C64 font in C64 mode. International models of the C128 use the unmodified C64 font in both modes, since the second half of the character ROM is instead dedicated to the international font (containing such things as accented characters or German umlauts).
Some of the few C64 programs that fail on a C128 will run correctly when the caps lock key is pressed down (or the ASCII/National key on international C128 models). This has to do with the larger built-in I/O port of the C128's CPU. Whereas the SHIFT LOCK key found on both C64 and C128 is simply a mechanical latch for the left SHIFT key, the CAPS LOCK key on the C128 can be read via the 8502's built-in I/O port. A few C64 programs are confused by this extra I/O bit; keeping the CAPS LOCK key in the down position will force the I/O line low, matching the C64's configuration and resolving the issue.
A handful of C64 programs write to $D030 (53296), often as part of a loop initializing the VIC-II chip registers. This memory-mapped register, unused in the C64, determines the system clock rate. Since this register is fully functional in C64 mode, an inadvertent write can scramble the 40-column display by switching the CPU over to 2 MHz, at which clock rate the VIC-II video processor cannot produce a coherent display. Fortunately, few programs suffer from this flaw. In July 1986, COMPUTE!'s Gazette published a type-in program that exploited this difference by using a raster interrupt to enable fast mode when the bottom of the visible screen was reached, and then disable it when screen rendering began again at the top. By using the higher clock rate during the vertical blank period, standard video display is maintained while increasing overall execution speed by about 20 percent.
An easy way to differentiate between a hardware C64 and a C128 operating in C64 mode, typically used from within a running program, is to write a value different from $FF (255) to memory address $D02F (53295), a register which is used to decode the extra keys of the C128 (the numerical keypad and some other keys). On the C64 this memory location will always contain the value $FF no matter what is written to it, but on a C128 in C64 mode the value of the location—a memory-mapped register—can be changed. Thus, checking the location's value after writing to it will reveal the actual hardware platform.
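A minimal sketch of that detection logic follows, written here in Python against hypothetical peek()/poke() memory-access helpers purely for illustration (on the real machines this would be a few PEEK/POKE statements in BASIC or a short machine-language routine); only the $D02F address and its read-back behaviour come from the description above.

```python
D02F = 0xD02F  # extended-keyboard register on the C128; unused on a real C64

def on_c128_in_c64_mode(peek, poke):
    """Return True when running on a C128 in C64 mode, False on a real C64."""
    saved = peek(D02F)              # remember the current contents
    poke(D02F, 0x00)                # write any value other than $FF
    is_c128 = peek(D02F) != 0xFF    # a real C64 always reads back $FF here
    poke(D02F, saved)               # restore the register (matters only on a C128)
    return is_c128
```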
RAM setup
To handle the relatively large amounts of ROM and RAM (several times the size of the 8502's 64 KB address space), the C128 uses the 8722 MMU chip to create different memory maps, in which different combinations of RAM and ROM appear according to bit patterns written into the MMU's configuration register at memory address $FF00. Another feature of the memory management unit is to allow relocation of the zero page and the stack.
Although the C128 can theoretically support 256 KB of RAM in four blocks, the PCB has no provisions to add this extra RAM, nor can the MMU actually access more than 128 KB. Therefore, if the MMU is programmed to access blocks 2 or 3, all that results is a mirror of the RAM in blocks 0 and 1.
Since the I/O registers and system ROMs can be disabled or enabled freely, as well as being locatable in either RAM bank and the VIC-II set to use either bank for its memory space, up to 256 memory configurations are possible, although the vast majority of them are useless (for example, unworkable combinations like the kernal ROM in bank 0 and the I/O registers in bank 1 are possible). Because of this, BASIC's BANK statement allows the user to select 15 of the most useful arrangements, with the power-on default being Bank 15. This default places the system ROMs, I/O registers, and BASIC program text in block 0, with block 1 being used by BASIC program variables. BASIC program text and variables can extend all the way to $FFEF. But since block 0 contains the ROMs and I/O registers from $4000 onward, BASIC uses an internal switching routine to read program text higher than $3FFF.
The top and bottom 1 KB of RAM ($0000–$03FF and $FF00–$FFFF) are "shared" RAM, visible from both blocks. The MMU allows either area to be expanded in increments up to 16 KB. The $0000–$03FF range contains the zero page and stack, while $FF00–$FFFF contains the MMU registers and reset vectors. These areas are always shared and cannot be switched to non-shared RAM. Shared RAM is always the opposite bank from the one currently being used by the CPU; thus, if bank 0 is selected, any read or write to shared RAM will refer to the corresponding locations in bank 1 and vice versa. The VIC-II can be set to use either RAM bank and, from there, its normal 16 KB window. While on the C64 the VIC-II can only see the character ROM in banks 2 and 4 of its memory space, the C128 makes it possible to enable or disable the character ROM for any VIC-II bank via the register at $1. Also, there are two sets of color RAM: one visible to the CPU, the other to the VIC-II, and the user may select which chip sees which.
In CP/M mode, the Program Segment Prefix and Transient Program Area reside in Bank 1 and the I/O registers and CP/M system code in Bank 0.
The C128's RAM is expandable from the standard 128 KB to 256, 512 or even 1,024 KB, either by using commercial memory expansion modules, or by making one based on schematics available on the internet.
Commodore's RAM Expansion Units use an external 8726 DMA controller to transfer data between the C128's RAM and the RAM in the expansion unit.
C128D
Late in 1985, Commodore released to the European market a new version of the C128 with a redesigned chassis resembling the Amiga 1000. Called the Commodore 128D, this new European model featured a plastic chassis with a carrying handle on the side, incorporated a 1571 disk drive into the main chassis, replaced the built-in keyboard with a detachable one, and added a cooling fan. The keyboard also featured two folding legs for changing the typing angle.
The C128 was released in the United Kingdom on 25 July 1985, and in North America in November 1985.
According to Bil Herd, head of the Hardware Team (a.k.a. the "C128 Animals"), the C128D was ready for production at the same time as the regular version. Working to release two models at the same time increased the risk of missing on-time delivery, and this is apparent in the main PCB, which has large holes in critical sections so that it can fit both the C128D case and the normal case.
In the latter part of 1986, Commodore released a version of the C128D in North America and parts of Europe referred to as the C128DCR, CR meaning "cost-reduced". The DCR model features a stamped-steel chassis in place of the plastic version of the C128D (with no carrying handle), a modular switched-mode power supply similar to that of the C128D, retaining that model's detachable keyboard and internal 1571 floppy drive. A number of components on the mainboard were consolidated to reduce production costs and, as an additional cost-reduction measure, the 40 mm cooling fan that was fitted to the D model's power supply was removed. However, the mounting provisions on the power supply subchassis were retained, as well as the two 12-volt DC connection points on the power supply's printed circuit board for powering the fan. The C128DCR mounting provision is for a 60 mm fan.
A significant improvement introduced with the DCR model was the replacement of the 8563 video display controller (VDC) with the more technically advanced 8568 VDC and equipping it with 64 kilobytes of video RAM—the maximum amount addressable by the device. The four-fold increase in video RAM over that installed in the "flat" C128 made it possible, among other things, to maintain multiple text screens in support of a true windowing system, or generate higher-resolution graphics with a more flexible color palette. Little commercial software took advantage of these possibilities.
The C128DCR is equipped with new ROMs dubbed the "1986 ROMs," so-named from the copyright date displayed on the power-on banner screen. The new ROMs address a number of bugs that are present in the original ROMs, including an infamous off-by-one error in the keyboard decoding table, in which the 'Q' character would remain lower case when CAPS LOCK was active. Some software will only run on the DCR, due to dependencies on the computer's enhanced hardware features and revised ROMs.
Despite the DCR's improved RGB video capabilities, Commodore did not enhance BASIC 7.0 with the ability to manipulate RGB graphics. Driving the VDC in graphics mode continues to require the use of calls to screen-editor ROM primitives or their assembly language equivalents, or by using third-party BASIC language extensions, such as Free Spirit Software's "BASIC 8", which adds high-resolution VDC graphics commands to BASIC 7.0.
Market performance
By January 1987, Info reported that "All of those rumors about the imminent death of the C128 may have some basis in fact." Stating that Commodore wanted to divert resources to increasing 64C production and its PC clones, the magazine stated that, "The latest word online is that the last C128 will roll off the lines in December of 1987." Compute! stated in 1989, "If you bought your 128 under the impression that 128-specific software would be plentiful and quick to arrive, you've probably been quite disappointed. One of the 128's major selling points is its total compatibility with the 64, a point that's worked more against the 128 than for it." Because the 128 would run virtually all 64 software, and because the next-generation 32/16-bit home computers—primarily the Commodore Amiga and Atari ST—represented the latest technology, relatively little software for the C128's native mode appeared (probably on the order of 100–200 commercial titles, plus the usual share of public domain and magazine type-in programs), leading some users to regret their purchase. While the C128 sold a total of 4 million units between 1985 and 1989, its popularity paled in comparison to that of its predecessor. One explanation for the lower sales figures may be that the C64 was bought largely by people primarily interested in video games, an area in which the more expensive C128 added little value.
Some C64 software such as Bard's Tale III and Kid Niki ran in 128 mode without stating this in the documentation, using the autoboot and the 1571's faster disk access. Some Infocom text adventures took advantage of the 80-column screen and increased memory capacity. Some C64 games were ported to native mode like Kikstart 2 and The Last V8 from Mastertronic, which had separate C128 versions, and Ultima V: Warriors of Destiny from Origin Systems, which used extra RAM for music if running on the C128. Star Fleet I: The War Begins from Interstel had separate versions, and took advantage of 80-column display on the C128. The vast majority of games simply ran in C64 mode.
By contrast, many C64 productivity software titles were ported to the C128, including the popular PaperClip and Paperback Writer series. This software used the extra memory, 80-column screen, enhanced keyboard and large-capacity disk drives to provide features that were considered essential for business use. With its advanced BASIC programming language, CP/M compatibility and "user-friendly" native software packages such as Jane, Commodore attempted to create a low-end business market for the C128 similar to its strategy with the Plus/4, even distancing itself from the home computer label by branding the C128 a "Personal Computer" on the case. Significantly, the C128 was the first Commodore computer to advertise its use of Microsoft BASIC, where the Microsoft name would have been a competitive asset.
The C128 was certainly a better business machine than the C64, but not really a better gaming machine. People who wanted business machines bought IBM PC clones almost exclusively by the time the C128 was released. The availability of low-cost IBM compatibles like the Leading Edge Model D and Tandy 1000 that, in some cases, sold for less than a complete C128 system derailed Commodore's small business computer strategy. There was a professional-level CAD program, Home Designer by BRiWALL, but again, most of this work was done on PCs by the C128's era. The main reason that the C128 still sold fairly well was probably that it was a much better machine for hobbyist programming than the C64, as well as being a natural follow-on model to owners with significant investments in C64 peripherals and software.
But ultimately the C128 could not compete with the new 16/32-bit systems, which outmatched it and the rest of its 8-bit generation in nearly every aspect. When the C128(D/DCR) was discontinued in 1989, it was reported to cost nearly as much to manufacture as the Amiga 500, even though the C128D had to sell for several hundred dollars less to keep the Amiga's high-end marketing image intact.
Bil Herd has stated that the design goals of the C128 did not initially include 100% compatibility with the C64. Some form of compatibility was always intended after Herd was approached at the Plus/4's introduction by a woman who was disappointed that the educational software package she had written for the C64 would not run on Commodore's new computer, but when Commodore's marketing department learned of this, they independently announced total compatibility. Herd gave the reason for the 128's inclusion of a Z80 processor as ensuring this "100% compatibility" claim, since supporting the C64's Z80 cartridge would have meant the C128 supplying additional power to the cartridge port. He also stated that the VDC video chip and Z80 were sources of trouble during the machine's design. Herd added that "I only expected the C128 to be sold for about a year, we figured a couple of million would be nice and of course it wouldn’t undercut Amiga or even the C64". After Commodore raised the price of the 64 for the first time by introducing the redesigned 64C in 1986, its profit from each 64C sold was reportedly much greater than that from the C128.
Specifications
CPUs:
MOS Technology 8502 @ 2 MHz (1 MHz selectable for C64 compatibility mode or C128's 40-column mode)
Zilog Z80 @ 4 MHz (running at an effective 2 MHz because of wait states to allow the VIC-II video chip access to the system bus)
(C128D(CR)): MOS Technology 6502 for the integrated floppy controller
MMU: MOS Technology 8722 Memory Management Unit controls 8502/Z80 processor selection; ROM/RAM banking; common RAM areas; relocation of zero page and stack
RAM: 128 KB system RAM, 2 KB 4-bit dedicated color RAM (for the VIC-II E), 16 KB or 64 KB dedicated video RAM (for the VDC), up to 512 KB REU expansion RAM
ROM: 72 KB
28 KB BASIC 7.0
4 KB MLM
8 KB C128 KERNAL
4 KB screen editor
4 KB Z80 BIOS
16 KB C64 ROM: ≈9 KB C64 BASIC 2.0 + ≈7 KB C64 KERNAL
4 KB C64 (or international) character generator
4 KB C128 (or national) character generator
32 KB Internal Function ROM (optional: for placement in motherboard socket)
32 KB External Function ROM (optional: for placement in REU socket)
Video:
MOS 8564/8566 VIC-II E (NTSC/PAL) for 40-column composite video (a TV set can be used instead of a monitor if desired)
Direct register access through memory-mapped I/O
Text mode: 40×25, 16 colors
Graphics modes: 160×200, 320×200
8 hardware sprites
2 KB dedicated 4-bit color RAM, otherwise uses main memory as video RAM
MOS 8563 VDC (or, in C128DCR, the 8568) for 80-column digital RGBI component video, compatible with IBM PC CGA monitors, monochrome display also possible on composite video monitors; usable with TV sets only when the set has SCART and/or baseband video-in sockets in addition to the antenna connector. Color is possible through SCART, only monochrome through baseband video-in.
Indirect register access (address register, data register in mapped memory)
Text mode: Fully programmable, typically 80×25 or 80×50, 16 RGBI colors (not the same palette as the VIC-II)
Graphics modes: Fully programmable, typical modes are 320×200, 640×200, and 640×400 (interlaced).
16 KB dedicated video RAM (64 KB standard in C128DCR, C128/C128D can be upgraded to 64 KB), accessible to the CPU only in a doubly indirect method (address register, data register on VDC, which in turn are addressed through address register, data register in mapped memory)
Limited blitter functionality
Sound:
MOS 6581 SID (or, in the C128DCR, the MOS 8580 SID) synthesizer chip
3 voices, ADSR-controllable
Standard SID waveforms (triangle, sawtooth, variable pulse, noise, and certain combined modes)
Multi-mode filter
3 ring modulators
I/O ports:
All Commodore 64 ports with 100 percent compatibility, plus the following:
Higher "burst mode" speed possible on the serial bus
Expansion port more flexibly programmable
RGBI video output (DE9-connector) logically similar to the IBM PC CGA connector, but with an added monochrome composite signal. This added signal causes a minor incompatibility with certain CGA monitors that can be rectified by removing pin 7 from the plug at one end of the connecting cable.
External keyboard input (DB25-connector) (C128D(CR) only)
See also
Commodore BASIC
Commodore 64 peripherals
Notes
References
Bibliography
Greenley, Larry, et al. (1986). Commodore 128 Programmer's Reference Guide. Bantam Computer Books/Commodore Publications.
Gerits, K.; Schieb, J.; Thrun, F. (1986). Commodore 128 Internals. 2nd ed. Grand Rapids, Michigan: Abacus Software, Inc. Original German edition (1985), Düsseldorf, West Germany: Data Becker GmbH & Co. KG.
External links
Commodore 128 Systems Guide
Commodore 128 CP/M User's Guide
VICE: Versatile Commodore Emulator
Z64K: C128, C64, VIC20, and Atari 2600 emulators
RUN Magazine Issue 18 June 1985
hackaday.com: Guest Post: The Real Story of Hacking Together the Commodore C128, by: Bil Herd (dated 2013-12-09)
6502-based home computers
Z80-based home computers |
18424005 | https://en.wikipedia.org/wiki/Steve%20Cunningham%20%28computer%20scientist%29 | Steve Cunningham (computer scientist) | Robert Stephen (Steve) Cunningham (1942 – March 27, 2015) was an American computer scientist and Professor Emeritus of Computer Science at California State University Stanislaus.
Biography
Steve Cunningham received his BA cum laude in Mathematics from Drury University in 1964. He continued his studies at the University of Oregon where he earned his M.A. in Mathematics in 1966 and his Ph.D. in Mathematics three years later. In 1982, he received an M.S. in Computer Science at Oregon State University.
Cunningham started his career at the University of Kansas as Assistant Professor of Mathematics from 1969 to 1974. From 1974 he worked at Birmingham-Southern College, as Assistant Professor of Mathematics for a year, Associate Professor of Mathematics for four years, and Associate Professor of Computer Science from 1979 to 1982. From 1982 he worked at California State University Stanislaus, where he was Professor of Computer Science from 1986 until 2001, Gemperle Distinguished Professor for three years, and Professor Emeritus from 2005. From 1999 to 2000 Cunningham was also Visiting Scientist at the San Diego Supercomputer Center. He was a National Science Foundation Program Director (EHR/DUE) from 2003 to 2005, Research Professor of Computer Science at Oregon State University in 2004–05, and Noyce Visiting Professor of Computer Science at Grinnell College in 2006.
He received several awards and honors: Fellow of the European Association for Computer Graphics (1998), Outstanding Professor for Research, Scholarship, and Creative Activity at CSU Stanislaus (2001), Gemperle Distinguished Professor at CSU Stanislaus (2001), the ACM SIGGRAPH Outstanding Contribution Award (2004), and the Noyce Visiting Professorship of Computer Science at Grinnell College (2006).
Work
Cunningham's research interests were in Computer graphics, especially computer graphics education, Computer Science Education, and computer visualization in learning mathematics.
See also
Educational visualization
Publications
1989. Programming the User Interface: Principles and Examples. With Judith R. Brown. Wiley.
1991. Visualization in Teaching and Learning Mathematics. Edited with Walter Zimmermann. MAA Notes Number 19, Mathematical Association of America.
1992. Computer Graphics Using Object-Oriented Programming. Edited with N. Craighill, M. Fong and J. Brown. Wiley.
1992. Interactive Learning Through Visualization - The Impact of Computer Graphics in Education. Edited with Roger Hubbold. Springer-Verlag.
1996. Electronic Publishing on CD-ROM. With Judson Rosebush. O'Reilly and Associates.
2007. Computer Graphics: Programming in OpenGL for Visual Communication. Prentice-Hall.
2009. Graphics Shaders: Theory and Practice. With Mike Bailey. AK Peters. 2012. Second edition.
References
External links
Steve Cunningham homepage.
1942 births
American computer scientists
Living people
Information visualization experts
Drury University alumni
Oregon State University alumni
University of Oregon alumni |
22826 | https://en.wikipedia.org/wiki/Object%20database | Object database | An object database is a database management system in which information is represented in the form of objects as used in object-oriented programming. Object databases are different from relational databases which are table-oriented. Object–relational databases are a hybrid of both approaches.
Object databases have been considered since the early 1980s.
Overview
Object-oriented database management systems (OODBMSs) also called ODBMS (Object Database Management System) combine database capabilities with object-oriented programming language capabilities.
OODBMSs allow object-oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the OODBMS. Because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the OODBMS and the programming language will use the same model of representation. Relational DBMS projects, by contrast, maintain a clearer division between the database model and the application.
As the usage of web-based technology increases with the implementation of Intranets and extranets, companies have a vested interest in OODBMSs to display their complex data. Using a DBMS that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (CAD).
Some object-oriented databases are designed to work well with object-oriented programming languages such as Delphi, Ruby, Python, JavaScript, Perl, Java, C#, Visual Basic .NET, C++, Objective-C and Smalltalk; others such as JADE have their own programming languages. OODBMSs use exactly the same model as object-oriented programming languages.
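As a concrete illustration, the sketch below stores and mutates ordinary application objects with ZODB, the open-source object database for Python mentioned in the timeline below; the package and API names are assumed from ZODB's documented usage, and the class and file names are arbitrary.

```python
import ZODB, ZODB.FileStorage   # pip install ZODB
import persistent
import transaction

class Account(persistent.Persistent):
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

db = ZODB.DB(ZODB.FileStorage.FileStorage("data.fs"))  # file-backed object store
root = db.open().root()

root["alice"] = Account("Alice", 100)   # the object itself is stored -- no table mapping
root["alice"].balance += 25             # mutate it like any other Python object
transaction.commit()                    # the ODBMS persists the changed object graph
```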
History
Object database management systems grew out of research during the early to mid-1970s into having intrinsic database management support for graph-structured objects. The term "object-oriented database system" first appeared around 1985. Notable research projects included Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin–Madison), IRIS (Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology Corporation or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION project had more published papers than any of the other efforts. Won Kim of MCC compiled the best of those papers in a book published by The MIT Press.
Early commercial products included Gemstone (Servio Logic, name changed to GemStone Systems), Gbase (Graphael), and Vbase (Ontologic). Additional commercial products entered the market in the late 1980s through the mid 1990s. These included ITASCA (Itasca Systems), Jasmine (Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB (Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon which was originally Object Design, Incorporated), ONTOS (Ontos, Inc., name changed from Ontologic), O2 (O2 Technology, merged with several companies, acquired by Informix, which was in turn acquired by IBM), POET (now FastObjects from Versant which acquired Poet Software), Versant Object Database (Versant Corporation), VOSS (Logic Arts) and JADE (Jade Software Corporation). Some of these products remain on the market and have been joined by new open source and commercial products such as InterSystems Caché.
Object database management systems added the concept of persistence to object programming languages. The early commercial products were integrated with various languages: GemStone (Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System for Smalltalk). For much of the 1990s, C++ dominated the commercial object database management market. Vendors added Java in the late 1990s and more recently, C#.
Starting in 2004, object databases saw a second growth period when open-source object databases emerged that were widely affordable and easy to use, because they are entirely written in OOP languages like Smalltalk, Java, or C#. Examples include Versant's db4o (db4objects), DTS/S1 from Obsidian Dynamics and Perst (McObject), available under dual open-source and commercial licensing.
Timeline
1966
MUMPS
1979
InterSystems M
1980
TORNADO – an object database for CAD/CAM
1982
Gemstone started (as Servio Logic) to build a set-theoretic model database machine.
1985 – Term Object Database first introduced
1986
Servio Logic (Gemstone Systems) Ships Gemstone 1.0
1988
Object Design, Incorporated founded, development of ObjectStore begun
Versant Corporation started (as Object Sciences Corp)
Objectivity, Inc. founded
Early 1990s
Servio Logic changes name to Gemstone Systems
Gemstone (Smalltalk)-(C++)-(Java)
GBase (LISP)
VBase (O2- ONTOS – INFORMIX)
Objectivity/DB
Mid 1990s
InterSystems Caché
Versant Object Database
ODABA
ZODB
Poet
JADE
Matisse
Illustra Informix
2000s
lambda-DB: An ODMG-Based Object-Oriented DBMS by Leonidas Fegaras, Chandrasekhar Srinivasan, Arvind Rajendran, David Maier
db4o project started by Carl Rosenberger
ObjectDB
2001 IBM acquires Informix
2003 odbpp public release
2004 db4o's commercial launch as db4objects, Inc.
2008 db4o acquired by Versant Corporation
2010 VMware acquires GemStone
2011 db4o's development stopped.
2012 Wakanda first production versions with open source and commercial licenses
2013 GemTalk Systems acquires Gemstone products from VMware
2014 db4o's commercial offering is officially discontinued by Actian (which had acquired Versant)
2014 Realm
2017 ObjectBox
Adoption of object databases
Object databases based on persistent programming acquired a niche in application areas such as
engineering and spatial databases, telecommunications, and scientific areas such as high energy physics and molecular biology.
Another group of object databases focuses on embedded use in devices, packaged software, and real-time systems.
Technical features
Most object databases also offer some kind of query language, allowing objects to be found using a declarative programming approach. It is in the area of object query languages, and the integration of the query and navigational interfaces, that the biggest differences between products are found. An attempt at standardization was made by the ODMG with the Object Query Language, OQL.
Access to data can be faster because an object can be retrieved directly without a search, by following pointers.
Another area of variation between products is in the way that the schema of a database is defined. A general characteristic, however, is that the programming language and the database schema use the same type definitions.
Multimedia applications are facilitated because the class methods associated with the data are responsible for its correct interpretation.
Many object databases, for example Gemstone or VOSS, offer support for versioning. An object can be viewed as the set of all its versions. Also, object versions can be treated as objects in their own right. Some object databases also provide systematic support for triggers and constraints which are the basis of active databases.
The efficiency of such a database is also greatly improved in areas which demand massive amounts of data about one item. For example, a banking institution could get the user's account information and provide them efficiently with extensive information such as transactions, account information entries etc.
Standards
The Object Data Management Group was a consortium of object database and object–relational mapping vendors, members of the academic community, and interested parties. Its goal was to create a set of specifications that would allow for portable applications that store objects in database management systems. It published several versions of its specification. The last release was ODMG 3.0. By 2001, most of the major object database and object–relational mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to the other components of the specification was mixed. In 2001, the ODMG Java Language Binding was submitted to the Java Community Process as a basis for the Java Data Objects specification. The ODMG member companies then decided to concentrate their efforts on the Java Data Objects specification. As a result, the ODMG disbanded in 2001.
Many object database ideas were also absorbed into SQL:1999 and have been implemented in varying degrees in object–relational database products.
In 2005 Cook, Rai, and Rosenberger proposed to drop all standardization efforts to introduce additional object-oriented query APIs but rather use the OO programming language itself, i.e., Java and .NET, to express queries. As a result, Native Queries emerged. Similarly, Microsoft announced Language Integrated Query (LINQ) and DLINQ, an implementation of LINQ, in September 2005, to provide close, language-integrated database query capabilities with its programming languages C# and VB.NET 9.
In February 2006, the Object Management Group (OMG) announced that they had been granted the right to develop new specifications based on the ODMG 3.0 specification and the formation of the Object Database Technology Working Group (ODBT WG). The ODBT WG planned to create a set of standards that would incorporate advances in object database technology (e.g., replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to include new features into these standards that support domains where object databases are being adopted (e.g., real-time systems). The work of the ODBT WG was suspended in March 2009 when, subsequent to the economic turmoil in late 2008, the ODB vendors involved in this effort decided to focus their resources elsewhere.
In January 2007 the World Wide Web Consortium gave final recommendation status to the XQuery language. XQuery uses XML as its data model. Some of the ideas developed originally for object databases found their way into XQuery, but XQuery is not intrinsically object-oriented. Because of the popularity of XML, XQuery engines compete with object databases as a vehicle for storage of data that is too complex or variable to hold conveniently in a relational database. XQuery also allows modules to be written to provide encapsulation features that have been provided by Object-Oriented systems.
XQuery v1 and XPath v2 are extremely complex (no FOSS software implements these standards more than 10 years after their publication) when compared to XPath v1 and XSLT v1, and XML did not fit all community demands as an open format. Since the early 2000s JSON has gained community adoption and popularity in applications, surpassing XML in the 2010s. JSONiq, a query-analog of XQuery for JSON (sharing XQuery's core expressions and operations), demonstrated the functional equivalence of the JSON and XML formats. In this context, the main strategy of OODBMS maintainers was to retrofit JSON to their databases (by using it as the internal data type).
With its 9.5 release in January 2016, PostgreSQL became the first FOSS OODBMS to offer an efficient JSON internal data type (JSONB) with a complete set of functions and operations for all basic relational and non-relational manipulations.
Comparison with RDBMSs
An object database stores complex data and relationships between data directly, without mapping to relational rows and columns, and this makes them suitable for applications dealing with very complex data. Objects have a many-to-many relationship and are accessed by the use of pointers. Pointers are linked to objects to establish relationships. Another benefit of an OODBMS is that it can be programmed with small procedural differences without affecting the entire system.
See also
Comparison of object database management systems
Component-oriented database
EDA database
Enterprise Objects Framework
NoSQL
Object Data Management Group
Object–relational database
Persistence (computer science)
Relational model
Relational database management system (RDBMS)
References
External links
Object DBMS resource portal
Ranking of Object Oriented DBMS - by popularity, updated monthly from DB-Engines
Database management systems
Object-oriented programming
Database models
Types of databases |
34488113 | https://en.wikipedia.org/wiki/Bitcasa | Bitcasa | Bitcasa, Inc. was an American cloud storage company founded in 2011 in St. Louis, Missouri. The company was later based in Mountain View, California until it shut down in 2017.
Bitcasa provided client software for Microsoft Windows, OS X, Android and web browsers. An iOS client was pending Apple approval. Its former product, Infinite Drive, once provided centralized storage that included unlimited capacity, client-side encryption, media streaming, file versioning and backups, and multi-platform mobile access. In 2013 Bitcasa moved to a tiered storage model, offering from 1TB for $99/year up to Infinite for $999/year. In October 2014, Bitcasa announced the discontinuation of Infinite Drive; for $999/year, users would get 10TB of storage. Infinite Drive users would be required to migrate to one of the new pricing plans or delete their account. In May 2016, Bitcasa discontinued offering cloud storage for consumers, stating that it would focus on its business products.
History
The company started after an idea was a finalist at the TechCrunch Disrupt conference in September 2011. In 2012 Tony Lee was recruited as vice president of engineering and Frank Meehan joined the company's board of directors. In June 2012 Bitcasa closed $9 million of investment. Investors included: CrunchFund, Pelion Venture Partners, Horizons Ventures, Andreessen Horowitz, Samsung Ventures and First Round Capital.
In January 2017, CEO Brian Taptich announced that Bitcasa had been acquired by Intel. An Intel spokesperson later clarified that Intel had not acquired Bitcasa.
Products and services
Bitcasa provided client software for web browsers, OS X, Microsoft Windows, Linux and a mobile app for Android. Windows versions include XP, Vista, Windows 7 and Windows 8.
Bitcasa products provide centralized streaming storage so that all devices have simultaneous and real-time access to the same files. Files uploaded from one device are instantly available on all devices. Bitcasa does not require file syncing between devices. Centralized storage eliminates the need to duplicate files across devices or wait for files to become synchronized.
The company has a patent pending for an "infinite storage" algorithm designed to reduce the actual storage space by identifying duplicate content and providing encryption of the stored data. According to Popular Mechanics magazine, Bitcasa uses a convergent encryption method whereby a client's data is assigned an anonymous identifier before it is uploaded. If that data already exists on the Bitcasa servers (such as a popular song), it is not uploaded but is instead earmarked as available for download by that client. This protocol is said to reduce upload time. Bitcasa's encryption method reportedly cloaks the data while it is still on the client's computer and then blocks of data are sent by an enterprise-grade AES-256 encryption method to the data cloud for storage. According to ExtremeTech, this service gives users access and ownership rights to their own data.
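The following is a rough sketch of the convergent-encryption idea described above, not Bitcasa's actual (unpublished) implementation: the key is derived from the content itself, so identical files produce identical identifiers and ciphertext and need only be stored once. The helper names and the use of AES-GCM from the Python cryptography package are illustrative assumptions.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

cloud = {}  # simulated server store: anonymous identifier -> ciphertext

def upload(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()    # 256-bit key derived from the content itself
    identifier = hashlib.sha256(key).digest()   # anonymous identifier sent to the server
    if identifier not in cloud:                 # duplicate content is never re-uploaded
        nonce = b"\x00" * 12                    # deterministic nonce is tolerable here only
        cloud[identifier] = AESGCM(key).encrypt(nonce, plaintext, None)  # because key == f(content)
    return key, identifier                      # client keeps the key; server sees ciphertext only
```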
In a review by Gizmodo of Australia, Bitcasa's cloud service was described as a "winner" that is "pricier than its competitors" but supported by Mac, PC and Android platforms.
Mobile
Users could access their Infinite Drive through mobile apps for Android, Windows RT, and browsers, with support for offline viewing of files. The app collects and displays individual media types such as photos, video, music, and documents, independently of the folder hierarchy that they are stored in. Video files are streamed and auto-transcoded based on the device bandwidth. Items may be uploaded, downloaded, or shared directly with social media sites. Files of any size can be shared with a web link that can be distributed via email, text or IM. After the initial server migration, only the apps for Android, iOS and browsers were updated, effectively rendering other devices unusable with the service.
Security
A September 2011 article published in Extreme Tech said that Bitcasa's convergent encryption based system is "mostly" safe but has some risks associated with it.
New pricing and changes
November 2013
On November 19, 2013, the company announced that its Infinite Storage offering would increase in price. The move sparked an intense reaction from users at the company's forum, even though existing users were grandfathered into the original pricing plan. Reactions from bloggers were particularly critical. The announcement of the pricing plans change on the Bitcasa blog was commented on heavily by users. This post, and the ensuing comments were removed from the internet by Bitcasa.
Bitcasa introduced an interface for developers.
October 2014
On October 23, 2014, Bitcasa announced it would be removing all of its grandfathered 'infinite' plans, although the company had assured customers that these plans would be continued as long as they did not cancel their service (the company removed its official blog post about this, though it is still available on the Wayback Machine). Bitcasa backtracked, citing 'lack of demand' and 'abuse'.
The company instead offered previous clients the same packages that regular users pay at $10/month for 1TB ($99 annually) or $99/month for 10TB ($999 annually).
The company gave users 23 days to migrate or download their data, or it would be deleted. This move was criticized by many users as not being physically possible at the download rates provided by Bitcasa.
As a result of a system migration, some users had data loss, some of which was not replaceable. Angry customers gave the company bad feedback, and the community forum became less active.
The company offered yearly subscribers the right to cancel and get a prorated refund. However, it disabled the ability to cancel accounts and refused to delete accounts through its support system.
On November 13, 2014, Northern Californian district judge William Alsup granted a temporary restraining order, enjoining Bitcasa from deleting and disabling access to Infinite Plan subscribers' data. Bitcasa filed a response on 18 November, challenging the legality of the TRO. As an apparent result of the restraining order, Bitcasa announced a 5-day extension of the deadline in an email to users on November 16; the email did not mention the restraining order. A hearing was set for 10.00 on 19 November; Bitcasa 'won' the lawsuit.
In February 2015, the Community Forum was shut down.
April 2016
On April 7, 2016, the company switched its free 5GB plan to a free trial tier. Users with this type of account prior to April 7 would automatically start the trial; after the 60-day trial, if the user had not changed to a paid plan, their account and data would be deleted from the server.
On April 21, 2016, Bitcasa announced it would discontinue its cloud storage service and focus on business products. Users had until May 20, 2016 to download their data, after which user data could be deleted. Bitcasa shut down its consumer cloud storage at the end of May 20, 2016, thereafter offering products only for developers.
September 2016
Four months later, customers had not been refunded and the Bitcasa website was inaccessible.
See also
Comparison of file hosting services
Comparison of online backup services
References
Cloud applications
Cloud storage
Data synchronization
Online backup services
Companies established in 2011
Companies based in Palo Alto, California
2011 establishments in California
Internet technology companies of the United States
File hosting for macOS
File hosting for Windows |
13501234 | https://en.wikipedia.org/wiki/Database%20cinema | Database cinema | One of the principal features defining traditional cinema is a fixed and linear narrative structure. In Database Cinema, however, the story develops by selecting scenes from a given collection, like a computer game in which a player performs certain acts and thereby selects scenes, creating a narrative.
Structure
New Media objects lack this strong narrative component: they do not have a beginning or an end but can start or stop at any point. They are collections of discrete items coming from the database. Lev Manovich first related the database to cinema in his effort to understand the changing technologies of filmmaking techniques in media landscapes. According to Manovich, cinema privileged narrative as the key form of cultural expression of the modern age, but the computer age introduced its correlate, the database: "As a cultural form, database represents the world as a list of items and it refuses to order this list. In contrast, a narrative creates a cause-and-effect trajectory of seemingly unordered items (events). Therefore, database and narrative are natural enemies. Competing for the same territory of human culture, each claims an exclusive right to make meaning out of the world."
Database artists
Manovich considers filmmakers Peter Greenaway and Dziga Vertov as pioneers in his database cinema genre. He explains how Greenaway sees linear storytelling, the standard format of filmmaking, as lagging behind modern literature in experimenting with narrative. Greenaway's system for reconciling database and narrative uses sequences of numbers. They act as a narrative shell, which makes the viewer believe he is watching a story.
Dziga Vertov can be seen as an even earlier database filmmaker. Manovich cites Vertov's Man with a Movie Camera (USSR, 1929) as the most important example of database imagination in modern media art. The film has three levels: the cameraman filming the shots, the audience watching the finished film, and shots from street life in Ukrainian cities edited in the chronological order of that particular day. While the last level can be seen as text or 'the story', the other two can be seen as meta-texts. By using meaningful effects and discovering the world through this 'kino-eye', Vertov turns the normally static and objective database into a dynamic and subjective form.
Manovich stated that new media artists working on database concepts could learn from cinema precisely because cinema has in fact always been at the nexus of database and narrative while the movie was still in the editing room. Manovich points out that Vertov especially achieved a successful merging of database and narrative into a new form.
Implicit/explicit
The semiological theory of syntagm and paradigm (originally formulated by Ferdinand de Saussure and later worked on by Roland Barthes) helps to define the relationship between the database-narrative opposition. In this theory the syntagm is a linear stringing together of elements while at the paradigmatic each new element is chosen from a set of other related elements. In this case, the elements in syntagm dimensions are related in praesentia: it is the flow of words we hear, or the shots we see. On a paradigmatic dimension the elements are related in absentia: they exist in our minds or stuffed away in a database. To quote Manovich: “the database of choices from which narrative is constructed (the paradigm) is implicit; while the actual narrative (the syntagm) is explicit”.
In New Media projects, this is reversed according to Manovich. The paradigmatic database is tangible, while the syntagmatic narrative is virtual.
Soft cinema
Lev Manovich defines soft cinema as the creative possibilities at the intersection of software culture, cinema, and architecture. Its manifestations include films, dynamic visualizations, computer-driven installations, architectural designs, print catalogs, and DVDs. In parallel, the project investigates how the new representational techniques of software cinema can be deployed to address the new dimensions of our time, such as the rise of mega-cities, the "new" Europe, and the effects of information technologies on subjectivity.
Criteria
Manovich calls on 4 different criteria to define Soft Cinema in his research:
1. Following the standard convention of the human-computer interface, the display area is always divided into multiple frames.
2. Using a set of rules defined by the authors, the Soft Cinema software controls both the layout of the screen (number and position of frames) and the sequences of media elements that appear in these frames.
3. The media elements (video clips, sound, still images, text, etc.) are selected from a large database to construct a potentially unlimited number of different films.
4. In Soft Cinema 'films', video is used as only one type of representation among others: motion graphics, 3D animations, diagrams, etc.
Works of Soft Cinema
Texas
The original piece was created for the 2002 "Soft Cinema installation", for an exhibition titled 'Future Cinema: Cinematic Imaginary after Film.'
Texas looks at the 'Modern experience of living between layers', that is, how time has created different 'layers' of space throughout the world we live in. The film calls on a number of databases, each structured in the same fashion. The database containing video footage (as opposed to music) holds 425 clips selected from footage that Manovich himself shot at various locations over several years. Manovich aims to capture the idea of a "Global City" throughout these shots. Each video clip in the database holds 10 parameters, including location, subject matter, average brightness, contrast, the type of space, the type of camera motion, and several more. The software uses these parameters in selecting each clip, finding clips that are all similar in some fashion to the next.
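A toy sketch of that selection principle is given below; the parameter names, weights, and similarity rule are invented for illustration, since Manovich's actual rule set is not reproduced here.

```python
import random

clips = [  # each clip carries descriptive parameters, as in the Texas database
    {"id": 1, "location": "Tokyo",  "brightness": 0.7, "contrast": 0.4, "camera": "pan"},
    {"id": 2, "location": "Berlin", "brightness": 0.6, "contrast": 0.5, "camera": "static"},
    {"id": 3, "location": "Tokyo",  "brightness": 0.3, "contrast": 0.8, "camera": "pan"},
]

def similarity(a, b):
    score = (a["location"] == b["location"]) + (a["camera"] == b["camera"])
    score += 1 - abs(a["brightness"] - b["brightness"])
    score += 1 - abs(a["contrast"] - b["contrast"])
    return score

def next_clip(current, pool):
    # pick the remaining clip that most resembles the one just shown
    return max((c for c in pool if c["id"] != current["id"]),
               key=lambda c: similarity(current, c))

film = [random.choice(clips)]
film.append(next_clip(film[-1], clips))
```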
Mission to Earth
The original piece was commissioned in 2003 by the BALTIC Centre for Contemporary Art in Gateshead, UK.
Mission to Earth symbolizes the experiences of a modern immigrant as well as the experiences of those during the Cold War. It attempts to show the trauma associated with a shift in identity as one changes their life. The Soft Cinema software uses several frames at once in this piece, displaying different things in each frame to portray the split in identity that the main character, Inga, experiences. The software also changes the size and number of windows as it grabs the content from the database. Most of the video used for the database was shot in London, Berlin, Rio de Janeiro, Buenos Aires, and Sweden.
Absences
The Absences piece was created without a pre-set narrative. It takes advantage of the assumption that, given different sets of images and footage, the viewer will connect what is seen and create their own structured narrative. The theme surrounds the aspect of natural and urban surroundings. Like previous works, the images shown each hold unique parameters which the soft-cinema software chooses from when viewed by the user. These parameters include brightness, contrast, texture, activity, frequency, and several others.
References
Further reading
Linda Cowgill: Non-Linear Narratives: The Ultimate in Time Travel. Plots Inc. Productions (undated).
Lev Manovich: Database as a Symbolic Form. Accessed November 2017.
Jan Baetens: review Soft Cinema. Navigating the Database, published: November 2005
Lev Manovich & Andreas Kratky: Soft Cinema. Navigating the Database. MIT Press, Cambridge, Massachusetts, 2005, DVD with 40-page booklet,
Revolution of Open Source and Film Making Towards Open Film Making (M.A. thesis)
German A. Duarte: "Fractal Narrative: About the Relationship Between Geometries and Technology and Its Impact on Narrative Spaces". Transcript Verlag, Bielefeld, 2014,
See also
Hyperlink cinema
Art film
Director's cut
Film styles
2000s in film |
2244043 | https://en.wikipedia.org/wiki/CCM%20mode | CCM mode | CCM mode (counter with cipher block chaining message authentication code; counter with CBC-MAC) is a mode of operation for cryptographic block ciphers. It is an authenticated encryption algorithm designed to provide both authentication and confidentiality. CCM mode is only defined for block ciphers with a block length of 128 bits.
The nonce of CCM must be carefully chosen to never be used more than once for a given key.
This is because CCM is a derivation of CTR mode and the latter is effectively a stream cipher.
Encryption and authentication
As the name suggests, CCM mode combines the well known CBC-MAC with the well known counter mode of encryption. These two primitives are applied in an "authenticate-then-encrypt" manner, that is, CBC-MAC is first computed on the message to obtain a tag t; the message and the tag are then encrypted using counter mode. The main insight is that the same encryption key can be used for both, provided that the counter values used in the encryption do not collide with the (pre-)initialization vector used in the authentication. A proof of security exists for this combination, based on the security of the underlying block cipher. The proof also applies to a generalization of CCM for any size block cipher, and for any size cryptographically strong pseudo-random function (since in both counter mode and CBC-MAC, the block cipher is only ever used in one direction).
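For readers who want to use CCM in practice, a short example with the AESCCM primitive from the Python cryptography package follows (class and call names per that library's documented AEAD interface); note the nonce-uniqueness requirement discussed above.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aesccm = AESCCM(key)                    # 128-bit AES, 16-byte tag by default
nonce = os.urandom(13)                  # CCM nonces are 7-13 bytes and must never repeat per key
aad = b"packet header"                  # authenticated but not encrypted
ciphertext = aesccm.encrypt(nonce, b"secret message", aad)
assert aesccm.decrypt(nonce, ciphertext, aad) == b"secret message"  # raises InvalidTag if tampered
```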
CCM mode was designed by Russ Housley, Doug Whiting and Niels Ferguson. At the time CCM mode was developed, Russ Housley was employed by RSA Laboratories.
A minor variation of the CCM, called CCM*, is used in the ZigBee standard. CCM* includes all of the features of CCM and additionally offers encryption-only capabilities.
Performance
CCM requires two block cipher encryption operations on each block of an encrypted-and-authenticated message, and one encryption on each block of associated authenticated data.
According to Crypto++ benchmarks, AES CCM requires 28.6 cycles per byte on an Intel Core 2 processor in 32-bit mode.
Notable inefficiencies:
CCM is not an "on-line" AEAD, in that the length of the message (and associated data) must be known in advance.
In the MAC construction, the length of the associated data has a variable-length encoding, which can be shorter than machine word size. This can cause pessimistic MAC performance if associated data is long (which is uncommon).
Associated data is processed after message data, so it is not possible to pre-calculate state for static associated data.
Patents
The catalyst for the development of CCM mode was the submission of OCB mode for inclusion in the IEEE 802.11i standard. Opposition was voiced to the inclusion of OCB mode because of a pending patent application on the algorithm. Inclusion of a patented algorithm meant significant licensing complications for implementors of the standard.
While the inclusion of OCB mode was disputed based on these intellectual property issues, it was agreed that the simplification provided by an authenticated encryption system was desirable. Therefore, Housley, et al. developed CCM mode as a potential alternative that was not encumbered by patents.
Even though CCM mode is less efficient than OCB mode, a patent free solution was preferable to one complicated by patent licensing issues. Therefore, CCM mode went on to become a mandatory component of the IEEE 802.11i standard, and OCB mode was relegated to optional component status, before eventually being removed altogether.
Use
CCM mode is used in the IEEE 802.11i (as CCMP, an encryption algorithm for WPA2), IPsec, and TLS 1.2, as well as Bluetooth Low Energy (as of Bluetooth 4.0). It is available for TLS 1.3, but not enabled by default in OpenSSL.
See also
Authenticated encryption
EAX mode
Galois/counter mode
Stream cipher
Stream cipher attack
CCMP
References
External links
: Counter with CBC-MAC (CCM)
: Using Advanced Encryption Standard (AES) CCM Mode with IPsec Encapsulating Security Payload (ESP)
: AES-CCM Cipher Suites for Transport Layer Security (TLS)
A Critique of CCM (by the designer of OCB)
Block cipher modes of operation
Authenticated-encryption schemes |
16429566 | https://en.wikipedia.org/wiki/5436%20Eumelos | 5436 Eumelos | 5436 Eumelos is a mid-sized Jupiter trojan from the Greek camp, approximately in diameter. It was discovered on 20 February 1990, by American astronomers Carolyn and Eugene Shoemaker at the Palomar Observatory in California. The dark Jovian asteroid has been identified as the principal body of the small Eumelos family and is likely elongated in shape with a longer-than-average rotation period of 38.4 hours. It was named after the Greek warrior and charioteer Eumelus from Greek mythology.
Orbit and classification
Eumelos is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the leading Greek camp at the Gas Giant's Lagrangian point, 60° ahead on its orbit. It orbits the Sun at a distance of 4.8–5.6 AU once every 11 years and 10 months (4,333 days; semi-major axis of 5.2 AU). Its orbit has an eccentricity of 0.08 and an inclination of 7° with respect to the ecliptic.
The body's observation arc begins with its first observation at the CERGA Observatory in December 1986, more than three years prior to its official discovery observation at Palomar.
Eumelos family
Fernando Roig and Ricardo Gil-Hutton identified Eumelos as the principal body of a small Jovian asteroid family, using the hierarchical clustering method (HCM), which looks for groupings of neighboring asteroids based on the smallest distances between them in the proper orbital element space. According to the astronomers, the Eumelos family belongs to the larger Menelaus clan, an aggregation of Jupiter trojans which is composed of several families, similar to the Flora family in the inner asteroid belt.
However, this family is not included in David Nesvorný's HCM analysis from 2014. Instead, Eumelos is listed as a non-family asteroid of the Jovian background population on the Asteroids Dynamic Site (AstDyS), which is based on another analysis by Milani and Knežević.
Naming
This minor planet was named after Eumelus (Eumelos), son of King Admetus and leader of the Greek contingent from Pherae in the Trojan War. At the funeral games for Patroclus, he was the fifth and last in the chariot races, competing against Diomedes, Menelaus, Antilochus and Meriones. Though Eumelus came in last, Achilles awarded him the bronze corselet stripped from the Trojan Asteropaios (see table below for the correspondingly named Jupiter trojans). The official naming citation was published by the Minor Planet Center on 12 July 1995 ().
Physical characteristics
Eumelos is an assumed C-type asteroid, while the majority of larger Jupiter trojans are D-type asteroids.
Rotation period
In 2013, a rotational lightcurve of Eumelos was obtained from photometric observations by Linda French and Lawrence Wasserman at the Anderson Mesa Station of the Lowell Observatory, using its 0.8-meter NURO telescope over three consecutive nights until 1 March 2013. Robert Stephens at the Center for Solar System Studies in Landers, California, then observed this asteroid for five more nights during 10–14 March 2013. Lightcurve analysis gave a well-defined rotation period of hours with a brightness amplitude of 0.68 magnitude, indicative of a non-spherical shape ().
In August 2015, observations by the Kepler space telescope during its K2 mission gave two lightcurves with an alternative period of and hours with a brightness variation of 0.40 and 0.43 magnitude, respectively (). The Collaborative Asteroid Lightcurve Link (CALL), labels the period determination for this asteroid as ambiguous.
Diameter and albedo
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Eumelos measures 37.70 kilometers in diameter and its surface has an albedo of 0.086, while CALL assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 46.30 kilometers based on an absolute magnitude of 10.4.
Notes
References
External links
Lightcurve Database Query (LCDB), at www.minorplanet.info
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center
Asteroid 5436 Eumelos at the Small Bodies Data Ferret
005436
Discoveries by Carolyn S. Shoemaker
Discoveries by Eugene Merle Shoemaker
Minor planets named from Greek mythology
Named minor planets
19900220 |
58923946 | https://en.wikipedia.org/wiki/1993%20Troy%20State%20Trojans%20football%20team | 1993 Troy State Trojans football team | The 1993 Troy State Trojans football team represented Troy State University in the 1993 NCAA Division I-AA football season. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama and competed as an independent.
The Trojans finished the regular season with a 10–0–1 record and a No. 1 national ranking in The Sports Network poll. Troy State advanced to the NCAA Division I-AA Football Championship playoffs, beating in the first round and McNeese State in the quarterfinals, before losing a hard-fought game to Marshall in the semifinals by a score of 24–21.
Schedule
References
Troy State
Troy Trojans football seasons
Troy State Trojans football |
20608603 | https://en.wikipedia.org/wiki/Rajarata%20University%20of%20Sri%20Lanka | Rajarata University of Sri Lanka | Rajarata University of Sri Lanka (, , abbreviated RUSL) is a public university located in the historic city of Mihintale, near Anuradhapura, Sri Lanka. The Rajarata University of Sri Lanka was established as the eleventh University in Sri Lanka and was opened on 31 January 1996 by President Chandrika Kumaratunga.
Over the years, the University has developed into a centre of excellence in higher education in the North Central Province and in Sri Lanka as a whole. The academic programs of the RUSL are offered by six faculties, namely Technology, Agriculture, Applied Sciences, Management Studies, Medicine and Allied Sciences, and Social Sciences and Humanities. The main administrative complex and the Faculties of Applied Sciences, Technology, Management Studies, and Social Sciences and Humanities are located at Mihintale, while the Faculty of Agriculture and the Faculty of Medicine and Allied Sciences are located at Puliyankulama and Saliyapura, respectively.
History
Rajarata University was established as the eleventh National University in Sri Lanka and was opened on 31 January 1996 by President Chandrika Bandaranayake, in accordance with the Gazette Notification 896/2 and the University Act 16 of 1978.
The official opening ceremony was attended by Prime Minister Sirimavo Bandaranayake, Hon. Speaker Kiri Banda Ratnayake, Minister of Higher Education Richard Pathirana, Deputy Minister of Higher Education Wiswa Warnapala, Governor NCP Maithripala Senanayake, Chairman UGC Prof. Stanley Thilakaratne, the University's first Vice Chancellor Prof. W.I. Siriweera and the University's first chancellor, Dr. Jayantha Kelegama.
Having examined the social conditions encountered by Sri Lanka during the 1970s and 1980s, the then government decided to establish Affiliated University Colleges to provide opportunities for higher education to youth who were qualified for, but deprived of, university education. Consequently, Affiliated University Colleges were established in Makandura, Kuliyapitiya and Anuradhapura. Subsequently, on 7 November 1995, the Affiliated University Colleges were amalgamated, and the affiliated university colleges of Kuliyapitiya and Makandura were named the Wayamba Campus of the Rajarata University of Sri Lanka.
At the inception, four faculties namely Faculty of Social Sciences and Humanities, Faculty of Management Studies, Faculty of Agriculture and Faculty of Applied Sciences were established and in the year 2006, the Faculty of Medicine and Allied Sciences was established as the fifth faculty of RUSL. The faculties of Social Sciences and Humanities, Faculty of Management Studies and Faculty of Applied Sciences are located in the Mihintale premises while the Faculties of Agriculture and Medicine and Allied Sciences are situated in Puliyankulama and Saliyapura respectively.
In November 1995 the Rajarata University of Sri Lanka (RUSL) was established by the Gazette Notification No: 896/2 of 7 November 1995 in the administrative District of Anuradhapura. The Central Province Affiliated University College (CPAUC) in Polgolla, located 140 km from the main campus at Mihinthale, Anuradhapura, was amalgamated into the RUSL as its Faculty of Applied Sciences (FASc). The immediate task of the FASc was to upgrade all the students of the CPAUC who had successfully completed their diploma requirements to the graduate level. To carry out this task, the FASc was inaugurated on 10 January 1997 to commence the third-year degree programme with a batch of 102 students, who graduated in 1998. The first batch of students who were directly sent by the UGC to follow the degree programme was enrolled in November 1997.
After functioning for nearly 10 years at Polgolla, the faculty was established in the main campus at Mihinthale, on 16 January 2006 upon completion of Stage I of the building complex.
Faculties
The university consists of six faculties: Technology, Applied Sciences, Agriculture, Management Studies, Medicine and Allied Sciences, and Social Sciences and Humanities.
Faculty of Agriculture (FA)
The Faculty of Agriculture (FA) offers the B.Sc. Agriculture (Special), a four-year degree in Agriculture.
On the recommendation of the Committee on Affiliated University Colleges (1994), nine Affiliated University Colleges spread out in various provinces of the country were merged to form two national universities, the Rajarata and the Sabaragamuwa University of Sri Lanka, in 1996. The Affiliated University College of the North Western Province, which consisted of two academic sections, namely Home Science and Nutrition and Agriculture, originally affiliated with the Universities of Kelaniya and Peradeniya respectively, was merged to form the Wayamba Campus of the Rajarata University in terms of the provisions of sections 18 and 47(1) of the University Act No. 16 of 1978 and Campus Board Ordinance No. 3 of 1995.
Two faculties were set up to form the Wayamba Campus namely, the faculty of Agricultural Science and the faculty of Applied Sciences, each with three departments of study.
The Faculty of Agricultural Sciences comprised the Departments of Plantation Management, Horticultural Sciences and Food Technology, and Agricultural Engineering. A three-year general degree in Agricultural Science was offered.
Later, in 1999, a committee was appointed to make recommendations to upgrade the Wayamba Campus to a fully-fledged university. Based on the recommendations of this committee, the Wayamba University was established in August 1999. With that, the Faculty of Agricultural Sciences was transferred to the Wayamba University in 1999 and the Rajarata University lost its Agriculture Faculty.
In 2001, a new Agriculture Faculty of the Rajarata University was established at Puliyankulama, close to ancient Anuradhapura and about ten kilometers from Mihintale, where the administration building complex and the other sister faculties are located. It was started in a renovated paddy store complex where the Faculty of Social Sciences and Humanities had been located earlier.
From the beginning, the faculty comprised the following departments:
Department of Agricultural Systems
Department of Plant Sciences
Department of Agricultural Engineering and Soil Science
Department of Animal and Food Science
A new curriculum was designed with the objective of uplifting dry zone agriculture, leading to a four-year B.Sc. (Special) degree in Agriculture.
The first batch of 17 students was recruited on 21 April 2001 under the patronage of the first dean of the Faculty of Agriculture at Puliyankulama, Prof. S.H. Upasena. Since then, two comprehensive curriculum revisions have been carried out, in 2003 and 2006. The total annual intake today is 100 students.
Faculty of Applied Sciences (FASc)
The following degree programs are offered by the FASc.
The Department of Biological Sciences
General Degrees (3 Year)
B.Sc. in Applied Sciences
Honors Degrees (4 Year)
B.Sc. Honors in Applied Sciences
B.Sc. Joint Major in Biology & Physics
B.Sc. Honors in Applied Biology (Specialization area Biodiversity & Conservation)
B.Sc. Honors in Applied Biology (Specialization area Fisheries & Aquaculture Management)
B.Sc. Honors in Applied Biology (Specialization area Microbiology)
Department of Chemical Sciences
Bachelor of Science in Applied Sciences
Bachelor of Science Honours in Applied Sciences
Bachelor of Science Honours in Chemistry
Bachelor of Science Honours in Chemistry and Physics
Department of Computing
Bachelor of Science in Information Technology (B.Sc. in IT)
Department of Health Promotion
Bachelor's degree in the region dedicated to Health Promotion
Department of Physical Sciences
General Degrees
B.Sc. (General) 3 year degree in Applied Sciences
4 Year Degrees
B.Sc. 4 year degree in Applied Sciences
B.Sc. 4 year degree in Industrial Mathematics
B.Sc. (Joint Major) 4 year degree in Chemistry & Physics
B.Sc. 4 year degree in Computer Science
B.Sc. (Special) degree in Chemistry
The FASc at Mihintale consists of five departments: Biological Sciences, Physical Sciences, Chemical Sciences, Health Promotion and Computing. The Department of Biological Sciences offers courses in the fields of Botany, Zoology and Biology, while the Department of Physical Sciences offers courses in the fields of Chemistry, Physics, Pure Mathematics, Applied Mathematics, Computer Science, and Information and Communication Technology.
All the courses are offered in English.
Faculty of Management Studies
The Faculty of Management Studies offers first-level and postgraduate degrees. It consists of four departments that offer the following B.Sc. degrees:
B.Sc. (Special) in Accountancy & Finance
B.Sc. (Special) in Business Management
B.Sc. (Special) in Tourism & Hospitality Management
B.Sc. (Special) in Business Information Technology
These degree programmes are conducted in English. The Faculty of Management Studies, Rajarata University is the only faculty that offers B.Sc. Degrees in Tourism & Hospitality Management and Business Information Technology in Sri Lanka.
Postgraduate studies involve postgraduate diplomas, an MBA and an M.Sc. (Management).
Faculty of Medicine and Allied Sciences (FMAS)
This faculty was started in 2005 as a concept of President Mahinda Rajapaksa. The first acting dean was Prof. Malkanthi Chandrasekara, followed by Prof. Malani Udupihille. In 2013, the first dean from the permanent staff was appointed. The faculty was originally intended for 120 students, and due to the problems prevailing in the country in 2007, 60 students from an eastern university were also sent to FMAS. Since then, nearly 180 students have entered the faculty each year, and three batches of students had already graduated as of 2014. Prof. Sisira Siribaddana, founding professor in Clinical Medicine at FMAS, serves as the present dean. Despite notable problems related to inadequate staff, students' performances are on par with those of all other faculties in Sri Lanka.
Faculty of Social Sciences and Humanities (FSSH)
The Faculty of Social Sciences and Humanities has functioned since the inception of the university in 1995. It is the oldest faculty of the Rajarata University and is located on the historic Mihintale premises. Three departments were initially assigned to the faculty; in 2015 another two departments were added, bringing the total to five.
Department of Social Sciences
Department of Humanities
Department of Archaeology and Heritage Management
Department of Environmental Management
Department of Languages
Department of English Language Teaching
Department of Economics
Faculty of Technology (FOT)
The Technology Programme was inaugurated at the FASc in January 2017. Despite the commencement of these programmes under the FASc, a separate Faculty of Technology is to be established in due course, which will include five departments and accommodate an estimated intake of 245 students annually.
The Technology Faculty consists of five departments, each leading to its respective degree programme:
Department of Materials Technology - ENT
Department of Electrical & Electronic Technology- ENT
Department of Bio Process Technology- BST
Department of Food Technology- BST
Department of Information & Communication Technology- ITT
The curricula of these degrees follow a hands-on approach, with more applied course content than fundamentals, thereby addressing the needs of the respective industries of the country that the degree programmes are meant to serve. All courses under the Technology Programme are offered in English. As such, students are required to follow an intensive English language course, both to follow the lecture material and in preparation for their future endeavours upon successful completion of the degree programmes.
References
External links
Faculty of Medicine and Allied Sciences official webpage
Faculty of Technology official webpage
University Media Unit We Cover all the events inside Rajarata University of Sri Lanka
Buildings and structures in North Central Province, Sri Lanka
Educational institutions established in 1995
Education in North Central Province, Sri Lanka
Statutory boards of Sri Lanka
Universities in Sri Lanka
1995 establishments in Sri Lanka |
285106 | https://en.wikipedia.org/wiki/Australian%20National%20University | Australian National University | The Australian National University (ANU) is a public research university located in Canberra, the capital of Australia. Its main campus in Acton encompasses seven teaching and research colleges, in addition to several national academies and institutes.
ANU is regarded as one of the world's leading universities, and is ranked as the number one university in Australia and the Southern Hemisphere by the 2022 QS World University Rankings and second in Australia in the Times Higher Education rankings. Compared to other universities in the world, it is ranked 27th by the 2022 QS World University Rankings, and equal 54th by the 2022 Times Higher Education.
Established in 1946, ANU is the only university to have been created by the Parliament of Australia. It traces its origins to Canberra University College, which was established in 1929 and was integrated into ANU in 1960. ANU enrols 10,052 undergraduate and 10,840 postgraduate students and employs 3,753 staff. The university's endowment stood at A$1.8 billion as of 2018.
ANU counts six Nobel laureates and 49 Rhodes scholars among its faculty and alumni. The university has educated two prime ministers, 30 current Australian ambassadors and more than a dozen current heads of government departments of Australia. The latest releases of ANU's scholarly publications are held through ANU Press online.
History
Post-war origins
Calls for the establishment of a national university in Australia began as early as 1900. After the location of the nation's capital, Canberra, was determined in 1908, land was set aside for the university at the foot of Black Mountain in the city designs by Walter Burley Griffin. Planning for the university was disrupted by World War II but resumed with the creation of the Department of Post-War Reconstruction in 1942, ultimately leading to the passage of the Australian National University Act 1946 by the Chifley Government on 1 August 1946.
A group of eminent Australian scholars returned from overseas to join the university, including Sir Howard Florey (co-developer of medicinal penicillin), Sir Mark Oliphant (a nuclear physicist who worked on the Manhattan Project), and Sir Keith Hancock (the Chichele Professor of Economic History at Oxford). The group also included a New Zealander, Sir Raymond Firth (a professor of anthropology at LSE), who had earlier worked in Australia for some years. Economist Sir Douglas Copland was appointed as ANU's first Vice-Chancellor and former Prime Minister Stanley Bruce served as the first Chancellor. ANU was originally organised into four centres—the Research Schools of Physical Sciences, Social Sciences and Pacific Studies and the John Curtin School of Medical Research.
The first residents' hall, University House, was opened in 1954 for faculty members and postgraduate students. Mount Stromlo Observatory, established by the federal government in 1924, became part of ANU in 1957. The first locations of the ANU Library, the Menzies and Chifley buildings, opened in 1963. The Australian Forestry School, located in Canberra since 1927, was amalgamated by ANU in 1965.
Canberra University College
Canberra University College (CUC) was the first institution of higher education in the national capital, having been established in 1929 and enrolling its first undergraduate pupils in 1930. Its founding was led by Sir Robert Garran, one of the drafters of the Australian Constitution and the first Solicitor-General of Australia. CUC was affiliated with the University of Melbourne and its degrees were granted by that university. Academic leaders at CUC included historian Manning Clark, political scientist Finlay Crisp, poet A. D. Hope and economist Heinz Arndt.
In 1960, CUC was integrated into ANU as the School of General Studies, initially with faculties in arts, economics, law and science. Faculties in Oriental studies and engineering were introduced later. Bruce Hall, the first residential college for undergraduates, opened in 1961.
Modern era
The Canberra School of Music and the Canberra School of Art combined in 1988 to form the Canberra Institute of the Arts, and amalgamated with the university as the ANU Institute of the Arts in 1992.
ANU established its Medical School in 2002, after obtaining federal government approval in 2000.
On 18 January 2003, the Canberra bushfires largely destroyed the Mount Stromlo Observatory. ANU astronomers now conduct research from the Siding Spring Observatory, which contains 10 telescopes including the Anglo-Australian Telescope.
In February 2013, financial entrepreneur and ANU graduate Graham Tuckwell made the largest university donation in Australian history by giving $50 million to fund an undergraduate scholarship program at ANU.
ANU is well known for its history of student activism and, in recent years, its fossil fuel divestment campaign, which is one of the longest-running and most successful in the country. The decision of the ANU Council to divest from two fossil fuel companies in 2014 was criticised by ministers in the Abbott government, but defended by Vice Chancellor Ian Young, who noted that ANU holds investments in major fossil fuel companies.
A survey conducted by the Australian Human Rights Commission in 2017 found that the ANU had the second-highest incidence of sexual assault and sexual harassment. 3.5 per cent of respondents from the ANU reported being sexually assaulted in 2016. Vice Chancellor Brian Schmidt apologised to victims of sexual assault and harassment.
The ANU had funding and staff cuts in the School of Music in 2011-15 and in the School of Culture, History and Language in 2016. However, there is a range of global (governmental) endowments available for Arts and Social Sciences, designated only for ANU. Some courses are now delivered online.
ANU has exchange agreements in place for its students with many foreign universities, most notably in the Asia-Pacific region, including the National University of Singapore, the University of Tokyo, the University of Hong Kong, Peking University, Tsinghua University and Seoul National University. In other regions, notable partners include the George Washington University, the University of California, the University of Texas and the University of Toronto in North America, and Université Paris Sciences et Lettres, Imperial College London, King's College London, Sciences Po, ETH Zürich, Bocconi University, the University of Copenhagen and Trinity College Dublin in Europe.
In 2017, Chinese hackers infiltrated the computers of Australian National University, potentially compromising national security research conducted at the university.
Campus
The main campus of ANU extends across the Canberra suburb of Acton, which consists of of mostly parkland with university buildings landscaped within. ANU is roughly bisected by Sullivans Creek, part of the Murray–Darling basin, and is bordered by the native bushland of Black Mountain, Lake Burley Griffin, the suburb of Turner and the Canberra central business district. Many university sites are of historical significance dating from the establishment of the national capital, with over 40 buildings recognised by the Commonwealth Heritage List and several others on local lists.
With over 10,000 trees on its campus, ANU won an International Sustainable Campus Network Award in 2009 and was ranked the 2nd greenest university campus in Australia in 2011.
Four of Australia's five learned societies are based at ANU—the Australian Academy of Science, the Australian Academy of the Humanities, the Academy of the Social Sciences in Australia and the Australian Academy of Law. The Australian National Centre for the Public Awareness of Science and the National Film and Sound Archive are also located at ANU, while the National Museum of Australia and CSIRO are situated next to the campus.
ANU occupies additional locations including Mount Stromlo Observatory on the outskirts of Canberra, Siding Spring Observatory near Coonabarabran, a campus at Kioloa on the South Coast of New South Wales and a research unit in Darwin.
Library
The library of ANU originated in 1948 with the appointment of the first librarian, Arthur McDonald. The library holds over 2.5 million physical volumes distributed across six branches—the Chifley, Menzies, Hancock, Art & Music, and Law Libraries and the external Print Repository. The Chifley and Hancock libraries are both accessible to ANU staff and students 24 hours a day.
Residential halls and colleges
Eleven residential facilities are affiliated with ANU—Bruce Hall, Burgmann College, Burton & Garron Hall, Fenner Hall, Gowrie Hall, Graduate House, John XXIII College, Toad Hall, Ursula Hall, Wamburun Hall, and Wright Hall. All are located on campus except Gowrie Hall, which is located in the nearby suburb of Braddon. Students also reside in the privately run units adjoining the campus—Davey Lodge, Kinloch Lodge, Warrumbul Lodge and Lena Karmel Lodge. In 2010, the non-residential Griffin Hall was established for students living off-campus. Another off-campus student accommodation, University Gardens in Belconnen, was launched by UniGardens Pty.
In 2014, 2019 and 2020 there were major protests organised by student leaders across all of the ANU's halls of residence against steep rent hikes, neglect of pastoral care support, and repeated failures to address issues relating to sexual assault and sexual harassment. Though supported by a majority of students living on residence, the ANU's response to past protests has been mixed, with many recommendations and requests for student consultations ignored. The 2020 protests revolved around demands for stronger SASH policy, accountability surrounding tariff rises, and commitments to adequate pastoral care; the outcome of these protests is as yet unknown.
Drill Hall Gallery
The Drill Hall Gallery is housed in a drill hall dating from the 1940s, built for use in training soldiers for the Second World War and as a base for the 3rd Battalion, Werriwa Regiment. The interior was remodelled to create an art gallery in 1984, and in 2004 the building was heritage-listed. Temporary exhibitions of the national collection were held in the hall while the National Gallery of Australia was being built. ANU took over the hall in 1992 to exhibit its own collection of artworks, and also as a venue for temporary exhibitions.
There are four separate exhibition spaces, which provide the venues not only for exhibitions developed by or in collaboration with the university, but also to accompany major conferences and public events. The venue hosts both national and international exhibitions. Sidney Nolan's panorama, Riverbend, which comprises nine panels, is on permanent display at the Drill Hall Gallery.
Academic structure
Colleges
ANU was reorganised in 2006 to create seven Colleges, each of which leads both teaching and research.
Arts and Social Sciences
The ANU College of Arts and Social Sciences is divided into the Research School of Social Sciences (RSSS) and the Research School of Humanities and the Arts (RSHA). Within RSSS there are schools dedicated to history, philosophy, sociology, political science and international relations, Middle Eastern studies and Latin American studies. RSHA contains schools focusing on anthropology, archaeology, classics, art history, English literature, drama, film studies, gender studies, linguistics, European languages as well as an art and music school. In 2017, ANU ranked 6th in the world for politics, 8th in the world for Social Policy and Administration and 11th in the world for development studies. It is also home to the Australian Studies Institute, the ANU Centre for Aboriginal Economic Policy Research and the ANU Centre for Social Research and Methods.
The College's School of Philosophy houses the ANU Centre for Consciousness and the ANU Centre for Philosophy of the Sciences, as well as the ANU Centre for Moral, Social and Political Theory (CMSPT), an organization whose purpose is to "become a world-leading forum for exposition and analysis of the evolution, structure, and implications of our moral, social and political life." Its president is Nicholas Southwood and key people include Seth Lazar, Geoff Brennan, Bob Goodin, Frank Jackson, Philip Pettit and Michael Smith.
Asia and the Pacific
The ANU College of Asia and the Pacific (CAP) is a specialist centre of Asian and Pacific studies and languages, among the largest collections of experts in these fields of any university in the English-speaking world. The College is home to four academic schools: the Crawford School of Public Policy, a research intensive public policy school; the School of Culture, History and Language, the nation's centre dedicated to investigating and learning with and about the people, languages, and lands of Asia and the Pacific; Coral Bell School of Asia Pacific Affairs, Australia's foremost collection of expertise in the politics and international affairs of Asia and the Pacific; and the School of Regulation and Global Governance (RegNet, formerly the Regulatory Institutions Network), a world-renowned research school dedicated to the interdisciplinary study of regulation and governance.
The College also houses the Australian Centre on China in the World, the Strategic and Defence Studies Centre and the Council for Security Cooperation in the Asia Pacific (CSCAP), Australia. It has dedicated regional institutes for China, Indonesia, Japan, Korea, Malaysia, Mongolia, Myanmar, the Pacific, Southeast Asia and South Asia. The College hosts a series of annual and biannual updates on various regions in the Asia-Pacific. The Crawford School of Public Policy houses the Asia Pacific Arndt-Corden Department of Economics, the Asia Pacific Network for Environmental Governance (APNEG), the Australia-Japan Research Centre, the Centre for Applied Macroeconomic Analysis, the Centre for Nuclear Non-Proliferation and Disarmament, the East Asian Bureau of Economic Research, the Tax and Transfer Policy Institute, the ANU National Security College, the East Asia Forum publication and a number of other centres. The Crawford School of Public Policy also hosts offices and programs for the Australia and New Zealand School of Government (ANZSOG). Many high-performing Year in Asia program students gain the opportunity to travel to an Asian country of their choosing to study for one year, specializing in one Asian language.
The College also has affiliation with Indiana University's Pan Asia Institute.
Business and Economics
The ANU College of Business and Economics comprises four Research Schools, which carry out research and teaching in economics, finance, accounting, actuarial studies, statistics, marketing and management. Dedicated research centres within these schools include the Social Policy Evaluation, Analysis and Research Centre, the Australian National Centre for Audit and Assurance Research, the ANU Centre for Economic History, the National Centre for Information Systems Research and the ANU Centre for Economic Policy Research. The college is professionally accredited with the Institute of Chartered Accountants Australia, CPA Australia, the Australian Computer Society, the Actuaries Institute Australia, the Institute of Public Accountants, the Association of International Accountants, the Chartered Financial Analyst Institute and the Statistical Society of Australia Inc. It also has membership of the World Wide Web Consortium (W3C).
Engineering and Computer Science
The ANU College of Engineering and Computer Science is divided into two Research Schools, which study a range of engineering and computer science topics respectively. ANU is home to the National Computational Infrastructure National Facility and was a co-founder of NICTA, the chief information and communications technology research centre in Australia. Research groups in the ANU College of Engineering and Computer Science include Algorithms and Data, Applied Signal Processing, Artificial Intelligence, Centre for Sustainable Energy Systems, Computer Systems, Computer Vision and Robotics, Data-Intensive Computing, Information and Human Centred Computing, Logic & Computation, Materials and Manufacturing, Semiconductor and Solar Cells, Software Intensive Systems Engineering, Solar Thermal Group, and Systems and Control. Disciplinary areas include theories, operations and cutting-edge research that will enhance user experience by integrating ever-evolving information technology methods in engineering applications, with an emphasis on energy sources.
Law
The ANU College of Law covers legal research and teaching, with centres dedicated to commercial law, international law, public law and environmental law. In addition to numerous research programs, the College offers the professional LL.B. and J.D. degrees. It is the 7th oldest of Australia's 36 law schools and was ranked 2nd among Australian and 12th among world law schools by the 2018 QS Rankings. Students are given the chance to spend three weeks in Geneva concerning the institutional practice of International Law.
Medicine, Biology and Environment
The ANU College of Medicine, Biology and Environment encompasses the John Curtin School of Medical Research (JCSMR), the ANU Medical School, the Fenner School of Environment & Society and Research Schools of Biology, Psychology and Population Health. JCSMR was established in 1948 as a result of the vision of Nobel laureate Howard Florey. Three further Nobel Prizes have been won as a result of research at JCSMR—in 1963 by John Eccles and in 1996 by Peter Doherty and Rolf M. Zinkernagel.
Physical and Mathematical Sciences
The ANU College of Physical & Mathematical Sciences comprises the Research Schools of Astronomy & Astrophysics, Chemistry, Earth Sciences, Mathematical Sciences and Physics. Under the direction of Mark Oliphant, nuclear physics was one of the university's most notable early research priorities, leading to the construction of a 500 megajoule homopolar generator and a 7.7 megaelectronvolts cyclotron in the 1950s. These devices were to be used as part of a 10.6 gigaelectronvolt synchrotron particle accelerator that was never completed, however they remained in use for other research purposes. ANU has been home to eight particle accelerators over the years and operates the 14UD and LINAS accelerators. Brian Schmidt (astrophysicist at Mount Stromlo Observatory) received the 2011 Nobel Prize for Physics for his work on the accelerating expansion of the universe.
Governance and funding
ANU is governed by a 15-member Council, whose members include the Chancellor and Vice-Chancellor. Gareth Evans, a former Foreign Minister of Australia, was ANU Chancellor from 2010 to December 2019 and Brian Schmidt, an astrophysicist and Nobel Laureate, has served as Vice-Chancellor since 1 January 2016. Evans was succeeded as Chancellor by a fellow former Foreign Minister, Julie Bishop, in January 2020.
Finances
At the end of 2018, ANU recorded an endowment of A$1.8 billion.
Rankings
ANU was ranked 27th in the world (first in Australia) by the 2022 QS World University Rankings, and equal 54th in the world, and equal 2nd in Australia (with the University of Queensland), by the 2022 Times Higher Education.
In the QS World University Rankings by Subject 2020, ANU was ranked 6th in the world for geology, 7th for philosophy, 8th in the world for politics, 9th in the world for sociology, 13th in the world for development studies and 15th in the world for linguistics.
A 2017 study by Times Higher Education reported that ANU was the world's 7th (first in Australia) most international university.
In the 2020 Times Higher Education Global Employability University Ranking, an annual ranking of university graduates' employability, ANU was ranked 15th in the world (first in Australia).
Student life
Australian National University Students' Association (ANUSA) is the students' union of the Australian National University and represents undergraduate and ANU College students, while the Postgraduate and Research Students' Association (PARSA) represents postgraduates. The Australian National University Union manages catering and retail outlets and function amenities on behalf of all students.
Woroni
Woroni is the student magazine of the Australian National University, first formed in 1947. Woroni is published fortnightly in full colour tabloid format, and features broad coverage of university and local news, opinion, features, arts and culture, sports, and leisure. Most of the newspaper since its beginnings have been digitised through the Australian Newspapers Digitisation Program of the National Library of Australia. Woroni also features an online radio broadcast, Woroni Radio, as well as video production through Woroni TV.
Network compromise
The network of the university was subject to a serious compromise from November 9 to December 21, 2018. ABC News reported that the initial breach occurred when a phishing message was previewed. After investigating, the university published a report on the incident, in which the Chief Information Security Officer provided recommendations to avoid further compromise.
Notable alumni and faculty
Faculty
Notable past faculty include Mark Oliphant, Keith Hancock, Manning Clark, Derek Freeman, H. C. Coombs, Gareth Evans, John Crawford, Hedley Bull, Frank Fenner, C. P. Fitzgerald, Pierre Ryckmans, A. L. Basham, Bernhard Neumann, and former Indonesian Vice-President Boediono. Nobel Prizes have been awarded to former ANU Chancellor Howard Florey and faculty members John Eccles, John Harsanyi, Rolf M. Zinkernagel, Peter Doherty and Brian Schmidt. Notable present scholars include Hilary Charlesworth, Ian McAllister, Hugh White, Warwick McKibbin, Keith Dowding, Amin Saikal and Jeremy Shearmur.
Alumni
ANU alumni are often visible in government. Bob Hawke and Kevin Rudd, former Australian Prime Ministers, attended the university, as did senior politicians Annastacia Palaszczuk, Barry O'Farrell, Nick Minchin, Kim Beazley Sr, Peter Garrett, Craig Emerson, Stephen Conroy, Gary Gray, Warren Snowdon, Joe Ludwig, Catherine King and Michael Keenan. ANU has produced 30 current Australian Ambassadors, and more than a dozen current heads of Australian Public Service departments, including Prime Minister & Cabinet secretaries Michael Thawley and Martin Parkinson, Finance secretary Jane Halton, Education secretary Lisa Paul, Agriculture secretary Paul Grimes, Attorney-General's secretary Chris Moraitis, Environment secretary Gordon de Brouwer, Employment secretary Renee Leon, Social Services secretary Finn Pratt, Industry secretary Glenys Beauchamp, Australian Secret Intelligence Service director-general Nick Warner and Australian Competition & Consumer Commission chairman Rod Sims. Graduates also include Prime Minister of the Solomon Islands Gordon Darcy Lilo, Foreign Minister of Mongolia Damdin Tsogtbaatar, former Indonesian Foreign Minister Marty Natalegawa, former Governor of the Reserve Bank of New Zealand Don Brash, former British Secretary of State for Health Patricia Hewitt and former U.S. Ambassador to Israel Martin Indyk.
Other notable alumni include High Court of Australia judges Stephen Gageler and Geoffrey Nettle, Fijian archaeologist Tarisi Vunidilo, Wallisian member of the Congress of New Caledonia Ilaïsaane Lauouvéa, Chief Federal Magistrate John Pascoe, political journalist Stan Grant, human rights lawyer Jennifer Robinson, former Chief of Army David Morrison, Kellogg's CEO John Bryant, former Singapore Airlines CEO Cheong Choong Kong, Indiana University president Michael McRobbie, University of Melbourne Vice-Chancellors Alan Gilbert and Glyn Davis, mathematician John H. Coates, computer programmer Andrew Tridgell, public intellectual Clive Hamilton, journalist Bettina Arndt, and economists John Deeble, Ross Garnaut, Peter Drysdale, John Quiggin and commercial litigator Jozef Maynard Borja Erece, the youngest law graduate in Australian history.
Honorary doctorate recipients
Notable Honorary Doctorate recipients have included former Australian public officials Stanley Bruce, Robert Menzies, Richard Casey, Angus Houston, Brendan Nelson, Owen Dixon, Australian notable persons Sidney Nolan, Norman Gregg, Charles Bean, foreign dignitaries Harold Macmillan, Lee Kuan Yew, Aung San Suu Kyi, Sheikh Hasina, K. R. Narayanan, Nelson Mandela, Desmond Tutu, Saburo Okita and notable foreign scientists John Cockcroft, Jan Hendrik Oort and Alexander R. Todd.
Affiliations
ANU is a member of the Group of Eight, Association of Pacific Rim Universities, the International Alliance of Research Universities, UNESCO Chairs, U7 Alliance, Winter Institute and Global Scholars Program.
ANU participates in the US Financial Direct Loan program. The RG Menzies Scholarship to Harvard University is awarded annually to at least one talented Australian who has gained admission to a Harvard graduate school. ANU and the University of Melbourne are the only two Australian partner universities of Yale University's Fox Fellowship program. ANU has exchange partnerships with Yale University, Brown University, MIT and Oxford University, and a research partnership with Harvard University.
See also
ANU research centres and institutes
ARC Training Centre for Automated Manufacture of Advanced Composites
Australian National University Boat Club
List of universities in Australia
References
External links
Australian National University
National universities
1946 establishments in Australia
Educational institutions established in 1946
Universities in the Australian Capital Territory
Buildings and structures in Canberra
Group of Eight (Australian universities) |
24881187 | https://en.wikipedia.org/wiki/Getopt | Getopt | getopt is a C library function used to parse command-line options of the Unix/POSIX style. It is a part of the POSIX specification, and is universal to Unix-like systems.
It is also the name of a Unix program for parsing command line arguments in shell scripts.
History
A long-standing issue with command line programs was how to specify options; early programs used many ways of doing so, including single character options (-a), multiple options specified together (-abc is equivalent to -a -b -c), multicharacter options (-inum), options with arguments (-a arg, -inum 3, -a=arg), and different prefix characters (-a, +b, /c).
The function was written to be a standard mechanism that all programs could use to parse command-line options so that there would be a common interface on which everyone could depend. As such, out of these variations the original authors chose to support single character options,
multiple options specified together, and options with arguments (-a arg or -aarg), all controllable by an option string.
getopt dates back to at least 1980 and was first published by AT&T at the 1985 UNIFORUM conference in Dallas, Texas, with the intent for it to be available in the public domain. Versions of it were subsequently picked up by other flavors of Unix (4.3BSD, Linux, etc.). It is specified in the POSIX.2 standard as part of the unistd.h header file. Derivatives of getopt have been created for many programming languages to parse command-line options.
Extensions
getopt is a system-dependent function, and its behavior depends on the implementation in the C library. Some custom implementations like gnulib are available, however.
The conventional (POSIX and BSD) handling is that the options end when the first non-option argument is encountered, and that getopt would return -1 to signal that. In the glibc extension, however, options are allowed anywhere for ease of use; getopt implicitly permutes the argument vector so it still leaves the non-options at the end. Since POSIX already has the convention of returning -1 on -- and skipping it, one can always portably use it as an end-of-options signifier.
A GNU extension, getopt_long, allows parsing of more readable, multicharacter options, which are introduced by two dashes instead of one. The choice of two dashes allows multicharacter options (--inum) to be differentiated from single character options specified together (-abc). The GNU extension also allows an alternative format for options with arguments: --name=arg. This interface proved popular, and has been taken up (sans the permutation) by many BSD distributions including FreeBSD as well as Solaris. An alternative way to support long options is seen in Solaris and Korn Shell (extending optstring), but it was not as popular.
Another common advanced extension of getopt is resetting the state of argument parsing; this is useful as a replacement for the options-anywhere GNU extension, or as a way to "layer" a set of command-line interfaces with different options at different levels. This is achieved on BSD systems using an optreset variable, and on GNU systems by setting optind to 0.
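A minimal sketch of scanning the same argument vector twice (the option string "ab:" and the helper name scan are arbitrary here; optreset is the BSD spelling and resetting optind to 0 is the glibc-documented approach, so the exact reset incantation should be checked against the target platform's manual):

#include <unistd.h>

static void scan(int argc, char **argv) {
    int c;
    while ((c = getopt(argc, argv, "ab:")) != -1) {
        /* handle 'a', 'b' (with optarg) and '?' here */
    }
}

int main(int argc, char **argv) {
    scan(argc, argv);  /* first pass */
#ifdef __GLIBC__
    optind = 0;        /* glibc: force full re-initialization of getopt */
#else
    optreset = 1;      /* BSD extension: request re-initialization */
    optind = 1;        /* restart scanning at argv[1] */
#endif
    scan(argc, argv);  /* second pass sees the same options again */
    return 0;
}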
A common companion function to getopt is getsubopt. It parses a string of comma-separated sub-options.
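getsubopt is declared in stdlib.h on POSIX systems and consumes one comma-separated token per call. A hedged sketch (the suboption names "ro" and "size" are made up for illustration) of handling an invocation such as prog -o ro,size=512:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
    char *const tokens[] = { "ro", "size", NULL };  /* table must end with NULL */
    char *subopts, *value;
    int c;

    while ((c = getopt(argc, argv, "o:")) != -1) {
        if (c != 'o')
            continue;
        subopts = optarg;                 /* e.g. "ro,size=512" */
        while (*subopts != '\0') {
            switch (getsubopt(&subopts, tokens, &value)) {
            case 0:                       /* "ro" */
                printf("read-only requested\n");
                break;
            case 1:                       /* "size" */
                printf("size = %s\n", value ? value : "(no value)");
                break;
            default:                      /* -1: token not in the table */
                fprintf(stderr, "unknown suboption\n");
                break;
            }
        }
    }
    return 0;
}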
Usage
For users
The command-line syntax for getopt-based programs is the POSIX-recommended Utility Argument Syntax. In short:
Options are single-character alphanumerics preceded by a - (hyphen-minus) character.
Options can take an argument, mandatory or optional, or none.
In order to specify that an option takes an argument, include : after the option name (only during initial specification)
When an option takes an argument, this can be in the same token or in the next one. In other words, if o takes an argument, -ofoo is the same as -o foo.
Multiple options can be chained together, as long as the non-last ones are not argument taking. If a and b take no arguments while e takes an optional argument, -abe is the same as -a -b -e, but -bea is not the same as -b -e a due to the preceding rule.
All options precede non-option arguments (except in the GNU extension). -- always marks the end of options.
Extensions on the syntax include the GNU convention and Sun's specification.
For programmers
The getopt manual from GNU specifies the following usage for getopt:
#include <unistd.h>
int getopt(int argc, char * const argv[],
const char *optstring);
Here argc and argv are defined exactly as they are in the C main function prototype; i.e., argc indicates the length of the argv array-of-strings. The optstring argument contains a specification of what options to look for (normal alphanumerals except ), and which options accept arguments (colons). For example, refers to three options: an argumentless , an optional-argument , and a mandatory-argument . GNU here implements a extension for long option synonyms.
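As a concrete illustration, the option string "abc:d:012" used in the example programs later in this article declares a, b, 0, 1 and 2 as options taking no argument, while the colons after c and d mark those two options as requiring an argument; glibc additionally treats a double colon (for example e::) as marking an option whose argument is optional.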
getopt itself returns an integer that is either an option character or -1 for end-of-options. The idiom is to use a while-loop to go through options, and to use a switch-case statement to pick and act on options. See the example section of this article.
To communicate extra information back to the program, a few global variables are referenced by the program to fetch information from getopt:
extern char *optarg;
extern int optind, opterr, optopt;
optarg A pointer to the argument of the current option, if present.
optind The index in argv where getopt is currently looking. Can be used to control where to start parsing (again).
opterr A boolean switch controlling whether getopt should print error messages.
optopt If an unrecognized option occurs, the value of that unrecognized character.
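The full examples below exercise optarg and optind; the following shorter sketch (with arbitrary options n and t) shows how opterr and optopt are typically used to take over error reporting:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int c;

    opterr = 0;  /* suppress getopt's own diagnostics; we report errors ourselves */
    while ((c = getopt(argc, argv, "nt:")) != -1) {
        switch (c) {
        case 'n':
            puts("option n");
            break;
        case 't':
            printf("option t with value '%s'\n", optarg);
            break;
        case '?':  /* unknown option, or missing argument for -t */
            fprintf(stderr, "bad or incomplete option: -%c\n", optopt);
            return 1;
        }
    }
    for (; optind < argc; optind++)  /* optind now indexes the operands */
        printf("operand: %s\n", argv[optind]);
    return 0;
}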
The GNU extension interface is similar, although it belongs to a different header file and takes an extra option for defining the "short" names of long options and some extra controls. If a short name is not defined, getopt will put an index referring to the option structure in the longindex pointer instead.
#include <getopt.h>
int getopt_long(int argc, char * const argv[],
const char *optstring,
const struct option *longopts, int *longindex);
Examples
Using POSIX standard getopt
#include <stdio.h> /* for printf */
#include <stdlib.h> /* for exit */
#include <unistd.h> /* for getopt */
int main (int argc, char **argv) {
int c;
int digit_optind = 0;
int aopt = 0, bopt = 0;
char *copt = 0, *dopt = 0;
while ((c = getopt(argc, argv, "abc:d:012")) != -1) {
int this_option_optind = optind ? optind : 1;
switch (c) {
case '0':
case '1':
case '2':
if (digit_optind != 0 && digit_optind != this_option_optind) {
printf ("digits occur in two different argv-elements.\n");
}
digit_optind = this_option_optind;
printf ("option %c\n", c);
break;
case 'a':
printf ("option a\n");
aopt = 1;
break;
case 'b':
printf ("option b\n");
bopt = 1;
break;
case 'c':
printf ("option c with value '%s'\n", optarg);
copt = optarg;
break;
case 'd':
printf ("option d with value '%s'\n", optarg);
dopt = optarg;
break;
case '?':
break;
default:
printf ("?? getopt returned character code 0%o ??\n", c);
}
}
if (optind < argc) {
printf ("non-option ARGV-elements: ");
while (optind < argc) {
printf ("%s ", argv[optind++]);
}
printf ("\n");
}
exit (0);
}
Using GNU extension getopt_long
#include <stdio.h> /* for printf */
#include <stdlib.h> /* for exit */
#include <getopt.h> /* for getopt_long; POSIX standard getopt is in unistd.h */
int main (int argc, char **argv) {
int c;
int digit_optind = 0;
int aopt = 0, bopt = 0;
char *copt = 0, *dopt = 0;
static struct option long_options[] = {
/* NAME ARGUMENT FLAG SHORTNAME */
{"add", required_argument, NULL, 0},
{"append", no_argument, NULL, 0},
{"delete", required_argument, NULL, 0},
{"verbose", no_argument, NULL, 0},
{"create", required_argument, NULL, 'c'},
{"file", required_argument, NULL, 0},
{NULL, 0, NULL, 0}
};
int option_index = 0;
while ((c = getopt_long(argc, argv, "abc:d:012",
long_options, &option_index)) != -1) {
int this_option_optind = optind ? optind : 1;
switch (c) {
case 0:
printf ("option %s", long_options[option_index].name);
if (optarg) {
printf (" with arg %s", optarg);
}
printf ("\n");
break;
case '0':
case '1':
case '2':
if (digit_optind != 0 && digit_optind != this_option_optind) {
printf ("digits occur in two different argv-elements.\n");
}
digit_optind = this_option_optind;
printf ("option %c\n", c);
break;
case 'a':
printf ("option a\n");
aopt = 1;
break;
case 'b':
printf ("option b\n");
bopt = 1;
break;
case 'c':
printf ("option c with value '%s'\n", optarg);
copt = optarg;
break;
case 'd':
printf ("option d with value '%s'\n", optarg);
dopt = optarg;
break;
case '?':
break;
default:
printf ("?? getopt returned character code 0%o ??\n", c);
}
}
if (optind < argc) {
printf ("non-option ARGV-elements: ");
while (optind < argc) {
printf ("%s ", argv[optind++]);
}
printf ("\n");
}
exit (0);
}
In Shell
Shell script programmers commonly want to provide a consistent way of providing options. To achieve this goal, they turn to getopts and seek to port it to their own language.
The first attempt at porting was the program getopt, implemented by Unix System Laboratories (USL). This version was unable to deal with quoting and shell metacharacters, as it made no attempt at quoting its output. It has been inherited by FreeBSD.
In 1986, USL decided that being unsafe around metacharacters and whitespace was no longer acceptable, and they created the builtin getopts command for the Unix SVR3 Bourne Shell instead. The advantage of building the command into the shell is that it has access to the shell's variables, so values can be written safely without quoting. It uses the shell's own variables, OPTIND and OPTARG, to track the current option and argument positions, and returns the option name in a shell variable.
In 1995, getopts was included in the Single UNIX Specification version 1 / X/Open Portability Guidelines Issue 4. Now a part of the POSIX Shell standard, getopts has spread far and wide in many other shells trying to be POSIX-compliant.
getopt was basically forgotten until util-linux came out with an enhanced version that fixed all of the old getopt's problems through escaping. It also supports GNU's long option names. On the other hand, long options have been implemented only rarely in the getopts command of other shells, ksh93 being an exception.
In other languages
getopt is a concise description of the common POSIX command argument structure, and it is replicated widely by programmers seeking to provide a similar interface, both to themselves and to the user on the command-line.
C: non-POSIX systems do not ship getopt in the C library, but gnulib and MinGW (both accept GNU-style options), as well as some more minimal libraries, can be used to provide the functionality. Alternative interfaces also exist:
The popt library, used by the RPM package manager, has the additional advantage of being reentrant.
The argp family of functions in glibc and gnulib provides some more convenience and modularity.
D programming language: has a getopt module in the D standard library.
Go: comes with the flag package, which allows long flag names. Another package supports processing closer to the C function, and there is also a package providing an interface much closer to the original POSIX one.
Haskell: comes with System.Console.GetOpt, which is essentially a Haskell port of the GNU getopt library.
Java: There is no implementation of getopt in the Java standard library. Several open source modules exist, including gnu.getopt.Getopt, which is ported from GNU getopt, and Apache Commons CLI.
Lisp: has many different dialects with no common standard library. There are some third party implementations of getopt for some dialects of Lisp. Common Lisp has a prominent third party implementation.
Free Pascal: has its own implementation as one of its standard units named GetOpts. It is supported on all platforms.
Perl programming language: has two separate derivatives of getopt in its standard library: Getopt::Long and Getopt::Std.
PHP: has a getopt function.
Python: contains a getopt module in its standard library, based on C's getopt and the GNU extensions. Python's standard library also contains other modules to parse options that are more convenient to use.
Ruby: has an implementation of getopt_long in its standard library, GetoptLong. Ruby also has modules in its standard library with a more sophisticated and convenient interface. A third party implementation of the original getopt interface is available.
.NET Framework: does not have getopt functionality in its standard library. Third-party implementations are available.
References
External links
POSIX specification
GNU getopt manual
Full getopt port for Unicode and Multibyte Microsoft Visual C, C++, or MFC projects
C POSIX library
Command-line software |
24555020 | https://en.wikipedia.org/wiki/AN/DRC-8%20Emergency%20Rocket%20Communications%20System | AN/DRC-8 Emergency Rocket Communications System | The Emergency Rocket Communications System (ERCS) was designed to provide a reliable and survivable emergency communications method for the United States National Command Authority, using a UHF repeater placed atop a Blue Scout rocket or Minuteman II intercontinental ballistic missile. ERCS was deactivated as a communication means when President George H.W. Bush issued a message to stand down SIOP-committed bombers and Minuteman IIs on 27 September 1991. Headquarters SAC was given approval by the Joint Chiefs of Staff to deactivate the 494L payloads beginning 1 October 1992. However, Headquarters SAC believed it was inefficient and unnecessary to support ERCS past fiscal year 1991, and kept the accelerated deactivation schedule.
Mission
The mission of the Emergency Rocket Communications System was to provide assured communication to United States strategic forces in the event of a nuclear attack. ERCS was a rocket or missile that carried a UHF transmitter as a payload instead of a nuclear warhead. In the event of a nuclear attack, ERCS would launch the UHF transmitter into low space to transmit an Emergency Action Message (EAM) to Strategic Air Command units.
The ERCS sorties had two possible trajectories, East and West, to inform SAC alert forces in the northern tier bases (i.e. Minot AFB, Fairchild AFB, Grand Forks AFB).
ERCS was deactivated and taken out of the inventory as other means of emergency communication (i.e. ISST and Milstar) came online.
Nomenclature
ERCS was also known as Project 279 (Blue Scout version) and Project 494L (Minuteman version). Sources report that the Project 279 was also known as Project Beanstalk; while the Minuteman system may have been designated LEM-70A.
Operations
The Blue Scout version of ERCS (Program 279) was deployed to three sites near Wisner, West Point, and Tekamah, Nebraska. The Program 494L Minuteman version of ERCS was only deployed to Whiteman AFB, Missouri's 351st Strategic Missile Wing, under the direct control of the 510th Strategic Missile Squadron (later the 510th Missile Squadron).
ERCS was a three part communications system composed of the following elements:
The five 510th Strategic Missile Squadron Launch Control Centers, which exercised primary control over the ERCS
The Minuteman missiles configured with ERCS payloads that were capable of accepting a voice recorded message of up to 90 seconds in length
The SAC airborne command post (ABNCP) ALCC-equipped aircraft which served as an alternate ERCS control agency.
Interface with ERCS hardware was provided by three modes:
A land line through ground grouping points (North Bend, Nebraska and Red Oak, Iowa) allowed the airborne command post interface with 494L equipment
A UHF radio link through the Launch Control Center to the Launch Facility
A direct radio interface to the Launch Facility, through the Airborne Launch Control System
Headquarters Strategic Air Command had the ability to make inputs directly into the missile. The Numbered Air Forces could direct the missile crew to make the inputs. In the case of the airborne command post, inputs could be made directly into the missile and missile launch could be made from the aircraft.
Testing
Operational tests of the 494L Minuteman II ERCS were conducted by Air Force Systems Command and Strategic Air Command under the code name GIANT MOON. Launch Control Facility Oscar-1A (LCF O-1A) and Launch Facility Zero Four (LF-04) at Vandenberg AFB, California were modified in 1977 to perform ERCS-related test functions.
ERCS sortie location
After the system was declassified, the ten ERCS sorties were powered down and removed from their launch facilities. During these power-down operations, the locations of the sorties were:
Material and support
The Ogden Air Materiel Area at Hill AFB, Utah was made the Systems Support Manager in August 1963.
Chronology
29 September 1961 – HQ USAF issues Specific Operational Requirement (SOR) 192, for ERCS (designated Program 279)
27 December 1961 – Interim configuration finalized of three rockets with 1 KW transmitters, stationed around Omaha, Nebraska; four sites with three rockets each
5 April 1962 – Amendment to SOR 192 to include two east coast ERCS complexes, based on CHROME DOME routes and SAC elements in Europe
21 September 1962 – SAC study recommends use of Minuteman missile, to eliminate Program 279 and its proposed expansion
7 June 1962 – SAC proposes changes to SOR 192, such as using six Minuteman missiles selected from among the flights of an operational wing; this was envisioned not to impair the alternative capability of substituting nuclear warheads should future circumstances warrant.
11 July 1962 – Program 279 attains Initial Operating Capability (IOC); UHF transmitter payloads attached to three MER-6A Blue Scout rockets at three sites near Wisner, West Point, and Tekamah, Nebraska
13 December 1966 – A Minuteman II launched from Vandenberg AFB, Calif. carried the first Minuteman ERCS payload into space for testing and evaluation
17 April 1967 – Third, and last, test of the ERCS using a Minuteman booster; Emergency Action Message was inserted into the transmitter from an ALCS aircraft.
15 August 1967 – First Program 494L payload arrives at Whiteman AFB, Missouri
10 October 1967 – First two Program 494L ERCS payloads put on alert at Whiteman AFB, Missouri; IOC obtained for Program 494L ERCS
1 January 1968 – Full Operational Capability (FOC) obtained for Program 494L ERCS; Program 279 ERCS inactivated by SAC
23 October 1974 – ERCS test, designated GIANT MOON 6, launched from Vandenberg AFB. Test was monitored on two frequencies by ground facilities. PACOM at Hickam AFB maintained valid reception of the JCS WHITE DOT ONE message for 22 minutes and another message for 14 minutes
27 September 1991 – President George H. W. Bush terminated SAC's alert force operations, which included taking Minuteman II ICBMs (including ERCS sorties) off-alert.
In popular culture
ERCS is mentioned in The Dead Hand: The Untold Story of the Cold War Arms Race and its Dangerous Legacy by David Hoffman.
ERCS is mentioned in Arc Light by Eric Harry.
See also
Dead Hand – Russia's quasi-version of ERCS, relaying launch codes instead of messages
Post-Attack Command and Control System (PACCS)
Airborne Launch Control System (ALCS)
Ground Wave Emergency Network (GWEN)
Minimum Essential Emergency Communications Network (MEECN)
Survivable Low Frequency Communications System (SLFCS)
Primary Alerting System (PAS)
SAC Automated Command and Control System (SACCS)
References
External links
Nuclear warfare
Telecommunications equipment of the Cold War
United States nuclear command and control |
18693996 | https://en.wikipedia.org/wiki/Joe%20Whitley | Joe Whitley | Joe Dally Whitley (born November 12, 1950) is an American lawyer from Georgia. He was the first General Counsel for the United States Department of Homeland Security. He works in private practice at Baker Donelson and has been named a Super Lawyer, listed in The Best Lawyers in America, named a 2019 "Lawyer of the Year", rated AV Preeminent (Peer Review Rated) by Martindale-Hubbell, and listed in Chambers USA: America's Leading Business Lawyers.
Background
During the George H.W. Bush administration, Whitley served as the Acting United States Associate Attorney General, the third-ranking position in the United States Department of Justice. Under President Ronald Reagan, he was the U.S. Attorney in the Middle District of Georgia, and under President George H.W. Bush, Whitley served as the U.S. Attorney in the Northern and Middle Districts of Georgia in Atlanta. At the time of his appointment he was the youngest person ever to be appointed a U.S. Attorney and the only person ever to serve as U.S. Attorney for two separate federal jurisdictions.
Private Practice
Prior to joining the Department of Homeland Security, and immediately following his service at DHS, Whitley was a partner at Alston & Bird, where he served as head of the firm's White Collar Government Enforcement & Investigations Group, his practice concentrating on government investigations, environmental and health care fraud and complex civil litigation.
Whitley is the former Chair of the Section of Administrative Law & Regulatory Practice of the American Bar Association. He is a former council member of the Criminal Justice Section of the American Bar Association. He served as Vice Chair for Governmental Affairs of the 2002–03 ABA Criminal Justice Section. He also chairs annual seminars and institutes of continuing education on White Collar Crime, Health Care Fraud, the Foreign Corrupt Practices Act, Internal Investigations and Cybercrime. Whitley chairs the ABA's Annual National Homeland Security Law Institute in Washington, DC.
Education and Teaching
Whitley received his bachelor's degree, cum laude, from the University of Georgia in 1972, and a J.D., cum laude, in 1975 from the University of Georgia School of Law. He currently serves on the Board of Visitors for the University of Georgia School of Public and International Affairs and as Non-Resident Fellow at The Center for International Trade & Security (CITS) at the University of Georgia. He has been Adjunct Professor at the George Washington University Law School, and Adjunct Professor at the American University Washington College of Law, teaching Homeland Security Law at both.
Bar Admissions
Whitley is licensed to practice law in Georgia and the District of Columbia.
Publications and Presentations
Whitley is a frequent author, speaker and lecturer at institutions, events and seminars, including the following.
Publications
Homeland Security: Legal and Policy Issues, with Lynne K. Zusman, book published by the American Bar Association
Co-Author, "The Case for Reevaluating DOJ Policies on Prosecuting White Collar Crime," Washington Legal Foundation, Critical Legal Issues, No. 108, May 2002
Author, "Business Continuity", Directors & Boards, February 2006
Author, "Homeland Security: Preparing for Legal and Policy Changes," Bloomberg Corporate Law Journal, Winter 2006
Author, "Homeland Security After Hurricane Katrina: Where Do We Go From Here," Natural Resources & Environment, June 6, 2006
Author, "The SAFETY Act: A Vital Tool In The Fight Against Terrorism," Contemporary Legal Notes, November 2006
Author, "New Federal Rule Dictating Anti-Terrorism Standards for Chemical Facilities," Washington Legal Foundation Contemporary Legal Series, June 2007
Author, "Critical Infrastructure," American Bar Association Homeland Security and National Defense Newsletter, Fall 2007
Author, "The ICE-Man Cometh: Crackdown of Immigration in the Meat Processing Industry," American Bar Association Homeland Security and National Defense Newsletter, Fall 2007
Author, "Chemical Security: Recent Regulation and the Impact on the Private Sector," New Jersey Law Journal, Fall 2007
Author, "Recent Developments in Critical Infrastructure Protection," The Real Estate Finance Journal, Fall 2007
Co-author – "Homeland Security and Domestic Intelligence: Legal Considerations," The U.S. Intelligence Community Law Sourcebook, American Bar Association, 2011 Edition (August 2011)
"Judge Approves New Scheduling Order in Expedia Case," Ledger-Enquirer (February 2012)
Quoted – "Likely FBI Nominee to Face NSA Debate," The Wall Street Journal (June 2013)
Quoted – "The Morning Risk Report: Drop in FCPA Independent Monitors Continues in 2013," The Wall Street Journal (July 2013)
Co-author – "Lessons for General Counsel from Recent Cyberattack on the U.S. Office of Personnel Management," Daily Report (August 2015)
"WLF Overcriminalization Timeline:Deferred-Prosecution and Non-Prosecution Agreements, Washington Legal Foundation (November 2015)
Co-author – "Creating Value in FCPA Investigations Through Increasing Cooperation Credit," FCPA Report (January 2016)
Co-author – "What To Do Before Government Agents Come Knocking," Attorney at Law Magazine, Vol. 5 No. 1 (March 2016)
Co-author – "How to Value Compliance Programs, Internal Investigations," Daily Report (July 2016)
Co-author – "Cybersecurity Public Private Partnerships: Challenges and Opportunities, Cybersecurity Law & Strategy (February 2017)
Co-author – "INSIGHT: The Looming Litigation Buried in the Mueller Report," Bloomberg Law (March 2019)
Presentations
Joe D. Whitley On the C-SPAN Networks
ABA 10th Annual Administrative Law & Regulatory Practice Institute (Section Chair – Joe D. Whitley)
"National Institute on White Collar Crime" (March 2019)
The National Security Institute, Antonin Scalia Law School, George Mason University
"Ethical Guidance in the Corporate Board Room," American Bar Association (ABA)
Southeastern White Collar Crime Institute (September 2018)
American Bar Association (ABA) Regional Southeastern White Collar Crime Institute (September 2018)
"National Institute on Health Care Fraud" (May 2018)
"Federal Bar Association's Current Issues in Government Investigations" (April 2018)
"The 32nd Annual National Institute on White Collar Crime" (February 2018)
"Georgia ICLE Health Care Fraud Institute" (December 2017)
"Health Care Fraud Institute'' Institute of Continuing Legal Education (December 2017)
"Baker Donelson Compliance Symposium" (November 2017)
"Cybercon 2017" (October 2017)
"The Role of Lawyers in Cybersecurity," Homeland Security Law Institute, George Washington University, Washington D.C. (September 2017)
"The State of Homeland Security and the Rule of Law," National Security Program and the National Security Law Association, George Washington University Law School, Washington, D.C. (September 2017)
"ABA Criminal Justice Section's Southeast White Collar Crime Institute" (September 2017)
Panelist – "2017 NACUA Annual Conference" (June 2017)
"Cybercon 2016" (September 2016)
References
Baker Donelson profile of Joe D. Whitley
Department of Homeland Security press release on nomination of Joe D. Whitley to serve as first General Counsel
External links
Living people
United States Attorneys for the Middle District of Georgia
United States Attorneys for the Northern District of Georgia
United States Department of Homeland Security officials
University of Georgia alumni
University of Georgia School of Law alumni
1950 births |
507692 | https://en.wikipedia.org/wiki/PC%20speaker | PC speaker | A PC speaker is a loudspeaker built into some IBM PC compatible computers. The first IBM Personal Computer, model 5150, employed a standard 2.25-inch magnetically driven (dynamic) speaker. More recent computers use a tiny moving-iron or piezo speaker instead. The speaker allows software and firmware to provide auditory feedback to a user, such as to report a hardware fault. A PC speaker generates waveforms using the programmable interval timer, an Intel 8253 or 8254 chip.
Usage
BIOS/UEFI error codes
The PC speaker is used during the power-on self-test (POST) sequence to indicate errors during the boot process. Since it is active before the graphics card, it can be used to communicate "beep codes" related to problems that prevent the much more complex initialization of the graphics card from taking place. For example, the Video BIOS usually cannot activate a graphics card unless working RAM is present in the system, while beeping the speaker is possible with just ROM and the CPU registers. Usually, different error codes will be signaled by specific beeping patterns, such as "one beep; pause; three beeps; pause; repeat". These patterns are specific to the BIOS/UEFI manufacturer and are usually documented in the technical manual of the motherboard.
Software
Several programs, including music software, operating systems, and games, could play pulse-code modulation (PCM) sound through the PC speaker using the special pulse-width modulation techniques explained later in this article.
Games
The PC speaker was often used in very innovative ways to create the impression of polyphonic music or sound effects within computer games of its era, such as the LucasArts series of adventure games from the mid-1990s, using swift arpeggios. Several games such as Space Hulk and Pinball Fantasies were noted for their elaborate sound effects; Space Hulk, in particular, even had full speech.
However, because the method used to reproduce PCM was very sensitive to timing issues, these effects either caused noticeable sluggishness on slower PCs, or sometimes failed completely on faster PCs (that is, significantly faster than the program was originally developed for). Also, it was difficult for programs to do much else, even update the display, during the playing of such sounds. Thus, when sound cards (which can output complex sounds independent from the CPU once initiated) became mainstream in the PC market after 1990, they quickly replaced the PC speaker as the preferred output device for sound effects. Most newly released PC games stopped supporting the speaker during the second half of the 1990s.
Other programs
Several programs, including MP (Module Player, 1989), Scream Tracker, Fast Tracker, Impulse Tracker, and even device drivers for Linux and Microsoft Windows, could play PCM sound through the PC speaker.
Modern Microsoft Windows systems have PC speaker support as a separate device with special capabilities – that is, it cannot be configured as a normal audio output device. Some software uses this special sound channel to produce sounds. For example, Skype can use it as a reserve calling signal device for the case where the primary audio output device cannot be heard (for example because the volume is set to the minimum level or the amplifier is turned off).
In the 1990s, a computer virus for Microsoft DOS named "Techno" appeared, playing a melody through the PC speaker while printing the word "TECHNO" on the screen until it was filled.
Pinouts
In some applications, the PC speaker is affixed directly to the computer's motherboard; in others, including the first IBM Personal Computer, the speaker is attached by wire to a connector on the motherboard. Some PC cases come with a PC speaker preinstalled. A wired PC speaker connector may have a two-, three-, or four-pin configuration, and either two or three wires. The female connector of the speaker connects to pin headers on the motherboard, which are sometimes labeled to indicate the speaker connection.
Pulse-width modulation
The PC speaker is normally meant to reproduce a square wave via only 2 levels of output (two voltage levels, typically 0 V and 5 V), driven by channel 2 of the Intel 8253 (PC, XT) or 8254 (AT and later) Programmable Interval Timer operating in mode three (square wave signal). The speaker hardware itself is directly accessible via PC I/O port 61H (61 hexadecimal) via bit 1 and can be physically manipulated for 2 levels of output (i.e. 1-bit sound). However, by carefully timing a short pulse (i.e. going from one output level to the other and then back to the first), and by relying on the speaker's physical filtering properties (limited frequency response, self-inductance, etc.), it is possible to drive the speaker to various intermediate output levels, functioning as a crude digital-to-analog converter. This technique is called pulse-width modulation (PWM) and allows approximate playback of PCM audio. (A more refined version of this technique is used in class D audio amplifiers.)
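To make the tone-generation path just described concrete, the C sketch below shows the classic programming sequence: write the mode/command byte to port 43h, load the 16-bit divisor into channel 2 via port 42h, then set bits 0 and 1 of port 61h to gate the square wave onto the speaker. It is a minimal sketch, not production code: it assumes an x86 Linux system with root privileges, glibc's <sys/io.h> port-I/O helpers, and legacy PIT/speaker hardware that many modern machines no longer have.

```c
/* Minimal sketch: drive the PC speaker with a square wave from PIT channel 2.
 * Assumes x86 Linux, root privileges, and legacy speaker hardware. */
#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

#define PIT_CLOCK 1193180UL            /* PIT input clock in Hz */

static void speaker_tone(unsigned int hz)
{
    unsigned int divisor = PIT_CLOCK / hz;

    outb(0xB6, 0x43);                  /* channel 2, lobyte/hibyte access, mode 3 (square wave) */
    outb(divisor & 0xFF, 0x42);        /* low byte of divisor */
    outb((divisor >> 8) & 0xFF, 0x42); /* high byte of divisor */
    outb(inb(0x61) | 0x03, 0x61);      /* set timer gate (bit 0) and speaker data (bit 1) */
}

static void speaker_off(void)
{
    outb(inb(0x61) & ~0x03, 0x61);     /* clear gate and data bits */
}

int main(void)
{
    if (ioperm(0x42, 2, 1) || ioperm(0x61, 1, 1)) {  /* request access to ports 0x42/0x43 and 0x61 */
        perror("ioperm");
        return 1;
    }
    speaker_tone(440);                 /* concert A */
    sleep(1);
    speaker_off();
    return 0;
}
```

The same register sequence is what DOS-era programs and BIOS beep routines used; only the mechanism for reaching the I/O ports differs between environments.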
With the PC speaker this method achieves limited quality playback, but a commercial solution named RealSound used it to provide improved sound on several games.
Obtaining a high-fidelity sound output using this technique requires a switching frequency much higher than the audio frequencies meant to be reproduced (typically with a ratio of 10:1 or more), and the output voltage to be bipolar, in order to make better use of the output device's dynamic range and power. On the PC speaker, however, the output voltage is either zero or TTL level (unipolar).
The quality depends on a trade-off between the PWM carrier frequency (effective sample rate) and the number of output levels (effective bit depth). The clock rate of the PC's programmable interval timer, which drives the speaker, is fixed at 1,193,180 Hz, and the product of the audio sample rate and the maximum DAC value must equal this figure.
Typically, a 6-bit DAC with a maximum value of 63 is used at a sample rate of 18,939.4 Hz, producing poor but recognizable audio.
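The trade-off can be illustrated with a few lines of C: dividing the fixed 1,193,180 Hz timer clock by the number of output levels gives the achievable effective sample rate, so extra amplitude resolution directly costs sample rate. The level counts used below are illustrative values, not ones prescribed by any particular player.

```c
/* Illustration of the PWM trade-off: the fixed PIT clock is split between
 * amplitude resolution (number of levels) and effective sample rate. */
#include <stdio.h>

int main(void)
{
    const double pit_clock = 1193180.0;            /* PIT input clock, Hz */
    const int levels[] = { 16, 32, 63, 128, 256 }; /* hypothetical DAC depths */
    const int n = sizeof(levels) / sizeof(levels[0]);

    for (int i = 0; i < n; i++)
        printf("%3d output levels -> about %.1f Hz effective sample rate\n",
               levels[i], pit_clock / levels[i]);
    return 0;
}
```

With 63 levels the program prints roughly 18,939 Hz, matching the figure quoted above.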
The audio fidelity of this technique is further decreased by the lack of a properly sized dynamic loudspeaker, especially in modern machines, and particularly laptops, that use a tiny moving-iron speaker (often confused with a piezoelectric one). The reason for this is that PWM-produced audio requires a low-pass filter before the final output in order to suppress switching noise and high harmonics. A normal dynamic loudspeaker does this naturally, but the tiny metal diaphragm of the moving-iron speaker will let much switching noise pass, as will many direct couplings (though there are exceptions to this, e.g. filtered "speaker in" ports on some motherboards and sound cards).
This use of the PC speaker for complex audio output became less common with the introduction of Sound Blaster and other sound cards.
See also
Intel 8253
RealSound
Loudspeaker enclosure
Notes
External links
Smacky Open-source C++ software for playing (monophonic) music on the PC speaker.
Site for old PC without sound cards.
Programming the PC Speaker, by Mark Feldman for PC-GPE.
Programming the PC Speaker, by Phil Inch: part 1, part 2 (includes a very detailed explanation of how to play back PCM audio on the PC speaker, and why it works)
Bleeper Music Maker A freeware to use the PC speaker to make music (superseded by BaWaMI)
Beep for Linux and Windows, by Frank Buß. APIs for beeping.
Command-line PC speaker program for Linux (FTP)
Practical article on implementing a Linux Kernel Driver
Timing on the PC family under DOS (Sections 7.5, 7.29, 7.30, and 10.7 – 10.7.4 in particular)
Legacy hardware
Loudspeakers
Computer-related introductions in 1981 |
41214386 | https://en.wikipedia.org/wiki/Institution%20of%20Analysts%20and%20Programmers | Institution of Analysts and Programmers | The Institution of Analysts and Programmers is a professional body that represents those working in systems analysis, design, programming and the implementation of computer systems, both in the United Kingdom and internationally. Established in 1972, it has supported system developers across the world.
Overview
With a worldwide membership, the IAP is a private company limited by guarantee and a registered charity in England and Wales. Its objectives are to promote the ethical development of computer systems and applications. In addition, it promotes the learning of systems development among people of all ages.
The IAP has its head office in Hanwell, London and its Administration Centre in Worthing.
Members have access to a wide range of information and can download the Software Development Practice online magazine on a regular basis.
Timeline
1972: Founded as the University Computer Association
1981: Name Changed to The Institution of Analysts and Programmers
1981: The late Bob Charles appointed as Secretary General
1990: Mike Ryan appointed Director General
1992: Institution Incorporated
1994: Granted Coat of Arms
2010: Alastair Revell appointed as Director General
2011: Transformation Programme Started
2017: Adopted a new constitution
2018: The Institution became a charity in England and Wales (Charity Number 1179558)
2021: Founding Member of the UK Cyber Security Council
Governance
The IAP is governed by a Trustee Board, which comprises:-
(a) up to six Elected Trustees elected at a general meeting by the membership;
(b) up to three Lay Trustees appointed from outside of the membership of the Institution by the Trustee Board;
(c) an appointed Trustee to act as Treasurer as an Ex Officio Trustee;
(d) the Director General as an Ex Officio Trustee (unless remunerated);
(e) the Chair of the council as an Ex Officio Trustee;
(f) the vice-chair of the council as an Ex Officio Trustee.
The Trustee Board elects the President and vice-president from the Elected Trustees.
The day-to-day operation of the Institution is delegated to the Director General, who appoints the executive board, which includes an Operations Director and a Director for Membership Engagement.
Membership
The IAP has the following grades of membership:-
Licentiate (LIAP)
Graduate Member (GradIAP)
Associate Member (AMIAP)
Member (MIAP)
Fellow (FIAP)
Distinguished Fellow (DFIAP)
It also has two grades that do not carry post-nominal letters: Registrant and Affiliate.
Work
The Institution has been extensively involved in the formation of the UK Cyber Security Council, becoming a founding member of the Cyber Security Alliance in 2016, which successfully bid to form the Council for HM Government. The project was led by the IET, a fellow alliance partner.
In 2021, the Institution supported the inaugural Cyber OSPAs, fielding Alastair Revell (its Director General) as a judge.
Communities of Practice
The Institution has recently established a Community of Practice around cyber security (Cyber COP), bringing together a number of leading software developers with experience in writing secure code.
Academic Prizes Programme
The Academic Prizes Programme is a venture with some universities where the IAP awards prizes to students for their software projects. The awards are given for excellence in design and development. John Thompson was an early recipient of this scheme at the University of Plymouth.
Subsidiary
The Institution is the parent body of the Trustworthy Software Foundation, the successor body to the Trustworthy Software Initiative (TSI) established under the UK National Cyber Security Programme I to promote good software development practices.
References
External links
IAP Website
Software Development Practice website
Trustworthy Software Foundation
Wiki links
Trustworthy Software Foundation
Professional associations based in the United Kingdom
Information technology organisations based in the United Kingdom
Information technology charities |
38992674 | https://en.wikipedia.org/wiki/ISEE%20%28company%29 | ISEE (company) | ISEE is a European multinational company that designs and manufactures small computer-on-modules (COMs), single-board computers, expansion boards, radars and other embedded systems.
The abbreviation ISEE stands for Integration, Software & Electronics Engineering. Its products are based on IGEP Technology, the ISEE Generic Enhanced Platform using Texas Instruments OMAP processors.
Some of their products, including IGEPv2 and IGEP COM MODULE, are open hardware, licensed under a Creative Commons Attribution-Non Commercial-Share-alike 3.0 unported license.
Products
ISEE products have been used in various industrial and commercial projects, such as automotive and transportation applications, medical devices, vending machines, security and protection, robotics and radar applications, under the commercial brand name of IGEP Technology.
All IGEP products include pre-installed Linux-based distributions with functional software and other resources such as development tools, IDEs, schematics, mechanical drawings, hardware manuals and software manuals. Tutorials, articles, FAQs and a public Git repository are also available from the IGEP Community, a collaborative user support community.
IGEP processor boards
IGEPv2
IGEPv2 was released in 2009.
It consists of a low-power, fanless, industrial single-board computer (SBC) based on the Texas Instruments DM3730 ARM Cortex-A8 processor in a 65mm x 95mm board. IGEPv2 was the first open hardware IGEP Processor Board from ISEE and may be used to evaluate IGEP Technology, develop full-fledged product prototypes or can be completely customized by the user thanks to the freely available schematics.
IGEPv5
IGEPv5 was presented in September 2013.
It is based on the Texas Instruments OMAP5 SoC, which uses a dual-core ARM Cortex-A15 CPU. IGEPv5 allows additional connectivity via its on-board connectors and can be used to develop applications with advanced multimedia requirements.
IGEP COM PROTON
IGEP COM PROTON was released in 2010.
It provides the same processor and performance as IGEPv2 but without most of its on-board connectors, so it results in a smaller industrial form factor. It has four 70-pin connectors for extended connectivity and measures 35mm x 51.2mm.
IGEP COM MODULE
IGEP COM MODULE was released in 2010.
It measures 18mm x 68.5mm, making it the smallest computer-on-module (COM) released by ISEE, and features the Texas Instruments DM3730. It provides USB OTG, Wi-Fi and Bluetooth on-board and two 70-pin connectors for extended connectivity.
IGEP COM AQUILA
IGEP COM AQUILA was released in 2013.
It is based on Texas Instruments AM3354 Cortex-A8 CPU and is the first IGEP Processor Board with standard SO-DIMM size format.
IGEP Expansion Boards
IGEPv2 EXPANSION
IGEPv2 EXPANSION was released in 2009.
It adds connectivity to IGEPv2 Processor Board (RS232, VGA Output, CAN interface and GSM/GPRS Modem).
IGEP PARIS
IGEP PARIS was released in 2010.
It consists of an Expansion Board for IGEP COM MODULE and IGEP COM PROTON with basic functional connectivity (Ethernet, UARTs, TFT Video interface and USB).
IGEP BERLIN
IGEP BERLIN was released in 2010.
It is based on IGEP PARIS connectivity with extended connectivity (DVI video, stereo audio in/out, CAN interface, RS485 and other Digital and Analog I/O).
IGEP NEW YORK
IGEP NEW YORK is the simplest expansion board for IGEP COM MODULE and IGEP COM PROTON, with two 2.54 mm pitch DIP connectors.
IGEP Radar Technology
ISEE presented its radar technology in 2009. It consists of a 24 GHz band FMCW radar technology for IGEPv2 and IGEP COM MODULE, which carry out the digital processing and implement the communication with the user's system. Later, ISEE manufactured IGEP Radar Lambda and IGEP Radar Epsilon.
Pre-installed software
The preinstalled demo software on all ISEE products consists of:
IGEP X-Loader: a bootloader compatible with all IGEP processor boards
IGEP Kernel: a Linux Kernel maintained by ISEE and IGEP community members
IGEP Firmware Yocto: a Linux distribution with an X Window System and GNOME mobile-based applications, created with Yocto Platform Builder
Additional software and firmware releases can be downloaded prebuilt directly from the IGEP Community GIT repositories or compiled using OpenEmbedded software framework.
Development tools
ISEE offers free development tools and resources for developing under IGEP Technology:
IGEP SDK Yocto Toolchain: provides all necessary tools like a cross compiler, embedded libraries, etc. to compile program sources for IGEP devices.
IGEP SDK Virtual Machine: a virtual machine that includes all the developer tools for IGEP Technology, which are already installed and configured.
IGEP DSP Gstreamer Framework: based on TI DVSDK it provides all DSP essential packages and the "gstreamer DSP plugin".
See also
IGEPv2
Gumstix
PandaBoard
Mobile Robot Programming Toolkit
Raspberry Pi
Arduino
References
External links
ISEE website
Companies based in Catalonia
Motherboard companies
Robotics companies
Spanish brands |
24593246 | https://en.wikipedia.org/wiki/ATM%20Adaptation%20Layer%202 | ATM Adaptation Layer 2 | ATM Adaptation Layer 2 (AAL2) is an Asynchronous Transfer Mode (ATM) adaptation layer, used primarily in telecommunications; for example, it is used for the Iu interfaces in the Universal Mobile Telecommunications System, and is also used for transporting digital voice. The standard specifications related to AAL2 are ITU standards I.363.2 and I.366.1.
What is AAL2?
AAL2 is a variable-bitrate connection-oriented low-latency service originally intended to adapt voice for transmission over ATM. Like other ATM adaptation layers, AAL2 defines segmentation and reassembly of higher-layer packets into ATM cells, in this case packets of data containing voice and control information. AAL2 is further separated into two sub-layers that help with the mapping from upper-layer services to ATM cells. These are named Service Specific Convergence Sub-layer (SSCS) and Common Part Sub-layer (CPS).
The AAL2 protocol improves on other ATM adaptation layers by packing many small packets efficiently into one standard-sized ATM cell of 53 bytes. A one-byte packet thus no longer wastes 52 of the 53 bytes (an overhead of about 98%). Since each one-byte CPS packet occupies four octets (a three-octet header plus the payload) and 47 octets of the cell remain after the start field, a total of 11 one-byte CPS packets (plus 3/4 of a 12th CPS packet) could squeeze into a single cell. Of course, CPS packets can come in other sizes with other CIDs, too. When the transmission is ready, the CPS packets are all multiplexed together into a single cell and transported over standard ATM network infrastructure.
The transport networks for ATM are well-standardized synchronous networks based on optical fiber (SDH/SONET, i.e. STM-1/OC-3 or higher) or copper cable (PDH, i.e. E1/T1/JT1 or higher-bandwidth fixed lines), with built-in redundancy and OAM-related network features which Ethernet networks never had originally (in order to keep things simple) but which are sorely missed in standard metro Ethernet networks.
Efforts to improve Ethernet networks are in a sense trying to reinvent the wheel à la ATM. AAL2 is one example of a useful benefit of ATM, as a general standard for Layer 2 protocols. ATM/AAL2's efficient handling of small packets contrasts with Ethernet's minimum payload of 46 bytes, against the 1-byte minimum size of an AAL2 CPS packet.
AAL2 is the standard layer 2 protocol used in all Iu interfaces, i.e. the interfaces between UMTS base stations and UMTS Radio Network Controllers (RNCs) (Iu-B), inter-RNCs (Iu-R), UMTS RNCs and UMTS Serving GPRS Support Nodes (SGSNs) (Iu-PS), and UMTS RNCs and media gateways (MGWs) (Iu-CS).
AAL2 and the ATM Cell
The basic component of AAL2 is the CPS packet. A CPS packet is an unanchored unit of data that can straddle ATM cells and can start anywhere in the payload of the ATM cell other than the start field (STF). The STF is the first byte (byte 0) of the 48-byte ATM payload and gives the byte offset within the cell at which the first new CPS packet begins. The bytes from byte 1 up to that offset carry the straddled remainder of the previous ATM cell's final CPS packet. If there is no remainder from the previous cell, the offset is 0, and the first CPS packet starts immediately after the STF.
The format for the 1 byte STF at the beginning of the ATM cell is:
6 bits - offset field (OSF)
1 bit - sequence number (SN)
1 bit - parity (P)
OSF
The Offset Field carries the binary value of the offset, in octets, between the end of the P bit and the start of the CPCS-PDU payload. Values greater than 47 are not allowed.
SN
The Sequence Number numbers the stream of CPCS-PDUs.
P
The Parity bit is used to detect error in the OSF and SN fields.
If the ATM cell has fewer than 47 bytes, the remainder will be filled by padding.
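As an informal illustration of the start field, the C sketch below unpacks an STF octet into its OSF, SN and P fields and applies a parity check. The bit ordering (OSF in the six most significant bits) and the odd-parity rule are stated here as assumptions based on the usual reading of ITU-T I.363.2, not as a normative implementation.

```c
/* Hedged sketch: unpack an AAL2 start field (STF) octet.
 * Assumed layout: OSF in the six most-significant bits, then SN, then P,
 * with odd parity over the whole octet. */
#include <stdint.h>
#include <stdio.h>

struct aal2_stf {
    uint8_t osf;      /* offset to first new CPS packet, 0..47 */
    uint8_t sn;       /* 1-bit sequence number */
    uint8_t parity;   /* parity bit */
};

static int parse_stf(uint8_t octet, struct aal2_stf *out)
{
    out->osf    = (octet >> 2) & 0x3F;
    out->sn     = (octet >> 1) & 0x01;
    out->parity = octet & 0x01;

    if (out->osf > 47)                      /* offsets beyond the payload are invalid */
        return -1;

    int ones = __builtin_popcount(octet);   /* odd parity over the octet (assumed) */
    return (ones & 1) ? 0 : -1;
}

int main(void)
{
    struct aal2_stf stf;
    uint8_t example = 0x0B;                 /* hypothetical STF value for illustration */

    if (parse_stf(example, &stf) == 0)
        printf("OSF=%u SN=%u\n", stf.osf, stf.sn);
    else
        printf("invalid start field\n");
    return 0;
}
```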
AAL2u
One common adaptation of AAL2, AAL2u, doesn't use the STF field at all. In this case, one single CPS packet is aligned to the beginning of the cell. AAL2u is not used in standardized interfaces, but rather in proprietary equipment implementations where the multiplexing/demultiplexing, etc. that needs to be done for standard AAL2 either is too strenuous, is unsupported, or requires too much overhead (i.e. the 1 byte of STF) from the internal system's point of view. Most computer chips do not support AAL2, so stripping this layer away makes it easier to interwork between the ATM interface and the rest of the network.
AAL2 and the CPS Packet
A CPS packet has a 3-byte header and a payload of between one and 45 octets. The standard also defines a 64-octet mode, but this is not commonly used in real 3G networks.
The 3-byte CPS header has following fields:
8 bits - channel identifier (CID)
6 bits - length indicator (LI)
5 bits - user to user indication (UUI)
5 bits - header error control (HEC)
CID
The Channel Identifier identifies the user of the channel. The AAL2 channel is a bi-directional channel and the same channel identification value is used for both directions. The maximum number of multiplexed user channels is 248, as some channel values are reserved for other uses, such as peer-to-peer layer management.
LI
The Length Indicator gives the length (in octets) of the CPS information field, which can be between 1 and 45 (default) or, in the 64-octet mode, between 1 and 64. For a given CID all packets must use the same maximum length (either 45 or 64 octets). The LI is encoded as one less than the actual payload length, so 0 corresponds to the minimum length of 1 octet and 0x3F to 64 octets.
UUI
User to User Indication conveys specific information transparently between the users. For example, in SSSAR, UUI is used to indicate that this is the final CPS packet for the SSSAR PDU.
HEC
This is the Header Error Control field; it checks for errors in the CID, LI and UUI fields using a 5-bit cyclic redundancy check computed over the rest of the header.
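A hedged sketch of the header check is given below in C: it unpacks the three header octets into CID, LI, UUI and HEC, and recomputes the 5-bit CRC over the first 19 header bits. The generator polynomial x^5 + x^2 + 1 used here is the value commonly cited for the I.363.2 CPS HEC and should be treated as an assumption; the example CID/LI/UUI values are purely illustrative.

```c
/* Hedged sketch: unpack and verify a 3-octet AAL2 CPS packet header.
 * Field order (CID, LI, UUI, HEC) follows the list above; the CRC-5
 * generator x^5 + x^2 + 1 is an assumption, not quoted from the standard. */
#include <stdint.h>
#include <stdio.h>

struct cps_header {
    uint8_t cid;   /* channel identifier */
    uint8_t li;    /* length indicator (payload length minus one) */
    uint8_t uui;   /* user-to-user indication */
    uint8_t hec;   /* header error control */
};

/* CRC-5 with generator x^5 + x^2 + 1 over the 19 CID/LI/UUI bits. */
static uint8_t cps_hec(uint32_t bits19)
{
    uint32_t reg = bits19 << 5;            /* append five zero bits */
    for (int bit = 23; bit >= 5; bit--)    /* long division, MSB first */
        if (reg & (1u << bit))
            reg ^= 0x25u << (bit - 5);     /* 0x25 = 100101b = x^5 + x^2 + 1 */
    return (uint8_t)(reg & 0x1F);
}

static int parse_cps_header(const uint8_t h[3], struct cps_header *out)
{
    out->cid = h[0];
    out->li  = (h[1] >> 2) & 0x3F;
    out->uui = (uint8_t)(((h[1] & 0x03) << 3) | (h[2] >> 5));
    out->hec = h[2] & 0x1F;

    uint32_t bits19 = ((uint32_t)out->cid << 11) |
                      ((uint32_t)out->li  << 5)  |
                      out->uui;
    return (cps_hec(bits19) == out->hec) ? 0 : -1;
}

int main(void)
{
    /* Build a hypothetical header for CID 16, a 4-octet payload (LI=3), UUI 0. */
    uint8_t cid = 16, li = 3, uui = 0;
    uint32_t bits19 = ((uint32_t)cid << 11) | ((uint32_t)li << 5) | uui;
    uint8_t hec = cps_hec(bits19);
    uint8_t hdr[3] = { cid,
                       (uint8_t)((li << 2) | (uui >> 3)),
                       (uint8_t)(((uui & 0x07) << 5) | hec) };

    struct cps_header h;
    if (parse_cps_header(hdr, &h) == 0)
        printf("CID=%u length=%u UUI=%u\n", h.cid, h.li + 1, h.uui);
    return 0;
}
```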
References
External links
Broadband Forum - ATM Forum Technical Specifications
AAL2 ITU Standard
Network protocols
ITU-T recommendations |
630354 | https://en.wikipedia.org/wiki/Dungeon%20Siege | Dungeon Siege | Dungeon Siege is an action role-playing game developed by Gas Powered Games and published by Microsoft in April 2002, for Microsoft Windows, and the following year by Destineer for Mac OS X. Set in the pseudo-medieval kingdom of Ehb, the high fantasy game follows a young farmer and his companions as they journey to defeat an invading force. Initially only seeking to warn the nearby town of the invasion of a race of creatures named the Krug, the farmer and the companions that join him along the way are soon swept up in finding a way to defeat another race called the Seck, resurgent after being trapped for 300 years. Unlike other role-playing video games of the time, the world of Dungeon Siege does not have levels but is a single, continuous area without loading screens that the player journeys through, fighting hordes of enemies. Also, rather than setting character classes and manually controlling all of the characters in the group, the player controls their overall tactics and weapons and magic usage, which direct their character growth.
Dungeon Siege was the first title by Gas Powered Games, which was founded in May 1998 by Chris Taylor, then known for the 1997 real-time strategy game Total Annihilation. Joined by several of his coworkers from Cavedog Entertainment, Taylor wanted to create a different type of game, and after trying several concepts they decided to make an action role-playing game as their first title. Taylor also served as one of the designers for the game, joined by Jacob McMahon as the other lead designer and producer and Neal Hallford as the lead story and dialogue writer. The music was composed by Jeremy Soule, who had also worked on Total Annihilation. Gas Powered Games concentrated on making a role-playing game that was stripped of the typical genre elements they found slow or frustrating, to keep the player focused on the action. Development took over four years, though it was initially planned to take only two; completing the game within even four years required the team to work 12- to 14-hour days and weekends for most of the time.
The game was highly rated by critics upon release; it is listed by review aggregator Metacritic as the third-highest rated computer role-playing game of 2002. Critics praised the graphics and seamless world, as well as the fun and accessible gameplay, but were dismissive of the plot. Dungeon Siege sold over 1.7 million copies, and was nominated for the 2003 Computer Role-Playing Game of the Year award by the Academy of Interactive Arts & Sciences. Gas Powered Games emphasized creating and releasing tools for players to use in making mods for the game during development, which resulted in an active modding community after release. An expansion pack, Dungeon Siege: Legends of Aranna, was released in 2003, and a further series of games was developed in the franchise, consisting of Dungeon Siege II (2005) and its own expansion Dungeon Siege II: Broken World (2006), a spinoff PlayStation Portable game titled Dungeon Siege: Throne of Agony (2006), and a third main title, Dungeon Siege III (2011). A trilogy of movies, with the first loosely inspired by the plot of Dungeon Siege, were released as In the Name of the King: A Dungeon Siege Tale (2007, theaters), In the Name of the King 2: Two Worlds (2011, home video), and In the Name of the King 3: The Last Mission (2014, home video).
Gameplay
Dungeon Siege is an action role-playing game set in a pseudo-medieval high fantasy world, presented in 3D with a third-person virtual camera system under the control of the player, in which the player characters navigate the terrain and fight off hostile creatures. The player chooses the gender and customizes the appearance of the main character of the story prior to the start of the game and typically controls them. The main character is joined by up to seven other characters, which are controlled via artificial intelligence; the player may switch which character they are controlling at any time. The other characters move in relation to the controlled character according to the formation and level of aggression towards enemies chosen by the player. The additional characters can be removed from the group and re-recruited at any given time.
The game world is not broken up into levels, but is instead one large area not separated by loading screens. As the player journeys through the largely linear world, they encounter numerous monsters and enemies of varying types that attack whenever the party of player characters approach. The party defends themselves and attacks enemies using melee and ranged weapons, and nature and combat magic. The player does not select a character class for the characters, unlike other role-playing video games; instead, using weapons or magic of a particular type increases the character's skill with them over time. Whenever a player gains enough experience points from killing enemies and reaches a new level in that weapon type, they gain some number of points in their strength, dexterity, or intelligence statistics, which in turn relate to the number of health points and mana that they have, and damage that they do with weapons.
Characters can equip weapons, armor, rings, and amulets, which provide attack or defense points, or give bonuses to some other statistic. There are also usable items such as potions to restore a character's health or mana. Weapons, armor, and other items are found by killing enemies, breaking containers, or by purchase from vendors. Each character has an inventory, represented as a fixed grid, with each item depicted by a shape taking up spaces on the grid. One character type, the mule, cannot use weapons or magic, but has a much larger inventory.
Dungeon Siege has both a single-player and multiplayer mode. The single-player mode consists of a single story and world; players can either create a new character when starting the story or use one created in a prior playthrough. The cooperative multiplayer mode allows for up to eight players to play through either the single-player storyline or in the multiplayer map, which features a central town hub with increasingly difficult enemies as players move away from it. Multiplayer games can be set to different difficulty levels, allowing accommodation of higher-leveled characters. Additional maps can be created by players that can allow for competitive multiplayer instead. Multiplayer matches can be created and joined via local area networks, direct IP addresses, and, prior to its closure in 2006, through the Microsoft Zone matchmaking service.
Plot
Dungeon Siege is set in the Kingdom of Ehb, a varied region on the continent of Aranna containing deserts, swamps, forests, and mountains, created three centuries earlier at the dissolution of the Empire of Stars. At the beginning of the game, the player character's farming village is attacked by a race of creatures named the Krug. The main character, a farmer with no given background who is named by the player, journeys through the Krug forces to the town of Stonebridge. Upon breaking the siege of the town, and gaining their first companion, the player character is tasked by the town's garrison leader Gyorn with journeying to the town of Glacern and alerting the Ehb military forces, called the 10th Legion, of the incursion and defeating any forces they encounter along the way. After journeying through crypts, mines, and mountains, the player character reaches Glacern, where they are informed that the Krug invasion happened the same day that the Grand Mage Merik disappeared, and are charged with traveling over the mountains to Fortress Kroth to assist the legion there. In the mountains they find Merik, who informs them that the Krug invasion is part of a larger invasion by the Seck, who destroyed the Empire of Stars before being imprisoned underneath Castle Ehb, and who have escaped and taken the castle. Merik asks the player to help recover the Staff of Stars from the Goblins. Prior to its theft, the Staff had kept the Seck imprisoned in the Vault of Eternity.
The player fights through monsters and bandits in crystal caves, a forest, a swamp, and an underground Goblin fortress filled with mechanical war machines. After recovering the Staff from the Goblins, the player character meets a division of the 10th Legion and is pointed towards Fortress Kroth, which has been overrun with undead. After clearing the fortress and fighting monsters and a dragon in the Cliffs of Fire, they march on Castle Ehb. The player characters then storm the castle and fight through the Seck forces to rescue King Konreid. He informs the party that the Seck's leader, Gom, is seeking the magical weapons from the Empire of Stars stored in the Chamber of Stars, and that the characters must secure the weapons and then defeat the remaining Seck. The player character collects the weapons and fights through lava caves and the Vault of Eternity where the Seck had been imprisoned. The player character kills Gom, defeating the Seck and saving the kingdom.
Development
Gas Powered Games was founded in May 1998 by Chris Taylor, then known for the 1997 real-time strategy game Total Annihilation. Joined by several of his coworkers from Cavedog Entertainment, Taylor wanted to create a different type of game than before, and after trying several concepts the team decided to make an action role-playing game as their first title. As well as helping create the initial concept, Taylor served as one of the designers for the game, joined by Jacob McMahon as the game's other lead designer and producer and Neal Hallford as the game's lead story and dialogue writer. Hallford was brought onto the project after it had already started; Taylor had devised the start and end of the game but left the intervening details and background story to him. The game's music was composed by Jeremy Soule, who had also worked on Total Annihilation. The development team included around thirty people during development, with changes over time, and reached forty at the project's conclusion. The development of the game took over four years, though it was initially planned to take only two.
Dungeon Siege was inspired by prior role-playing games such as Baldur's Gate and the Ultima series, but primarily by Diablo, which Taylor admired for having an experience that "concentrated on action" that players could jump into without first researching the gameplay details and settings. Taylor wanted to expand that concept into a streamlined, immersive, and action-heavy role-playing game that removed common elements of the genre that he found boring, frustrating, or slow. Taylor also wanted to make the gameplay itself simpler than contemporary role-playing games, so as to appeal to a wider audience. To that end, he asked Hallford to craft a narrative that was also fast and streamlined; he had him write a detailed backstory for the game, which would not be presented to players but would inform and inspire the developers, leaving the in-game text restrained to keep players engaged with the action. Hallford described the process of writing for the game as similar to other game projects, besides a greater emphasis on brevity, though he has said that he was brought onto the project much later than he usually is, which meant that he had to create a story that worked as a background to the set pieces that had already been developed. The plot of the game was intended by Taylor to be subordinate to the gameplay; to that end, he was unconcerned that his overall story arc was considered, even by the development team, to be somewhat of a cliché, as he felt that journeying to defeat an "ultimate evil" was very motivating to players. Taylor and Hallford discussed producing a Dungeon Siege novel to explore Hallford's story, though it never came to fruition.
Taylor wanted to further improve on the Diablo role-playing game formula by removing the concept of picking a character class at all and omitting Diablo's long loading times. The development team also tried to make the game more streamlined by removing the need to backtrack to previously visited towns to sell items, by adding inventories to companion characters and pack mules. At one point in development, they planned to have a "helper" character who would pick up items dropped by enemies to let the player avoid doing so themselves. The developers also changed some elements that were standard in role-playing games that Taylor and the other developers found frustrating, such as letting players resell items to vendors for the same price that they were bought for instead of a steep discount, and "sipping" or only partially using potions instead of always using up the whole item.
Gas Powered Games included their game development tool, called the Siege Editor, as a tool for players to mod the game. Having seen the output of players creating mods for Total Annihilation, Taylor wanted to "take that to the extreme" and provide a full set of tools to foster a community of players enhancing and changing the game after release. He felt that the tools, which could allow players to make new game worlds, characters, and gameplay, would help support a large, long-term community of players around the game. Gas Powered Games hoped that providing what Daily Radar called "one of the most comprehensive level toolsets we've ever seen" would allow players to quickly and easily create small game regions, as well as allow more serious modders the ability to develop entire parallel games using the Dungeon Siege game engine. They also hoped that this modding community would be able to enhance and extend the multiplayer gameplay beyond what they could release. Taylor attributed his enthusiasm for releasing their own development tools for modding to both his enjoyment of seeing mods for Total Annihilation produced years after its release as well as the lack of negative consequences to John Carmack and id Software's historical tendency to release the entire source code to their games. Taylor later estimated that the company spent around twenty percent of their budget on developing the modding tools.
After the first year of development, Gas Powered Games found that they were not going to be able to finish the game within the planned two years; not only was the seamless world without loading screens harder to create than they had thought, but, according to lead developer Bartosz Kijanka, they had been overambitious in choosing how many innovative features they could put into the game's custom engine, such as the wide range through which the virtual camera system could zoom in and out. Other features supported and later dropped included allowing up to ten characters at once—and therefore maintaining up to ten areas of the single-player world—instead of the final maximum of eight, and a weather system that included wind blowing projectiles off course. According to Kijanka, the developers also spent a lot of time changing technologies mid-development, such as building a custom animation editor before moving to a licensed one, and starting with the OpenGL graphics library only to switch to Direct3D. As a result, the team was required to work 12- to 14-hour days and weekends for most of the development time in order to complete the game within four years. In a 2011 interview, Taylor stated that in retrospect the final cost in development time of the seamless world may have been too high, and also that the team tried to make too large of a game for their budget; he believed that a game with closer to 35 hours of playtime instead of 70 would have been a better and more polished experience given their constraints.
By 2000, Gas Powered Games had begun to search for a publisher for the game. Taylor claims that multiple publishers were interested in the game, but he was convinced by Ed Fries to partner with the newly established Microsoft PC publishing group. Although Microsoft's publishing wing was established in part to publish games for the newly announced Xbox console, Gas Powered Games and Microsoft did not strongly consider bringing the game to the console. Taylor believes that this was due to the size of the game itself, as well as the small market for role-playing games on consoles at the time. Dungeon Siege was initially planned for release in the third quarter of 2001, before being delayed to the following year, and Gas Powered Games spent the added time tuning and polishing the game and expanding the game's items and multiplayer features. Dungeon Siege was released for Microsoft Windows on April 5, 2002, by Microsoft, and for Mac OS X on May 2, 2003, by Destineer.
Reception
Dungeon Siege was commercially successful, selling over 1.7 million copies. According to the NPD Group, preorders of the game in the month before its release made it the eighth-best selling computer game of March 2002, and upon release in the following month it rose to second-best selling, after The Sims: Vacation. It fell to seventh and then thirteenth place the following two months, and finished in 14th place for the year overall. By August 2006, it had sold 360,000 copies and earned $14.5 million in the United States alone. This led Edge to declare it the country's 44th-best selling computer game between January 2000 and August 2006. By September 2002, Dungeon Siege had also received a "Gold" certification from the Verband der Unterhaltungssoftware Deutschland (VUD), indicating sales of at least 100,000 units across Germany, Austria and Switzerland.
The game was highly rated by critics upon release; it is listed by review aggregator Metacritic as the third-highest rated computer role-playing game of 2002, behind Neverwinter Nights and The Elder Scrolls III: Morrowind, and the twenty-first-highest computer game overall for the year. The graphics were highly praised; Dan Adams of IGN called it "ridiculously pretty to watch", while reviewers for GameSpot and GamePro praised the environments as being detailed and varied. Robert Coffey of Computer Gaming World and Greg Vederman of PC Gamer similarly lauded the detailed environments, while Andy McNamara and Kristian Brogger of Game Informer and GameSpy's Peter Suciu called out the seamless world without loading screens as especially worthy of note. Suciu further praised how the freeform, seamless map was used to create areas that were not shaped like rectangular regions with a winding path filling up the space, as was typical with other role-playing games of the time. The IGN and GamePro reviewers commended the sound effects as excellent and for helping to create the atmosphere of the game, while the IGN and GameSpot reviewers also praised the "ambient orchestral score".
The gameplay was similarly lauded; the GamePro review claimed that "Dungeon Siege's gameplay is perhaps its biggest and most transparent improvement over previous titles in the genre." Several reviewers compared it favorably to Diablo II (2000), then one of the most popular computer action role-playing games, with Adams of IGN claiming that it was very similar to Diablo II with some changes and improvements, and Coffey of Computer Gaming World stating that the only thing keeping it from being directly rated as better was that the shift to a more tactical gameplay made it too different a game to compare directly. PC Gamer's Vederman, Computer Gaming World's Coffey, and the GameSpot reviewer praised the gameplay as being streamlined and accessible; they liked the tactical nature of controlling a party of adventurers who improved according to how they were used rather than directly controlling their actions and statistics. IGN's Adams, however, said that the gameplay could get monotonous, Vederman of PC Gamer felt that the gameplay combat choices were somewhat limited, and GameSpy's Suciu disliked the linearity of the single-player game. Adams further added that many of the tactical choices in the game were inconsequential, as all battles quickly devolved into brawls, and that the freeform system of leveling was essentially the same as four character classes as pursuing multiple tracks was ineffective.
The multiplayer content received mixed reviews: Adams praised the amount of additional content, while Suciu and the GameSpot reviewer noted that the multiplayer gameplay could easily become unbalanced between different players. The single-player plot was generally dismissed as inconsequential: the GamePro reviewer termed it "skeletal" and the Game Informer reviewers "lackluster", and the GameSpot reviewer called it "bland and forgettable" and concluded that players who wanted a "deeper role-playing game" would be disappointed. Overall, Vederman of PC Gamer called Dungeon Siege "one of the best, most enjoyable games of the year" and GamePro's reviewer claimed it "walks all over its competition with almost effortless grace", while Adams of IGN concluded that it was entertaining but had "untapped potential".
Legacy
After it was showcased at E3 2000, Dungeon Siege proceeded to win the Best RPG award from Game Revolution and Most Immersive Role-playing Game award from GameSpot. After release, it was nominated for the Academy of Interactive Arts & Sciences's 2003 Annual Interactive Achievement Awards in the Computer Role-Playing Game of the Year and Innovation in Computer Gaming categories, though it did not win either, losing to Neverwinter Nights and Battlefield 1942, respectively. The game was also a nominee for PC Gamer US's "2002 Best Roleplaying Game" award, but lost again to Neverwinter Nights. It did win the Best PC Game Graphics award from IGN.
Gas Powered Games' release of the Siege Editor did spark the rise of a modding community around the game; even before release several modding groups announced intentions to use the engine to create large-scale mods remaking games from the Ultima series of role playing games. After the game's release, numerous mods were created, including several "total conversion" mods that made wholly new games and stories such as "The Lands of Hyperborea" and "Elemental". Gas Powered Games released one mod of their own in July 2002 titled "Yesterhaven", created by six designers over six weeks, which provided a short multiplayer storyline for low-level characters wherein they defended a town from three thematic plagues of monsters. It was followed up by Legends of Aranna, a full expansion pack developed by Mad Doc Software and released on November 11, 2003 for Windows and Mac OS X by Microsoft. The expansion pack added little new gameplay besides new terrains, creatures, and items, but featured an entirely separate story from the original game. In Legends, the player controls another unnamed farmer; after the Staff of Stars is stolen by a creature called the Shadowjumper, they set off to retrieve it. After fighting their way through monsters in icy hills, jungles, and islands, the player arrives at the mystical Great Clock, a giant artifact which controls Aranna's seasons. There they defeat the Shadowjumper and retrieve the Staff of Stars. It received generally less positive reviews than the original, with critics praising the amount of content but criticizing the lack of changes to the base gameplay.
Several other games have been released in the Dungeon Siege series, beginning with Dungeon Siege II (2005). That game received its own expansion pack, Dungeon Siege II: Broken World (2006), and was followed by a spinoff PlayStation Portable game titled Dungeon Siege: Throne of Agony (2006) and a third main title, Dungeon Siege III (2011). A movie directed by Uwe Boll and inspired by the original game, In the Name of the King: A Dungeon Siege Tale, was released in theaters in 2007; it has been described as being "loosely based" on the game, and was a commercial and critical failure. It was followed by the home video sequels In the Name of the King 2: Two Worlds (2011) and In the Name of the King 3: The Last Mission (2014).
References
External links
2002 video games
Action role-playing video games
Cooperative video games
Fantasy video games
MacOS games
Multiplayer and single-player video games
Square Enix franchises
Video games adapted into films
Video games scored by Jeremy Soule
Video games developed in the United States
Video games featuring protagonists of selectable gender
Video games with expansion packs
Windows games |
41316662 | https://en.wikipedia.org/wiki/Heinrich%20Scholz | Heinrich Scholz | Heinrich Scholz (; December 17, 1884 – December 30, 1956) was a German logician, philosopher, and Protestant theologian. He was a peer of Alan Turing who mentioned Scholz when writing with regard to the reception of "On Computable Numbers, with an Application to the Entscheidungsproblem": "I have had two letters asking for reprints, one from Braithwaite at King's and one from a proffessor [sic] in Germany... They seemed very much interested in the paper. [...] I was disappointed by its reception here."
Scholz had an extraordinary career (he was considered an outstanding scientist of national importance) but was not considered a brilliant logician on the level of, for example, Gottlob Frege or Rudolf Carnap. He provided a suitable academic environment in which his students could thrive. He founded the Institute of Mathematical Logic and Fundamental Research at the University of Münster in 1936, which, it can be said, has enabled the study of logic at the highest international level from after World War II up to the present day.
Personal life
Heinrich Scholz's father, Herman Scholz, was a Protestant minister at St. Mary's Church, Berlin. From 1903 to 1907 Scholz studied philosophy and theology at Erlangen University and Berlin University, obtaining a licentiate in theology (Lic. theol.). He was a student of Adolf von Harnack, and in philosophy he studied with Alois Riehl and Friedrich Paulsen. On 28 July 1910, Scholz habilitated in the subjects of religious philosophy and systematic theology in Berlin, where he worked as a lecturer and was later promoted to full professor. In 1913, at Erlangen, he took his examination for the degree of Dr. phil. under Richard Falckenberg, studying the work of Schleiermacher and Goethe with a thesis titled Schleiermacher und Goethe. Ein Beitrag zur Geschichte des deutschen Geistes. In 1917 he was appointed to the chair of philosophy of religion at Breslau, succeeding Rudolf Otto, to teach religious philosophy and systematic theology. In the same year he married his fiancée, Elisabeth Orth. Owing to eight years of continuous gastric trouble, he was exempted from military service, and in 1919 he underwent an operation in which, he believed, a large part of his stomach was removed. That year he accepted a call to Kiel University as chair of philosophy. It was while at Kiel, in 1924, that Scholz's first wife, Elisabeth Orth, died.
From October 1928 onwards, he taught at Münster University, first as Professor of Philosophy. In 1938 this was changed to Professor of Philosophy of Mathematics and Science, and in 1943 to the Chair of Mathematical Logic and Fundamental Questions in Mathematics. He worked as head of the Institute for Mathematical Logic and Fundamental Research at Münster until he retired in 1952 as professor emeritus.
Scholz was survived by his second wife, Erna. Scholz's grave is located in the Park Cemetery Eichhof near Kiel.
Work
By his own account, in 1921 he came across Principia Mathematica by Bertrand Russell and Alfred North Whitehead by accident and began studying logic, which he had abandoned in his youth to study theology, leading later to a study of mathematics and theoretical physics through an undergraduate degree at Kiel. Another factor in his change of focus was the mathematician Otto Toeplitz, whose broad research interests, including Hilbert spaces and spectral theory, encouraged Scholz's interest in mathematics. Indeed, Segal suggests that Scholz's love of structure was also an important factor in his move into mathematical logic, describing it thus:
Scholz's feeling for structure was no small thing. He apparently felt that when having guests for dinner: (1) no more than six people should be invited; (2) there must be an excellent menu; (3) a discussion theme must be planned; and (4) the guests should have prepared themselves as much as possible beforehand on this theme.
In 1925, he became a colleague of Karl Barth at Münster University, where Barth taught Protestant theology. Under the influence of conversations with Scholz, Barth later wrote, in 1930/31, his book on Anselm of Canterbury's proof of God, Fides quaerens intellectum.
In the 1930s, he maintained contact with Alan Turing, who later – in a letter home dated 22 February 1937 – wrote with regard to the reception of his article "On Computable Numbers, with an Application to the Entscheidungsproblem": "I have had two letters asking for reprints, one from Braithwaite at King's and one from a proffessor [sic] in Germany... They seemed very much interested in the paper. [...] I was disappointed by its reception here."
At the University of Münster, his research into mathematical logic and foundational questions provided many of the critical insights that contributed to the foundations of theoretical computer science. From the time he arrived at Münster, Scholz worked towards building a school of mathematical logic. By 1935, his research team at Münster was being referred to as the Münster school of mathematical logic; Scholz himself named 1936 as the year the Münster school was born. His professorship was rededicated in 1936 to a lectureship for mathematical logic and fundamental research, and in 1943 to the first chair in Germany for mathematical logic and fundamental research. The Münster chair is still regarded as one of the best in Germany.
Scholz was considered a Platonist and, in that sense, regarded mathematical logic as the foundation of knowledge. In 1936 he was awarded a grant from the DFG for the production of three volumes of research in logic and for the editing of Gottlob Frege's papers. He is considered the discoverer of the literary estate of Gottlob Frege.
Gisbert Hasenjaeger, whose thesis had been supervised by Scholz, produced the book Grundzüge der mathematischen Logik in 1961; it was jointly authored with Scholz despite being published five years after Scholz's death.
Work during World War II
Initially Scholz was pleased with the rise of Nazi power in Germany. He was a conservative nationalist – "We felt like Prussians right to the bone," he said of himself – and was described by his friend Heinrich Behnke as a "small-minded Prussian nationalist" with whom discussing political issues was difficult. In the beginning the Nazi laws helped establish Münster as an important centre for logic, as the staff at Göttingen and Berlin universities were being purged.
On 14 March 1940, Scholz sent a letter to the education department of occupied Poland, seeking the release of Jan Salamucha, who had been professor of theology at Kraków University and had been sent to Sachsenhausen concentration camp in 1940. In October 1940, Scholz received a reply from the education minister which stated that he had "injured the national honour" and was forbidden to send further petitions. Salamucha was later released, but was killed by the Nazis in 1944. Scholz nevertheless persisted, first helping Alfred Tarski, who had fled Poland to the United States, to correspond with his wife, who remained in Poland, and later helping the Polish logician Jan Łukasiewicz, with whom he had been corresponding since 1938, to leave Poland with his wife and hide in Germany.
Although Scholz recognized the true nature of the Nazis and abhorred them from mid-1942 onwards, he remained on good terms with Nazi academics like Ludwig Bieberbach. During the period of National Socialism, Max Steck, who championed the "German Mathematics" movement that rejected the formalist approach to mathematics, deeply opposed Hilbert's approach, which he described as Jewish – the worst possible insult in Germany at the time. Steck acknowledged the "per se outstanding achievement of formalism" ("an sich betrachtet einmaligen Leistung des Formalismus"), but criticized the "missing epistemological component" ("Jede eigentliche Erkenntnistheorie fehlt im Formalismus"), and on the only page of his main work where he connects formalism and Jews he mentions that "Jews were the actual trendsetters of formalism" ("die eigentlichen Schrittmacher des Formalismus"). In response, Bieberbach asked Scholz to write an article for Deutsche Mathematik answering Steck's attacks on mathematical formalism – a surprising request, since Bieberbach had led the Nazi mathematicians' attack on "Jewish" mathematics. Taking care that Hilbert's programme was not branded "Jewish", Scholz wrote "What does formalised study of the foundations of mathematics aim at?" Scholz had received funding from Bieberbach as early as 1937, which prompted an annoyed response from Steck in his 1942 book.
There were three other articles by Heinrich Scholz in the journal German Mathematics: Ein neuer Vollständigkeitsbeweis für das reduzierte Fregesche Axiomensystem des Aussagenkalküls (1936), a review of the Nazi philosopher Wolfgang Cramer's book Das Problem der reinen Anschauung (1938) and a review of Andreas Speiser's Ein Parmenideskommentar (1938).
World's first computer science seminar
In the late 2000s, Achim Clausing was tasked with going through the remaining estate of Scholz at Münster University. While working through the archive papers in the basement of the Institute of Computer Science, Clausing discovered two original prints of Alan Turing's most important publications, which had been missing since 1945. The first was the 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem", which Scholz had requested from Turing, accompanied by a postcard from Turing; on the basis of Turing's work and this exchange, Scholz held what Clausing has described as "the world's first seminar on computer science." The second, an article from the journal Mind dating from 1950, is a treatise on the development of artificial intelligence, to which Turing added a handwritten comment: "This is probably my last copy." At Sotheby's, comparable prints by Turing, without such a dedication, have recently sold for 180,000 euros.
Bibliography
Christianity and Science in Schleiermacher's Doctrine of the Faith, 1909
Belief and unbelief in world history. A response to Augustine's De Civitate Dei, 1911
Idealism as a carrier of the war thought. Friedrich Andreas Perthes, Gotha, 1915. Perthes' writings on the World War, Volume 3
Politics and morality. An investigation of the moral character of modern realpolitik. Friedrich Andreas Perthes, Gotha, 1915. Perthes' writings on the World War, Volume 6
The war and Christianity. Friedrich Andreas Perthes, Gotha, 1915. Perthes' writings on the World War, Volume 7
The essence of the German spirit. Grote'sche Verlagsbuchhandlung, Berlin, 1917.
The idea of immortality as a philosophical problem, 1920
Philosophy of religion. Reuther & Reichard, Berlin, 1921, 2nd revised edition, 1922.
On The 'Decline' of the West. A dispute with Oswald Spengler . Reuther & Reichard, Berlin; 2nd revised and supplemented edition, 1921.
The religious philosophy of the as-if. A review of Kant and the idealistic positivism, 1921
The importance of Hegel's philosophy for philosophers of the present day. Reuther & Reichard, 1921 Berlin
The legacy of Kant's doctrine of space and time, 1924
The Basics of Greek Mathematics, 1928 with Helmut Hasse
Eros and Caritas. The platonic love and the love within the meaning of Christianity, 1929
History of logic. Junker and Dünnhaupt, Berlin 1931 (1959 under outline of the history of logic Alber, Freiburg im Breisgau)
Goethe's attitude to the question of immortality, 1934
The new logistic logic and science teaching. In: Research and progress, Volume 11, 1935.
The classical and modern logic. In: Sheets for German Philosophy, Volume 10, 1937, pp. 254–281.
Fragments of a Platonist. Staufen, Cologne undated (1940).
Metaphysics as a rigorous science. Staufen, Cologne 1941.
A new form of basic research. Research and progress No. 35/36 born 1941, pp. 382ff.
Logic, grammar, metaphysics. In: Archives of philosophy, Volume 1, 1947, pp. 39–80.
Encounter with Nietzsche. Furrow, Tübingen 1948.
Principles of mathematical logic. Berlin, Göttingen 1961, with Gisbert Hasenjaeger
Mathesis universalis. Essays on the philosophy as rigorous science, Edited by Hans Hermes, Friedrich Kambartel and Joachim Ritter, University Press, Darmstadt 1961.
Leibniz and mathematical foundational research. In: Annual report of the German Mathematical Society (Deutsche Mathematiker-Vereinigung), 1943
Papers
Fichte und Napoleon. In: Preußische Jahrbücher (in German), Volume 152, 1913, pp. 1–12.
The religious philosophy of the as-if. In: Annals of Philosophy, Vol. 1, 1919, pp. 27–113
The religious philosophy of the as-if. In: Annals of Philosophy, Vol. 3, No. 1, 1923, pp. 1–73
Why did the Greeks not construct the irrational numbers? In: Kant Studies, Vol. 3, 1928, pp. 35–72
Augustine and Descartes. In: Sheets for German Philosophy, Volume 5, 1932, Issue 4, pp. 405–423.
The idea of God in mathematics. In: Sheets for German Philosophy, Volume 8, 1934/35, pp. 318–338.
Logic, grammar, metaphysics. In: Archives for Law and Social Philosophy, Volume 36, 1943/44, pp. 393–433
References
Sources
External works
John J. O'Connor, Edmund F. Robertson: Heinrich Scholz (logician). In: MacTutor History of Mathematics archive (English)
Publications by and on Heinrich Scholz in the catalog of the German National Library
1884 births
1956 deaths
German logicians
German philosophers
20th-century German Protestant theologians
Mathematical logicians
20th-century German mathematicians
German male non-fiction writers
German cryptographers |
1999186 | https://en.wikipedia.org/wiki/XGameStation%20series | XGameStation series | The XGameStation is a series of embedded systems, primarily designed as a dedicated home video game console, created by Andre LaMothe and sold by his company Nurve Networks LLC. Originally designed to teach electronics and video game development to programmers, newer models concentrate more on logic design, multi-core programming, game programming, and embedded system design and programming with popular microcontrollers.
Prototype Versions
The XGameStation was originally conceived as a handheld system called the nanoGear, based around the 68HC12 microprocessor, a modern derivative of the 6809. The system would also contain modern derivatives of the 6502 and Z-80 microprocessors, for retro coders and hackers, and to make emulation of classic computer and video game systems easier. After several iterations, the plan changed to use an ARM microprocessor and an FPGA on which a custom-designed GPU was implemented. After finishing this project, however, it was decided that the resulting system was cost-prohibitive and much too advanced for beginners. Instead, the plan was changed again, finally resulting in the XGS Micro Edition, based on the SX52 microcontroller. The ARM and FPGA-based system was renamed the XGS Mega Edition after the release of the Micro Edition, and though planned to be sold, it was never released.
XGS Micro Edition (ME)
The XGS Micro Edition is a pre-built video game console based around the SX52 microcontroller, which is a high-speed PIC microcontroller running at 80 MHz for a total of 80 MIPS. The color television video signal is generated in software on the microcontroller. Sound is generated by a ROHM BU8763 chip. For input, the system has a single PS/2 connector for keyboard or mouse input, as well as two DB-9 ports for connecting Atari-compatible joysticks. Programming is done in assembly language or in a custom-written XGS Basic, either on a PC and then transferred to the console or on the system itself. Add-on packs are available for creating one's own expansion card and for electronics experimentation. The Micro Edition contains the XGameStation unit, "Designing Your Own Video Game Console" (a detailed book in PDF format teaching the basics of electronics), a power supply, A/V cables, a joystick, a COM cable, and a few extras such as a PDF version of one of Andre LaMothe's previous books, "Tricks of the Windows Game Programming Gurus".
Video signal generated by software
The most remarkable aspect of the SX52 Processor is its ability to create a color video signal using only software, and still have the power to simultaneously run the software that uses this video display in order to create an elementary video game or game demo. These latter programs may or may not evolve into a real (playable) game, as often the memory of the SX52 processor is too restricted to support them. Some people also write non-game video demos to show off the video display possibilities of the system.
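The general shape of such a software-generated signal can be illustrated with a short, self-contained C sketch. This is not actual XGS or SX52 code (the console's timing-critical loops are written in SX assembly and emit levels on its output pins with cycle-exact timing); the sampling rate, signal levels, and durations below are assumptions chosen purely for illustration. The sketch builds one monochrome NTSC-style scanline – sync pulse, back porch, active picture area, front porch – as an array of DAC levels, which is essentially the sequence an 80 MIPS software loop has to produce at precisely timed intervals.

```c
/*
 * Conceptual sketch (not actual XGS/SX52 firmware): builds one scanline of a
 * monochrome NTSC-style composite signal as an array of 8-bit DAC levels.
 * The sampling rate, level values, and durations are illustrative only.
 */
#include <stdio.h>

#define SAMPLE_RATE_HZ 13500000.0   /* assumed sampling rate for the sketch */
#define LINE_US   63.5              /* one NTSC scanline lasts about 63.5 microseconds */
#define SYNC_US    4.7              /* horizontal sync pulse */
#define BPORCH_US  5.9              /* back porch (blanking before active video) */
#define FPORCH_US  1.5              /* front porch at the end of the line */

#define LEVEL_SYNC   0              /* sync tip: lowest level */
#define LEVEL_BLANK 60              /* blanking/black level */
#define LEVEL_WHITE 200             /* peak white */

int main(void) {
    const int total  = (int)(LINE_US   * SAMPLE_RATE_HZ / 1e6);
    const int sync   = (int)(SYNC_US   * SAMPLE_RATE_HZ / 1e6);
    const int bporch = (int)(BPORCH_US * SAMPLE_RATE_HZ / 1e6);
    const int fporch = (int)(FPORCH_US * SAMPLE_RATE_HZ / 1e6);
    unsigned char line[2048];

    for (int i = 0; i < total && i < 2048; i++) {
        if (i < sync) {
            line[i] = LEVEL_SYNC;                       /* horizontal sync pulse */
        } else if (i < sync + bporch) {
            line[i] = LEVEL_BLANK;                      /* back porch */
        } else if (i < total - fporch) {
            /* active video: a simple pattern of eight vertical grey bars */
            int bar = ((i - sync - bporch) * 8) / (total - sync - bporch - fporch);
            line[i] = (unsigned char)(LEVEL_BLANK + bar * (LEVEL_WHITE - LEVEL_BLANK) / 7);
        } else {
            line[i] = LEVEL_BLANK;                      /* front porch */
        }
    }

    for (int i = 0; i < total && i < 2048; i++)
        printf("%d\n", line[i]);                        /* dump the levels, one per sample */
    return 0;
}
```

Compiled with any standard C compiler, the program simply prints the roughly 850 level values of one scanline; on the real hardware, an equivalent sequence of levels must be produced live, line after line, roughly 15,700 times per second.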
Obsolete status of the SX52
The SX52 has been made obsolete by Parallax, the company that now packages the SX series of microcontroller dies made by Ubicom, because they did not have a package with 52 pins. However, according to the people at XGameStation, there are enough SX52 chips available for all their future needs.
On July 31, 2009, Parallax announced that the whole line of SX microcontrollers will be discontinued.
XGS Pico Edition (PE)
The Pico Edition is a simplified version of the Micro Edition in a build-it-yourself kit. The Pico Edition is based around the SX28 microcontroller, which, like the SX52, is a high-speed PIC microcontroller running at 80 MHz for a total of 80 MIPS, though it has less RAM and Flash capacity. Like the Micro Edition, the color television video signal is generated in software on the microcontroller. However, unlike the Micro Edition, the audio signal is also generated directly by the microcontroller rather than by an external chip. For input, the system simply reads pushbuttons connected to its input pins. Programming is done in assembly language or in a custom-written XGS Basic, on a PC, and then transferred to the console. The Pico comes in several different kit forms: the 1.0 kit, which comes with a breadboard, a CD with assembly instructions and selected chapters of the same ebook as the Micro Edition along with the same extras, the SX28, and the discrete components of the system; the 2.0 kit, which consists of the 1.0 kit and a PCB (also available separately as an add-on); and the Game Console Starter Kit, which includes the 2.0 kit, a hard copy of "The Black Art of Video Game Console Design", a soldering iron, and solder.
XGS AVR 8-Bit and XGS PIC 16-Bit Development Systems
Released on December 26, 2008, the XGS AVR 8-Bit and XGS PIC 16-Bit development systems are embedded system development kits, meant to be very competitive entry- to mid-range development kits for their respective microcontrollers. The systems were designed together and so share much of the same design other than the main processor. The video signal is generated in software like the XGS Micro and Pico Editions; however, there is color helper hardware to generate the colorburst part of the video signal. The audio signal is also generated directly by the microcontroller. For input, like the XGS Micro, two DB-9 ports and a PS/2 port are supplied. However, instead of being compatible with Atari joysticks, the DB-9 ports are compatible with Nintendo gamepads (though directly connecting an NES or SNES controller would require a pin adapter). Unlike the prior XGS and Hydra systems, programming is primarily in C/C++, utilizing system-specific libraries, though assembly programming and a custom-written XGS Basic are also available. The XGS Basic code runs on both systems without modification. Unlike the XGS Micro Edition, code cannot be edited on the system itself – a PC is required. The XGS AVR 8-Bit processor is an Atmel megaAVR ATmega644P with 64K Flash and 4K SRAM running at over 28 MIPS. The XGS PIC 16-Bit processor is a PIC24 with 256K Flash and 16K SRAM running at over 40 MIPS.
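As a rough illustration of this style of bare-metal C programming, the sketch below uses only the standard avr-gcc/avr-libc toolchain – not the XGS-specific libraries, whose APIs are not documented here – to generate a square-wave audio tone on an ATmega644P-class chip by toggling an output pin, the same basic approach of driving a signal directly from an I/O pin that these boards use for sound. The chosen pin, clock speed, and tone frequency are assumptions for the example only.

```c
/*
 * Minimal avr-libc sketch (assumed setup, not XGS library code): produce an
 * approximately 440 Hz square-wave tone by toggling one output pin.
 * Assumes an ATmega644P-class AVR clocked at 20 MHz (compile with
 * avr-gcc -mmcu=atmega644p) and a small speaker or attenuator wired to the
 * lowest pin of PORTB.
 */
#define F_CPU 20000000UL        /* assumed CPU clock; must be defined before util/delay.h */

#include <avr/io.h>
#include <util/delay.h>

int main(void) {
    DDRB |= (1 << 0);           /* configure PB0 as an output */

    for (;;) {
        PORTB ^= (1 << 0);      /* flip the pin: one half-period of the tone */
        _delay_ms(1.136);       /* about 1/(2 x 440 Hz), giving a ~440 Hz square wave */
    }
}
```

Real XGS programs push the same idea much further, relying on carefully counted instruction timings rather than simple delay loops so that video, audio, and game logic can share the single core.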
Hydra System
In 2006 Andre LaMothe launched his new HYDRA Game Development Kit, a much more powerful system than the XGS Micro Edition. Unlike the other systems by Nurve, the Hydra does not carry the XGS branding. The Hydra uses the multi-core Parallax Propeller microcontroller, which has an architecture resembling the Cell microprocessor used in the PlayStation 3. The Propeller runs at 80 MHz and uses eight processor cores, called COGs, to reach a performance of 160 MIPS. It also has much more memory than the Micro Edition's SX52: 32K RAM and a 32K ROM which contains a bitmap font for the video display generator (the Propeller can generate a high-quality VGA or PAL/NTSC color picture using software and some special support logic built into each CPU core), tables for mathematical functions, and an interpreter for the multithreaded SPIN language. Each CPU core also has its own 2K RAM (512 32-bit words) of dedicated memory. For input, the system has two PS/2 ports for a mouse and keyboard (which are sold with the system) and two NES-compatible game controller ports (one game controller is included). The system also has a mini USB interface for programming the system, an RJ-11 Ethernet port, and a 128K serial Flash EEPROM for storage. There are also add-ons, such as a 512K external RAM card.
References
External links
XGameStation official site
XGameStation official community board
Video game culture
Fangames |
48402 | https://en.wikipedia.org/wiki/Internet%20backbone | Internet backbone | The Internet backbone may be defined by the principal data routes between large, strategically interconnected computer networks and core routers of the Internet. These data routes are hosted by commercial, government, academic and other high-capacity network centers, as well as the Internet exchange points and network access points, that exchange Internet traffic between the countries, continents, and across the oceans. Internet service providers, often Tier 1 networks, participate in Internet backbone traffic by privately negotiated interconnection agreements, primarily governed by the principle of settlement-free peering.
The Internet, and consequently its backbone networks, do not rely on central control or coordinating facilities, nor do they implement any global network policies. The resilience of the Internet results from its principal architectural features, most notably the idea of placing as few network state and control functions as possible in the network elements and instead relying on the endpoints of communication to handle most of the processing to ensure data integrity, reliability, and authentication. In addition, the high degree of redundancy of today's network links and sophisticated real-time routing protocols provide alternate paths of communications for load balancing and congestion avoidance.
The largest providers, known as Tier 1 networks, have such comprehensive networks that they do not purchase transit agreements from other providers.
Infrastructure
The Internet backbone consists of many networks owned by numerous companies. Optical fiber trunk lines consist of many fiber cables bundled together to increase capacity, or bandwidth. Fiber-optic communication remains the medium of choice for Internet backbone providers for several reasons. Fiber-optics allow for fast data speeds and large bandwidth, they suffer relatively little attenuation, allowing them to cover long distances with few repeaters, and they are also immune to crosstalk and other forms of electromagnetic interference which plague electrical transmission. The real-time routing protocols and redundancy built into the backbone are also able to reroute traffic in case of a failure. The data rates of backbone lines have increased over time. In 1998, all of the United States' backbone networks had utilized the slowest data rate of 45 Mbit/s. However, technological improvements allowed 41 percent of backbones to have data rates of 2,488 Mbit/s or faster by the mid-2000s.
History
The first packet-switched computer networks, the NPL network and the ARPANET, were interconnected in 1973 via University College London. The ARPANET used a backbone of routers called Interface Message Processors. Other packet-switched computer networks proliferated starting in the 1970s, eventually adopting TCP/IP protocols or being replaced by newer networks. The National Science Foundation created the National Science Foundation Network (NSFNET) in 1986 by funding six networking sites using interconnecting links, with peering to the ARPANET. In 1987, this new network was upgraded to 1.5 Mbit/s (T1) links for thirteen sites. These sites included regional networks that in turn connected over 170 other networks. IBM, MCI and Merit upgraded the backbone to 45 Mbit/s (T3) bandwidth in 1991. The combination of the ARPANET and NSFNET became known as the Internet. Within a few years, the dominance of the NSFNET backbone led to the decommissioning of the redundant ARPANET infrastructure in 1990.
In the early days of the Internet, backbone providers exchanged their traffic at government-sponsored network access points (NAPs), until the government privatized the Internet, and transferred the NAPs to commercial providers.
Modern backbone
Because of the overlap and synergy between long-distance telephone networks and backbone networks, the largest long-distance voice carriers such as AT&T Inc., MCI (acquired in 2006 by Verizon), Sprint, and CenturyLink also own some of the largest Internet backbone networks. These backbone providers sell their services to Internet service providers (ISPs).
Each ISP has its own contingency network and is equipped with an outsourced backup. These networks are intertwined and crisscrossed to create a redundant network. Many companies operate their own backbones, which are all interconnected at various Internet exchange points (IXPs) around the world. In order for data to navigate this web, it is necessary to have backbone routers – routers powerful enough to handle traffic on the Internet backbone – that are capable of directing data to other routers in order to send it to its final destination. Without them, information would be lost.
Economy of the backbone
Peering agreements
Backbone providers of roughly equivalent market share regularly create agreements called peering agreements, which allow the use of another's network to hand off traffic where it is ultimately delivered. Usually they do not charge each other for this, as the companies get revenue from their customers regardless.
Regulation
Antitrust authorities have acted to ensure that no provider grows large enough to dominate the backbone market. In the United States, the Federal Communications Commission has decided not to monitor the competitive aspects of the Internet backbone interconnection relationships as long as the market continues to function well.
Transit agreements
Backbone providers of unequal market share usually create agreements called transit agreements, which usually contain some type of monetary arrangement.
Regional backbone
Egypt
During the Egyptian revolution of 2011, the government of Egypt shut down the four major ISPs on January 27, 2011 at approximately 5:20 p.m. EST. Evidently the networks had not been physically interrupted, as the Internet transit traffic through Egypt was unaffected. Instead, the government shut down the Border Gateway Protocol (BGP) sessions announcing local routes. BGP is responsible for routing traffic between ISPs.
Only one of Egypt's ISPs was allowed to continue operations. The ISP Noor Group provided connectivity only to Egypt's stock exchange as well as some government ministries. ISPs in other countries started to offer free dial-up Internet access to users in Egypt.
Europe
Europe is a major contributor to the growth of the international backbone as well as a contributor to the growth of Internet bandwidth. In 2003, Europe was credited with 82 percent of the world's international cross-border bandwidth. The company Level 3 Communications began to launch a line of dedicated Internet access and virtual private network services in 2011, giving large companies direct access to the Level 3 backbone. Connecting companies directly to the backbone provides enterprises with faster Internet service, which meets a large market demand.
Caucasus
Certain countries around the Caucasus have very simple backbone networks; for example, in 2011, a 70-year-old woman in Georgia pierced a fiber backbone line with a shovel and left the neighboring country of Armenia without Internet access for 12 hours. The country has since made major developments to its fiber backbone infrastructure, but progress is slow due to lack of government funding.
Japan
Japan's Internet backbone needs to be very efficient due to high demand for the Internet and technology in general. Japan had over 86 million Internet users in 2009, and was projected to climb to nearly 91 million Internet users by 2015. Since Japan has a demand for fiber to the home, Japan is looking into tapping a fiber-optic backbone line of Nippon Telegraph and Telephone (NTT), a domestic backbone carrier, in order to deliver this service at cheaper prices.
China
In some instances, the companies that own certain sections of the Internet backbone's physical infrastructure depend on competition in order to keep the Internet market profitable. This can be seen most prominently in China. Since China Telecom and China Unicom have acted as the sole Internet service providers to China for some time, smaller companies cannot compete with them in negotiating the interconnection settlement prices that keep the Internet market profitable in China. This imposition of discriminatory pricing by the large companies then results in market inefficiencies and stagnation, and ultimately affects the efficiency of the Internet backbone networks that service the nation.
See also
Default-free zone
Internet2
Mbone
Network service provider
Root name server
Packet switching
Trunking
Further reading
Greenstein, Shane. 2020. "The Basic Economics of Internet Infrastructure." Journal of Economic Perspectives, 34 (2): 192-214. DOI: 10.1257/jep.34.2.192
References
External links
About Level 3
Russ Haynal's ISP Page
US Internet backbone maps
Automatically generated backbone map of the Internet
IPv6 Backbone Network Topology
Backbone, Internet
IT infrastructure |
1628271 | https://en.wikipedia.org/wiki/Tracks%20%28Bruce%20Springsteen%20album%29 | Tracks (Bruce Springsteen album) | Tracks is a four-disc box set by American singer-songwriter Bruce Springsteen, released in 1998 containing 66 songs. This box set mostly consists of never-before-released songs recorded during the sessions for his many albums, but also includes a number of single B-sides, as well as demos and alternate versions of already-released material.
History
The project began in February 1998, when Springsteen and his chief recording engineer, Toby Scott, began going through his massive collection of unreleased songs. Springsteen had been known as a very prolific songwriter (Darkness on the Edge of Town, The River, and Born in the U.S.A. each had more than 50 songs written for them), and by 1998 the number of unreleased songs had grown to more than 350 – three-quarters of all his recorded material. Scott had begun work on a computerized database of Springsteen's archives in 1985 in order to allow Springsteen to find specific songs that hadn't been released yet, and it was understood by Scott and others since the 1980s that Springsteen would eventually compile a selection of these unreleased recordings into a box set.
Springsteen, engineer Chuck Plotkin and manager Jon Landau had considered releasing these songs in their current rough-mix form, going as far as mastering them in a test-run to get an idea of what they would sound like, but following a listening session in June, it was decided to mix them properly from the original multi-track tapes. Around this time, Sony Music was alerted that the project was in-progress, and they created their own timetable for promotion and release with a September 10 deadline for the final submission of the master tapes. According to Scott, they hadn't even completed a final list of songs by June, and the three-month schedule placed a lot of pressure on them to locate, remix and master the final track list in time to meet Sony's deadline.
Springsteen, Scott, and three sets of engineers spent the next three months going through Springsteen's massive song library, locating the multi-track reels with Scott's database, mixing songs and picking out the best of the unreleased material. Sometimes, a song would need extra parts added on, such as in the case of "Thundercrack", a song dating back to 1973. Springsteen called in then-former bandmates Danny Federici and Clarence Clemons, along with original drummer Vini Lopez to fill in the missing pieces.
Though Springsteen already had a personal recording studio on his Jersey estate, the set-up was awkward, using modest equipment in unconventional ways just to meet contemporary standards of professional recording. By the end of June, Scott was upgrading the facility into a far more sophisticated operation in order to meet the September deadline. Additionally, they began scheduling mix sessions across three different studios as the engineers' availability would be limited due to work with other clients. Springsteen's longtime engineer Bob Clearmountain would work remotely from Los Angeles, where he was already booked on other projects through August. Ed Thacker would mix at Springsteen's newly upgraded facility from July through September. Thom Panunzio would also mix at Springsteen's estate from the end of July through all of August, but he would work out of a mobile studio rented from the Record Plant as Thacker would be working out of Springsteen's studio at the same time. The material would also be divided up chronologically among the three engineers. For example, Panunzio would remix the earliest material as he had worked on many of those recordings when they were first made. Clearmountain mixed all of the material from the 1990s and Thacker mixed all of the material in-between as well as some of the earlier recordings. Up until early August, Scott would be coordinating the entire project by phone from his home in Whitefish, Montana as he was expecting the birth of his first child, and engineer Greg Goldman would join the project as Scott's eyes and ears on the ground in Jersey where much of the team was located.
On a typical day in August, when all three engineers were working simultaneously, Panunzio and Thacker would generally set up a mix during the evening, returning the next morning to finish. Springsteen would call in during the afternoon and show up between 4 p.m. and 7 p.m. to listen to mixes and make any suggested changes. Plotkin would be present, adding his input, and he would also have his mixes played back in real time on a receiving unit set up in Springsteen's living room at the compound.
One of the most common changes between the new mixes and the vintage rough mixes was the difference in reverb. According to Thacker, Springsteen's vocals were originally "very big and sitting in the track surrounded by reverb," but Springsteen was now requesting him to "make the vocals drier than they might have been 20 years ago [and to] make them a little more personal."
By the end of June, they had a preliminary list of 128 songs selected for the box set, and the following July, they cut it down to 100 songs (six CDs worth). However, Springsteen eventually decided to cut the number to 66, leaving a total of four CDs. By then, Scott’s wife had delivered and he was back on-site in Jersey. As the final mixes were approved, Scott loaded them on to a digital workstation and assembled them in sequence as they would appear in the final boxed set. This meant setting spacings, doing crossfades and other editing tasks that are often saved for the mastering stage if more time had permitted. These final sequences were outputted on to a hard drive and sent to Gateway Mastering in Portland, Maine where they were mastered in a week. After three days of listening tests, Scott, Plotkin and Springsteen signed off on the project and submitted the finished masters on schedule.
Even though the original intention was to cover material from all aspects of Springsteen's career, acoustic demos from 1972 (such as "Arabian Nights", "Jazz Musician", "Ballad of the Self-Loading Pistol", and "Visitation at Fort Horn") were not available for release, due to ongoing court proceedings surrounding the songs (concerning the attempted release of these songs by a different, European based label in 1993). Songs from the "Electric Nebraska" sessions, as well as songs from an unreleased 1994 album, were also missing.
Song backgrounds
"Roulette", "Be True", "Pink Cadillac", "Johnny Bye Bye", "Shut Out the Light", "Stand on It", "Janey Don't You Lose Heart", "Lucky Man", "Two for the Road", and "Part Man, Part Monkey" were all B-sides to singles. However, the take of "Stand on It" included on Tracks is a previously unknown alternate version that features an extra verse and a fully finished ending (as opposed to the fade-out on the original B-side).
"Bishop Danced", "Santa Ana", "Seaside Bar Song", "Zero and Blind Terry", "Thundercrack", "Rendezvous", "So Young and in Love", "Man at the Top", "The Wish", "When the Lights Go Out", and "Brothers Under the Bridge" were all known from previous live performances.
"Hearts of Stone" was previously recorded by Southside Johnny.
A rerecorded "This Hard Land" was released on Greatest Hits, although the original recording did not appear until Tracks.
"Linda Let Me Be the One", "Iceman", "Bring On the Night", "Don't Look Back", "Restless Nights", "Where the Bands Are", "Loose Ends", "Living on the Edge of the World", "Take 'Em as They Come", "Ricky Wants a Man of Her Own", "I Wanna Be with You", "Mary Lou", "Cynthia", "My Love Will Not Let You Down", "Frankie", "T.V. Movie", and "Back in Your Arms" had previously been unofficially released on bootlegs, sometimes under different titles.
"A Good Man Is Hard to Find (Pittsburgh)" and "Lion's Den" were documented to exist, but had not been officially released.
"Give the Girl a Kiss", "Dollhouse", "Wages of Sin", "Car Wash", "Rockaway the Days", "Brothers Under the Bridges ('83)", "When You Need Me", "The Honeymooners", "Leavin' Train", "Seven Angels", "Gave It a Name", "Sad Eyes", "My Lover Man", "Over the Rise", "Loose Change", "Trouble in Paradise", "Goin' Cali", and "Happy" were all unknown songs before Tracks.
Reception, legacy, and follow-up
The box set was a minor success, peaking at #27 on the Billboard 200 album chart. It has been certified platinum in the U.S. and gold in Canada.
In a mostly positive review, AllMusic's music critic Stephen Thomas Erlewine opined that "If the end result isn't as revelatory as some may have expected (even the acoustic "Born in the U.S.A.," powerful as it is, doesn't sound any different than you may have imagined it), it's because Springsteen is, at heart, a solid craftsman, not a blinding visionary like Dylan. That's why Tracks is for the dedicated fan, where The Bootleg Series and The Basement Tapes are flat-out essential for rock fans."
Since its release, 44 of the songs on the set have been played live at least once, with "My Love Will Not Let You Down" receiving the most attention at over 100 plays.
The box set was later condensed into a single-disc album called 18 Tracks, with three songs ("Trouble River", "The Fever", and "The Promise") not on the 4-CD box set.
As a result of the project, Sony Music also created its own archive database, making extensive use of Scott’s cataloging efforts over the previous decade.
In an interview with Rolling Stone in September 2020, Springsteen suggested that a follow-up box-set of unreleased material is in the works: "There's a lot of really good music left. You just go back there. It’s not that hard. If I pull out something from 1980, or 1985 or 1970, it's amazing how you can slip into that voice. It's just sort of headspace. All of those voices remain available to me, if I want to go to them," he told interviewer Brian Hiatt. Drummer Max Weinberg has overdubbed drum parts for over 40 songs since 2017 for potential inclusion in the next box-set and commented that "any other artist would kill to get these songs." Springsteen also hinted that he may release some "lost albums" in full from his vault, including "Electric Nebraska".
Track listing
Personnel
Disc 1
Bruce Springsteen – guitar, lead vocals, piano
Steve Van Zandt – guitar, background vocals (tracks 11–17)
Garry Tallent – bass guitar, background vocals
Roy Bittan – piano (tracks 9, 11–17)
David Sancious – piano (tracks 6–8,10)
Max Weinberg – drums (tracks 9, 11–17)
Vini Lopez – drums, background vocals (tracks 6–8, 10) 1997
Danny Federici – organ, accordion
Clarence Clemons – saxophone, tambourine, vocals
Mario Cruz – tenor saxophone on "Hearts of Stone" 1997
Ed Manion – baritone saxophone on "Hearts of Stone" 1997
Richie Rosenberg – trombone on "Hearts of Stone" 1997
Mike Spengler – trumpet on "Hearts of Stone" 1997
Mark Pender – trumpet on "Hearts of Stone" 1997
Disc 2
Bruce Springsteen – guitar, lead vocals
Steve Van Zandt – guitar, background vocals
Garry Tallent – bass guitar
Roy Bittan – piano
Max Weinberg – drums
Danny Federici – organ, glockenspiel
Clarence Clemons – saxophone, tambourine
Soozie Tyrell – violin on "Shut Out the Light"
Disc 3
Bruce Springsteen – guitar, lead vocals, bass guitar (tracks 13, 15–18), keyboards (tracks 13, 15–18)
Steve Van Zandt – guitar, background vocals
Nils Lofgren – guitar, background vocals on "Janey, Don't You Lose Heart"
Garry Tallent – bass guitar (tracks 1–12, 14)
Roy Bittan – piano (tracks 1–12, 14)
Max Weinberg – drums (tracks 1–12, 14)
Gary Mallaber – drums (tracks 13, 15–18)
Danny Federici – organ, glockenspiel
Clarence Clemons – saxophone, tambourine
Mario Cruz – tenor saxophone on "Lion's Den" 1997
Ed Manion – baritone saxophone on "Lion's Den" 1997
Richie Rosenberg – trombone on "Lion's Den" 1997
Mike Spengler – trumpet on "Lion's Den" 1997
Mark Pender – trumpet on "Lion's Den" 1997
Disc 4
Bruce Springsteen – guitar, lead vocals, bass guitar, keyboards, percussion
Randy Jackson – bass guitar ("Leavin' Train", "Seven Angels", "Sad Eyes", "Trouble in Paradise")
Garry Tallent – bass guitar ("Back in Your Arms", "Brothers Under the Bridge")
Roy Bittan – piano ("Seven Angels", "Trouble in Paradise", "Back in Your Arms")
Jeff Porcaro – drums ("Leavin' Train", "Sad Eyes", "My Lover Man", "When the Lights Go Out", "Trouble in Paradise")
Omar Hakim – drums on "Part Man, Part Monkey"
Shawn Pelton – drums ("Seven Angels", "Happy")
Max Weinberg – drums on "Back in Your Arms"
Gary Mallaber – drums on "Brothers Under the Bridge"
Michael Fisher – percussion on "Sad Eyes"
David Sancious – keyboards ("Sad Eyes", "Part Man, Part Monkey")
Danny Federici – organ ("Back in Your Arms", "Brothers Under the Bridge")
Ian McLagan – organ on "Leavin' Train"
Clarence Clemons – saxophone on "Back in Your Arms"
Certifications
References
Bruce Springsteen compilation albums
1998 compilation albums
Columbia Records compilation albums |