id | url | title | text |
---|---|---|---|
14725499 | https://en.wikipedia.org/wiki/NetBarrier%20X4 | NetBarrier X4 | NetBarrier X4 is a discontinued version of Intego's NetBarrier firewall for OS X. It features a cookie cleaner, a browser history cleaner, internet traffic statistics, an Internet bandwidth meter, cookie filters, and information hiding. The firewall is customizable and ships with preconfigured options: "No Restrictions", "No Network", "Client (Local Server)", "Server Only", "Client Only", and "Customized". The "Customized" option allows flexible firewall configuration.
NetBarrier's features have been integrated into VirusBarrier X6, and it is no longer sold as a standalone product.
References
http://www.intego.com/netbarrier/
http://www.macworld.com/article/50970/2006/05/netbarrierx4.html
Firewall software |
3930374 | https://en.wikipedia.org/wiki/Political%20positions%20of%20Noam%20Chomsky | Political positions of Noam Chomsky | Noam Chomsky (born December 7, 1928) is an intellectual, political activist, and critic of the foreign policy of the United States and other governments. Chomsky describes himself as an anarcho-syndicalist and libertarian socialist, and is considered a key intellectual figure within the left wing of United States politics.
Political views
Chomsky is often described as one of the best-known figures of the American Left, although he does not agree with the use of the term. He has described himself as a "fellow traveller" to the anarchist tradition, and refers to himself as a libertarian socialist, a political philosophy he summarizes as challenging all forms of authority and attempting to eliminate those that are unjustified, with the burden of proof resting solely on those who attempt to exert power. He identifies in particular with the labor-oriented anarcho-syndicalist current of anarchism, and is a member of the Industrial Workers of the World. He has also expressed support for the libertarian socialist vision of participatory economics, and is a member of the Interim Committee for the International Organization for a Participatory Society.
He believes that libertarian socialist values exemplify the rational and morally consistent extension of original unreconstructed classical liberal and radical humanist ideas in an industrial context.
Chomsky has further defined himself as having held Zionist beliefs, although he notes that his definition of Zionism would be considered by most as anti-Zionism these days, the result of what he perceives to have been a shift (since the 1940s) in the meaning of Zionism (Chomsky Reader).
Chomsky is considered "one of the most influential left-wing critics of American foreign policy" by the Dictionary of Modern American Philosophers.
Views on ideologies and values
Freedom of speech
Chomsky has taken strong stands against censorship and for freedom of speech, even for views he personally condemns. He has stated that "with regard to freedom of speech there are basically two positions: you defend it vigorously for views you hate, or you reject it and prefer Stalinist/fascist standards".
Views on globalization
Chomsky made early efforts to critically analyze globalization. He summarized the process with the phrase "old wine, new bottles", maintaining that the motive of the elites is the same as always: they seek to isolate the general population from important decision-making processes, the difference being that the centers of power are now transnational corporations and supranational banks. Chomsky argues that transnational corporate power is "developing its own governing institutions" reflective of their global reach.
According to Chomsky, a primary ploy has been the co-opting of the global economic institutions established at the end of World War II, the International Monetary Fund (IMF) and the World Bank, which have increasingly adhered to the "Washington Consensus", requiring developing countries to adhere to limits on spending and make structural adjustments that often involve cutbacks in social and welfare programs. IMF aid and loans are normally contingent upon such reforms. Chomsky claims that the construction of global institutions and agreements such as the World Trade Organization, the General Agreement on Tariffs and Trade (GATT), the North American Free Trade Agreement (NAFTA), and the Multilateral Agreement on Investment constitute new ways of securing élite privileges while undermining democracy.
Chomsky believes that these austere and neoliberal measures ensure that poorer countries merely fulfill a service role by providing cheap labor, raw materials and investment opportunities for the developed world. This means that corporations can threaten to relocate to poorer countries, and Chomsky sees this as a powerful weapon to keep workers in richer countries in line.
Chomsky takes issue with the terms used in discourse on globalization, beginning with the term "globalization" itself, which he maintains refers to a corporate-sponsored economic integration rather than being a general term for things becoming international. He dislikes the term anti-globalization being used to describe what he regards as a movement for globalization of social and environmental justice. Chomsky understands what is popularly called "free trade" as a "mixture of liberalization and protection designed by the principal architects of policy in the service of their interests, which happen to be whatever they are in any particular period."
In his writings, Chomsky has drawn attention to globalization resistance movements. He described Zapatista defiance of NAFTA in his essay "The Zapatista Uprising." He also criticized the Multilateral Agreement on Investment, and reported on the activist efforts that led to its defeat. Chomsky was an important voice among the critics who provided the theoretical backbone for the disparate groups that united for the demonstrations against the World Trade Organization in Seattle in November 1999.
Views on socialism and communism
Chomsky is deeply critical of what he calls the "corporate state capitalism" that he believes is practiced by the United States and other Western states. He supports many of Mikhail Bakunin's anarchist (or libertarian socialist) ideas. In essays such as The Soviet Union Versus Socialism, Chomsky has cited Bakunin's warnings about the totalitarian state as predictions of the brutal Soviet police state that would come. He has also defined Soviet communism as another form of "state capitalism", particularly because any socialism worthy of the name requires authentic democratic control of production and resources as well as public ownership. He has said that, contrary to what many in America claim, the collapse of the Soviet Union should be regarded as "a small victory for socialism", not capitalism. Before the collapse of the Soviet Union, Chomsky explicitly condemned Soviet imperialism; for example, in 1986, during a question-and-answer session following a lecture he gave at Universidad Centroamericana in Nicaragua, when challenged by an audience member about how he could "talk about North American imperialism and Russian imperialism in the same breath", Chomsky responded: "One of the truths about the world is that there are two superpowers, one a huge power which happens to have its boot on your neck, another, a smaller power which happens to have its boot on other people's necks. And I think that anyone in the Third World would be making a grave error if they succumbed to illusions about these matters."
Chomsky was also impressed with socialism as practiced in Vietnam. In a speech given in Hanoi on April 13, 1970, and broadcast by Radio Hanoi the next day, Chomsky spoke of his "admiration for the people of Vietnam who have been able to defend themselves against the ferocious attack, and at the same time take great strides forward toward the socialist society." Chomsky praised the North Vietnamese for their efforts in building material prosperity, social justice, and cultural progress. He also went on to discuss and support the political writing of Lê Duẩn.
In his 1973 book For Reasons of State, Chomsky argues that instead of a capitalist system in which people are "wage slaves" or an authoritarian system in which decisions are made by a centralized committee, a society could function with no paid labor. He argues that a nation's populace should be free to pursue jobs of their choosing. People will be free to do as they like, and the work they voluntarily choose will be both "rewarding in itself" and "socially useful." Society would be run under a system of peaceful anarchism, with no state or other authoritarian institutions. Work that was fundamentally distasteful to all, if any existed, would be distributed equally among everyone.
Chomsky was always critical of the Soviet Union. In a 2016 interview he said that Mao's revolution in China was responsible for "a huge death toll, in the tens of millions" during the Great Leap Forward; but he also gave the revolution credit for saving 100 million lives between 1949 and 1979 through rural health and development programs. In the 1960s, Chomsky noted what he considered to be grassroots elements within both Chinese and Vietnamese communism. In December 1967, during a forum in New York, Chomsky responded to criticisms of the Chinese revolution as follows, "I don't feel that they deserve a blanket condemnation at all. There are many things to object to in any society. But take China, modern China; one also finds many things that are really quite admirable." Chomsky continued: "There are even better examples than China. But I do think that China is an important example of a new society in which very interesting positive things happened at the local level, in which a good deal of the collectivization and communization was really based on mass participation and took place after a level of understanding had been reached in the peasantry that led to this next step." He said of Vietnam: "Although there appears to be a high degree of democratic participation at the village and regional levels, still major planning is highly centralized in the hands of the state authorities."
In the context of remarks on the topic of peak oil in April 2005, Chomsky stated: "China is probably the most polluted country in the world – you can't see. It's kind of a totalitarian state, so they kind of force it on people, but the level of pollution is awful, and India too. Still in per-capita terms, the U.S. is way above anybody else, and we don't do anything about it."
Views on Marxism
Chomsky is critical of Marxism's dogmatic strains, and of the idea of Marxism itself, but still appreciates Marx's contributions to political thought. Unlike some anarchists, Chomsky does not consider Bolshevism "Marxism in practice", but he does recognize that Marx was a complicated figure who had conflicting ideas; while he acknowledges the latent authoritarianism in Marx, he also points to the libertarian strains that developed into the council communism of Rosa Luxemburg and Anton Pannekoek. His commitment to libertarian socialism, however, has led him to characterize himself as an anarchist with radical Marxist leanings.
Views on anarchism
In practice Chomsky has tended to emphasize the philosophical tendency of anarchism to criticize all forms of illegitimate authority. He has been reticent about theorizing an anarchist society in detail, although he has outlined its likely value systems and institutional framework in broad terms. According to Chomsky, the variety of anarchism which he favors is:
On the question of the government of political and economic institutions, Chomsky has consistently emphasized the importance of grassroots democratic forms. Accordingly, current Anglo-American institutions of representative democracy "would be criticized by an anarchist of this school on two grounds. First of all because there is a monopoly of power centralized in the state, and secondly – and critically – because the representative democracy is limited to the political sphere and in no serious way encroaches on the economic sphere."
Chomsky believes anarchism is a direct descendant of liberalism, further developing the ideals of personal liberty and minimal government of the Enlightenment. He views libertarian socialism thus as the logical conclusion of liberalism, extending its democratic ideals into the economy, making anarchism an inherently socialist philosophy.
Views on American libertarianism
Noam Chomsky has described libertarianism, as it is understood in the United States, as "extreme advocation of total tyranny" and "the extreme opposite of what's been called libertarian in every other part of the world since the Enlightenment."
Views on the welfare state
Chomsky is scathing in his opposition to the view that anarchism is inconsistent with support for "welfare state" measures, stating in part that
Views on antisemitism
In a 2004 interview with Jennifer Bleyer published in Ugly Planet, issue two and in Heeb Magazine, Chomsky engaged in the following exchange:
Views on the death penalty
Chomsky is a vocal advocate against the use of the death penalty. When asked his opinion on capital punishment in Secrets, Lies, and Democracy, he stated:
He has commented on the use of the death penalty in Texas as well as other states. On August 26, 2011 he spoke out against the execution of Steven Woods in Texas.
Views on copyright and patents
Chomsky has criticized copyright laws as well as patent laws. On copyright he argued in a 2009 interview:
On patents, he stated:
Views on institutions
Criticism of United States government
Chomsky has been a consistent and outspoken critic of the United States government, and criticism of the foreign policy of the United States has formed the basis of much of his political writing. Chomsky gives reasons for directing his activist efforts at the state of which he is a citizen. He believes that his work can have more impact when directed at his own government, and that he holds a responsibility, as a member of a particular country, to work to stop that country from committing crimes. He often expresses this idea through comparison with other countries, holding that every country is willing to address the crimes of unfavored countries but is always unwilling to deal with its own. Speaking in Nicaragua in 1986, Chomsky was asked "We feel that through what you say and write you are our friend but at the same time you talk about North American imperialism and Russian imperialism in the same breath. I ask you how you can use the same arguments as reactionaries?" to which Chomsky responded,
He also contends that the United States, as the world's remaining superpower, acts in the same offensive ways as all superpowers. One of the key things superpowers do, Chomsky argues, is try to organize the world according to the interests of their establishment, using military and economic means. Chomsky has repeatedly emphasized that the overall framework of US foreign policy can be explained by the domestic dominance of US business interests and a drive to secure the state capitalist system. Those interests set the political agenda and the economic goals that aim primarily at US economic dominance.
His conclusion is that a consistent part of United States foreign policy is based on stemming the "threat of a good example." This 'threat' refers to the possibility that a country could successfully develop outside the US-managed global system, thus presenting a model for other countries, including countries in which the United States has strong economic interests. This, Chomsky says, has prompted the United States to repeatedly intervene to quell "independent development, regardless of ideology", even in regions of the world where it has little economic or security interest. In one of his works, What Uncle Sam Really Wants, Chomsky argues that this explanation accounts in part for the United States' interventions in Guatemala, Laos, Nicaragua, and Grenada, countries that pose no military threat to the US and whose economic resources are not important to the US establishment.
Chomsky claims that the US government's Cold War policies were not primarily shaped by anti-Soviet paranoia, but rather by the aim of preserving the United States' ideological and economic dominance in the world. In his book Deterring Democracy he argues that the conventional understanding of the Cold War as a confrontation between two superpowers is an "ideological construct." He insists that to truly understand the Cold War one must examine the underlying motives of the major powers, motives that can only be discovered by analyzing domestic politics, especially the goals of the domestic elites in each country:
Chomsky says the US economic system is primarily a state capitalist system, in which public funds are used to research and develop pioneering technology (the computer, the internet, radar, the jet plane etc.) largely in the form of defense spending, and once developed and mature these technologies are turned over to the corporate sector where civilian uses are developed for private control and profit.
Chomsky often expresses his admiration for the civil liberties enjoyed by US citizens. According to Chomsky, other Western democracies such as France and Canada are less liberal in their defense of controversial speech than the US. However, he does not credit the American government for these freedoms but rather the mass social movements in the United States that fought for them. The movements he most often credits are the abolitionist movement, the movements for workers' rights and trade union organization, and the fight for African-American civil rights. Chomsky is often sharply critical of other governments that suppress free speech, most controversially in the Faurisson affair, but also of the suppression of free speech in Turkey.
At the fifth annual Edward W. Said Memorial Lecture hosted by the Heyman Center for the Humanities in December 2009, Chomsky began his speech on "The Unipolar Moment and the Culture of Imperialism" by applauding Edward Said for calling attention to America's "culture of imperialism".
When the US establishment celebrated the 20th anniversary of the fall of the Berlin Wall in November 2009, Chomsky said the commemoration ignored a forgotten human rights violation that occurred only one week after that event. On November 16, 1989, the US-armed Atlacatl Battalion in El Salvador assassinated six leading Latin American Jesuit priests, he explained. He contrasted the US establishment's "self-congratulation" over the destruction of the Berlin Wall with the "resounding silence" surrounding the assassination of the priests, contending that the US sacrifices democratic principles for its own self-interest, and that without any self-criticism it tends to "focus a laser light on the crimes of enemies, but crucially we make sure to never look at ourselves."
Views on state terrorism
In response to US declarations of a "War on Terrorism" in 1981 and the redeclaration in 2001, Chomsky has argued that the major sources of international terrorism are the world's major powers, led by the United States government. He uses a definition of terrorism from a US army manual, which defines it as "the calculated use of violence or threat of violence to attain goals that are political, religious, or ideological in nature. This is done through intimidation, coercion, or instilling fear." In relation to the US invasion of Afghanistan he stated:
On the efficacy of terrorism:
On the condemnation of terrorism, Chomsky holds that terrorism (and violence and authority in general) is generally wrong and can be justified only in cases where it is clear that greater terrorism (or violence, or abuse of authority) is thereby avoided. In a 1967 debate on the legitimacy of political violence, Chomsky argued that the "terror" of the Vietnam National Liberation Front (Viet Cong) was not justified, but that terror could in theory be justified under certain circumstances:
Chomsky believes that acts he considers terrorism carried out by the US government do not pass this test. Condemnation of United States foreign policy is one of the main thrusts of his writings, which he explains by the fact that he lives in the United States and thus bears a responsibility for his country's actions.
Criticism of United States democracy
Chomsky maintains that a nation is only democratic to the degree that government policy reflects informed public opinion. He notes that the US does have formal democratic structures, but they are dysfunctional. He argues that presidential elections are funded by concentrations of private power and orchestrated by the public relations industry, focusing discussion primarily on the qualities and the image of a candidate rather than on issues. Chomsky makes reference to several studies of public opinion by pollsters such as Gallup and Zogby and by academic sources such as the Program on International Policy Attitudes at the University of Maryland (PIPA). Quoting polls taken near the 2004 election, Chomsky points out that only a small minority of voters said they voted because of the candidate's "agendas/ideas/platforms/goals." Furthermore, studies show that the majority of Americans have a stance on domestic issues such as guaranteed health care that is not represented by either major party. Chomsky has contrasted US elections with elections in countries such as Spain, Bolivia, and Brazil, where he claims people are far better informed on important issues.
Views on tactical voting in the United States
Since the 2000 election, with regards to third-party voting, Chomsky has maintained "if it's a swing state, keep the worst guys out. If it's another state, do what you feel like." When asked if he voted in the 2008 election, he responded:
Views on Trump and Biden in 2020
In an interview for The Intercept, Mehdi Hasan asked Chomsky, "What do you make of the 'Never Biden' movement?" Chomsky answered
Criticism of intellectual communities
Chomsky has at times been outspokenly critical of scholars and other public intellectuals; while his views sometimes place him at odds with individuals on particular points, he has also denounced intellectual sub-communities for what he sees as systemic failings. Chomsky sees two broad problems with academic intellectuals generally:
They largely function as a distinct class, and so distinguish themselves by using language inaccessible to people outside the academy, with more or less deliberately exclusionary effects. In Chomsky's view there is little reason to believe that academics are more inclined to engage in profound thought than other members of society and that the designation "intellectual" obscures the truth of the intellectual division of labour: "These are funny words actually, I mean being an 'intellectual' has almost nothing to do with working with your mind; these are two different things. My suspicion is that plenty of people in the crafts, auto mechanics and so on, probably do as much or more intellectual work as people in the universities. There are plenty of areas in academia where what's called 'scholarly' work is just clerical work, and I don't think clerical work's more challenging than fixing an automobile engine – in fact, I think the opposite. ... So if by 'intellectual' you mean people who are using their minds, then it's all over society" (Understanding Power, p. 96).
The corollary of this argument is that the privileges enjoyed by intellectuals make them more ideologised and obedient than the rest of society: "If by 'intellectual' you mean people who are a special class who are in the business of imposing thoughts, and framing ideas for people in power, and telling everyone what they should believe, and so on, well, yeah, that's different. These people are called 'intellectuals' – but they're really more a kind of secular priesthood, whose task is to uphold the doctrinal truths of the society. And the population should be anti-intellectual in that respect, I think that's a healthy reaction" (ibid, p. 96; this statement continues the previous quotation).
Chomsky is elsewhere asked what "theoretical" tools he feels can be produced to provide a strong intellectual basis for challenging hegemonic power, and he replies: "if there is a body of theory, well tested and verified, that applies to the conduct of foreign affairs or the resolution of domestic or international conflict, its existence has been kept a well-guarded secret", despite much "pseudo-scientific posturing." Chomsky's general preference is, therefore, to use plain language in speaking with a non-elite audience.
The American intellectual climate is the focus of "The Responsibility of Intellectuals", the essay that established Chomsky as one of the leading political philosophers of the second half of the 20th century. Chomsky's extensive criticisms of a new type of post-World War II intellectual he saw arising in the United States were the focus of his book American Power and the New Mandarins. There he described what he saw as the betrayal of the duties of an intellectual to challenge received opinion. The "new Mandarins", whom he saw as responsible in part for the Vietnam War, were apologists for the United States as an imperial power; he wrote that their ideology demonstrated
Chomsky has shown cynicism towards the credibility of postmodernism and poststructuralism. In particular he has criticised the Parisian intellectual community; the following disclaimer may be taken as indicative: "I wouldn't say this if I hadn't been explicitly asked for my opinion – and if asked to back it up, I'm going to respond that I don't think it merits the time to do so" (ibid). Chomsky's lack of interest arises from what he sees as a combination of difficult language and limited intellectual or "real world" value, especially in Parisian academe: "Sometimes it gets kind of comical, say in post-modern discourse. Especially around Paris, it has become a comic strip, I mean it's all gibberish ... they try to decode it and see what is the actual meaning behind it, things that you could explain to an eight-year old child. There's nothing there." (Chomsky on Anarchism, pg. 216). This is exacerbated, in his view, by the attention paid to academics by the French press: "in France if you're part of the intellectual elite and you cough, there's a front-page story in Le Monde. That's one of the reasons why French intellectual culture is so farcical – it's like Hollywood" (Understanding Power, pg. 96).
Chomsky made a 1971 appearance on Dutch television with Michel Foucault, the full text of which can be found in Foucault and his Interlocutors, Arnold Davidson (ed.), 1997. Of Foucault, Chomsky wrote that:
Mass media analysis
Another focus of Chomsky's political work has been an analysis of mainstream mass media (especially in the United States), which he accuses of maintaining constraints on dialogue so as to promote the interests of corporations and the government.
Edward S. Herman and Chomsky's book Manufacturing Consent: The Political Economy of the Mass Media explores this topic and presents their "propaganda model" hypothesis as a basis to understand the news media with several case studies to support it. According to their propaganda model, more democratic societies like the U.S. use subtle, non-violent means of control, unlike totalitarian societies, where physical force can readily be used to coerce the general population. Chomsky asserts that "propaganda is to a democracy what the bludgeon is to a totalitarian state" (Media Control).
The model attempts to explain such a systemic bias in terms of structural economic causes rather than a conspiracy of people. It argues the bias derives from five "filters" that all published news must pass through which combine to systematically distort news coverage.
The first filter, ownership, notes that most major media outlets are owned by large corporations.
The second, funding, notes that the outlets derive the majority of their funding from advertising, not readers. Thus, since they are profit-oriented businesses selling a product – readers and audiences – to other businesses (advertisers), the model would expect them to publish news which would reflect the desires and values of those businesses.
In addition, the news media are dependent on government institutions and major businesses with strong biases as sources (the third filter) for much of their information.
Flak, the fourth filter, refers to the various pressure groups that attack the media for supposed bias and similar failings when coverage steps out of line.
Norms, the fifth filter, refers to the common conceptions shared by those in the profession of journalism.
The model therefore attempts to describe how the media form a decentralized and non-conspiratorial but nonetheless very powerful propaganda system that is able to mobilize an "élite" consensus, frame public debate within "élite" perspectives and at the same time give the appearance of democratic consent.
Views on the Labour Party under Jeremy Corbyn
In May 2017, Chomsky endorsed Labour Party leader Jeremy Corbyn in the forthcoming UK general election, saying, "If I were a voter in Britain, I would vote for him [Jeremy Corbyn]." He claimed that Corbyn would be doing better in opinion polls were it not for the "bitter" hostility of the mainstream media: "If he had a fair treatment from the media – that would make a big difference."
In November 2019, along with other public figures, Chomsky signed a letter supporting Corbyn describing him as "a beacon of hope in the struggle against emergent far-right nationalism, xenophobia and racism in much of the democratic world" and endorsed him in the 2019 UK general election. In December 2019, along with 42 other leading cultural figures, he signed a letter endorsing the Labour Party under Corbyn's leadership in the 2019 general election. The letter stated that "Labour's election manifesto under Jeremy Corbyn's leadership offers a transformative plan that prioritises the needs of people and the planet over private profit and the vested interests of a few."
Views on MIT, military research and student protests
The Massachusetts Institute of Technology is a major research center for US military technology. As Chomsky says: "[MIT] was a Pentagon-based university. And I was at a military-funded lab." Having kept quiet about his anti-militarist views in the early years of his career at MIT, Chomsky became more vocal as the war in Vietnam intensified. For example, in 1968, he supported an attempt by MIT's students to give an army deserter sanctuary on campus. He also gave lectures on radical politics.
Throughout this period, MIT's various departments were researching helicopters, smart bombs and counterinsurgency techniques for the war in Vietnam. Jerome Wiesner, the military scientist who had initially employed Chomsky at MIT, also organised a group of researchers from MIT and elsewhere to devise a barrier of mines and cluster bombs between North and South Vietnam. By his own account, back in the 1950s, Wiesner had "helped get the United States ballistic missile program established in the face of strong opposition". He then brought nuclear missile research to MIT – work which, as Chomsky says "was developed right on the MIT campus." Until 1965, much of this work was supervised by a Vice-President at MIT, General James McCormack, who had earlier played a significant role supervising the creation of the US's nuclear arsenal. Meanwhile, Professor Wiesner played an important advisory role in organising the US's nuclear command and control systems.
Chomsky has rarely talked about the military research done at his own lab, but the situation has been made clear elsewhere. In 1971, the US Army's Office of the Chief of Research and Development published a list of what it called just a "few examples" of the "many RLE research contributions that have had military applications". This list included: "helical antennas", "microwave filters", "missile guidance", "atomic clocks" and "communication theory". Chomsky never produced anything that actually worked for the military. However, by 1963 he had become a "consultant" for the US Air Force's MITRE Corporation, which was using his linguistic theories to support "the design and development of U.S. Air Force-supplied command and control systems."
The MITRE documents that refer to this consultancy work are quite clear that they intended to use Chomsky's theories in order to establish natural languages such as English "as an operational language for command and control". According to one of his students, Barbara Partee, who also worked on this project, the military justification for this was: "that in the event of a nuclear war, the generals would be underground with some computers trying to manage things, and that it would probably be easier to teach computers to understand English than to teach the generals to program."
Chomsky's complicated attitude to MIT's military role was expressed in two letters published in the New York Review of Books in 1967. In the first, he wrote that he had "given a good bit of thought to ... resigning from MIT, which is, more than any other university, associated with the activities of the Department of 'Defense'." He also stated that MIT's "involvement in the [Vietnam] war effort is tragic and indefensible." Then, in the second letter, written to clarify the first, Chomsky said that "MIT as an institution has no involvement in the war effort. Individuals at MIT, as elsewhere, have direct involvement and that is what I had in mind."
By 1969, MIT's student activists were actively campaigning "to stop the war research" at MIT. Chomsky was sympathetic to the students but disagreed with their immediate aims. In opposition to the radical students, he argued that it was best to keep military research on campus rather than having it moved away. Against the students' campaign to close down all war-related research, he argued for restricting such research to "systems of a purely defensive and deterrent character". MIT's student president at this time, Michael Albert, has described this position as, in effect, "preserving war research with modest amendments".
During this period, MIT had six students sentenced to prison terms, prompting Chomsky to say that MIT's students suffered things that "should not have happened". Despite this, he has described MIT as "the freest and the most honest and has the best relations between faculty and students than at any other ... [with] quite a good record on civil liberties." Chomsky's differences with student activists at this time led to what he has called "considerable conflict". He described the rebellions across US campuses as "largely misguided" and he was unimpressed by the student uprising of May 1968 in Paris, saying, "I paid virtually no attention to what was going on in Paris as you can see from what I – rightly, I think." On the other hand, Chomsky was also very grateful to the students for raising the issue of the war in Vietnam.
Chomsky's particular interpretation of academic freedom led him to give support to some of MIT's more militaristic academics, even though he disagreed with their actions. For example, in 1969, when he heard that Walt Rostow, a major architect of the Vietnam war, wanted to return to work at the university, Chomsky threatened "to protest publicly" if Rostow was "denied a position at MIT." In 1989, Chomsky then gave support to a long-standing Pentagon adviser, John Deutch, by backing his candidacy for President of MIT. Deutch was an influential advocate of both chemical and nuclear weapons and later became head of the CIA. The New York Times quoted Chomsky as saying, "He has more honesty and integrity than anyone I've ever met in academic life, or any other life. ... If somebody's got to be running the C.I.A., I'm glad it's him."
Comments on world affairs and conflicts
Over the course of his career, Chomsky has produced commentaries on many world conflicts.
Views and activism opposing the Vietnam War
Chomsky became one of the most prominent opponents of the Vietnam War in February 1967, with the publication of his essay "The Responsibility of Intellectuals" in the New York Review of Books.
Allen J. Matusow, "The Vietnam War, the Liberals, and the Overthrow of LBJ" (1984):
A contemporary reaction came from New York University Professor of Philosophy Emeritus Raziel Abelson:
Chomsky also participated in "resistance" activities, which he described in subsequent essays and letters published in the New York Review of Books: withholding half of his income tax, taking part in the 1967 march on the Pentagon, and spending a night in jail. In the spring of 1972, Chomsky testified on the origins of the war before the Senate Foreign Relations Committee, chaired by J. William Fulbright.
Chomsky's view of the war differs from orthodox anti-war opinion, which regards the war as a tragic mistake. He argues that the war was a success from the US point of view. In Chomsky's view, the main aim of US policy was the destruction of the nationalist movements in the Vietnamese peasantry. In particular, he argues that US attacks were not a defense of South Vietnam against the North but began directly in the early 1960s (with covert US intervention from the 1950s) and at that time were mostly aimed at South Vietnam. He agrees with the view of orthodox historians that the US government was concerned about the possibility of a "domino effect" in South-East Asia. At this point Chomsky diverges from orthodox opinion: he holds that the US government was not so much concerned with the spread of state Communism and authoritarianism as with nationalist movements that would not be sufficiently subservient to US economic interests.
Views on Cambodia and doubts about the genocide
The Khmer Rouge (KR) communists took power in Cambodia in April 1975 and expelled all citizens of Western countries. The only sources of information about the country were a few thousand refugees who escaped to Thailand and the official pronouncements of the KR government. The refugees told stories of mass murders perpetrated by the Khmer Rouge and widespread starvation. Many leftist academics praised the Khmer Rouge and discounted the stories of the refugees.
In July 1978, Chomsky and his collaborator Edward S. Herman entered the controversy. Chomsky and Herman reviewed three books about Cambodia. Two of the books, by John Barron (and Anthony Paul) and François Ponchaud, were based on interviews with Cambodian refugees and concluded that the Khmer Rouge had killed, or been responsible for the deaths of, hundreds of thousands of Cambodians. The third book, by scholars Gareth Porter and George Hildebrand, described the KR in highly favorable terms. Chomsky and Herman called Barron and Paul's book "third rate propaganda" and part of a "vast and unprecedented propaganda campaign" against the KR. They said Ponchaud was "worth reading" but unreliable. Chomsky said that refugee stories of KR atrocities should be treated with great "care and caution" as no independent verification was available. By contrast, Chomsky was highly favorable toward the book by Porter and Hildebrand, which portrayed Cambodia under the Khmer Rouge as a "bucolic idyll." Chomsky also opined that the documentation of Porter's book was superior to that of Ponchaud's, even though almost all of the references cited by Porter came from Khmer Rouge documents while Ponchaud's came from interviews with Cambodian refugees.
Chomsky and Herman later co-authored a book about Cambodia titled After the Cataclysm (1979), which appeared after the Khmer Rouge regime had been deposed. The book was described by Cambodian scholar Sophal Ear as "one of the most supportive books of the Khmer revolution" in which they "perform what amounts to a defense of the Khmer Rouge cloaked in an attack on the media". In the book, Chomsky and Herman acknowledged that "The record of atrocities in Cambodia is substantial and often gruesome", but questioned their scale, which may have been inflated "by a factor of 100". Khmer Rouge agricultural policies reportedly produced "spectacular" results.
Contrary to Chomsky and Herman, the reports of massive Khmer Rouge atrocities and starvation in Cambodia proved to be accurate and uninflated. Many deniers or doubters of the Cambodian genocide recanted their previous opinions, but Chomsky continued to insist that his analysis of Cambodia was without error based on the information available to him at the time. Herman addressed critics in 2001: "Chomsky and I found that the very asking of questions about ... the victims in the anti-Khmer Rouge propaganda campaign of 1975–1979 was unacceptable, and was treated almost without exception as 'apologetics for Pol Pot'."
Chomsky's biographers look at this issue in different ways. In Noam Chomsky: A Life of Dissent, Robert Barsky focuses on Steven Lukes' critique of Chomsky in The Times Higher Education Supplement. Barsky cites Lukes' claim that, obsessed by his opposition to the United States' role in Indochina, Chomsky had "lost all sense of perspective" when it came to Pol Pot's Cambodia. Barsky then cites a response by Chomsky in which he says that, by making no mention of this, Lukes is demonstrating himself to be an apologist for the crimes in Timor and adds on this subject, "Let us say that someone in the US or UK... did deny Pol Pot atrocities. That person would be a positive saint as compared to Lukes, who denies comparable atrocities for which he himself shares responsibility and know how to bring to an end, if he chose". Barsky concludes that the vigor of Chomsky's remarks "reflects the contempt that he feels" for all such arguments.
In Decoding Chomsky, Chris Knight takes a rather different approach. He claims that because Chomsky never felt comfortable about working in a military-funded laboratory at MIT, he was reluctant to be too critical of any regime that was being targeted by that same military. Knight writes that "while Chomsky has denounced the Russian Bolsheviks of 1917, he has been less hostile towards the so-called communist regimes which later took power in Asia. ... He also seemed reluctant to acknowledge the full horror of the 'communist' regime in Cambodia. The explanation I favour is that it pained Chomsky's conscience to denounce people anywhere who were being threatened by the very war machine that was funding his research."
Scholar and former Cambodian refugee Sophal Ear has reacted personally to Chomsky's views about Cambodia. "While my family worked and died in rice fields," Ear said, "Chomsky sharpened his theories and amended his arguments while seated in his armchair in Cambridge, Massachusetts ... the peasants of Indochina, will write no memoirs and will be forgotten ... For decades, Chomsky has vilified his critics as only a world class linguist can. However, for me and the surviving members of my family, questions about life under the Khmer Rouge are not intellectual parlour games."
Views and activism on East Timor
In 1975, the Indonesian army, under the command of President Suharto, invaded East Timor and occupied it until 1999, resulting in between 80,000 and 200,000 East Timorese deaths. A detailed statistical report prepared for the Commission for Reception, Truth and Reconciliation in East Timor cited a lower range of 102,800 conflict-related deaths in the period 1974–1999, namely, approximately 18,600 killings and 84,200 'excess' deaths from hunger and illness. The death toll is considered "proportionately comparable" to that of the Cambodian genocide, though the absolute number of deaths in Cambodia was far higher.
Chomsky argued that decisive military, financial and diplomatic support was provided to Suharto's regime by successive U.S. administrations, beginning with that of Gerald Ford, who, with Henry Kissinger as Secretary of State, provided a 'green light' to the invasion. Prior to the invasion, the U.S. had supplied the Indonesian army with 90% of its arms, and "by 1977 Indonesia found itself short of weapons, an indication of the scale of its attack. The Carter Administration accelerated the arms flow. Britain joined in as atrocities peaked in 1978, while France announced that it would sell arms to Indonesia and protect it from any public "embarrassment". Others, too, sought to gain what profit they could from the slaughter and torture of Timorese." This humanitarian catastrophe went virtually unnoticed by the international community.
Noam Chomsky attempted to raise consciousness about the crisis at a very early stage. In November 1978 and October 1979, Chomsky delivered statements to the Fourth Committee of the U.N. General Assembly about the East Timor tragedy and the lack of media coverage.
In 1999, when it became clear that the majority of Timorese people were poised to vote in favour of their national independence in U.N. sponsored elections, Indonesian armed forces and paramilitary groups reacted by attempting to terrorize the population. At this time Chomsky chose to remind Americans of the three principal reasons why he felt they should care about East Timor:
Weeks later, following the independence vote, the Indonesian military drove "hundreds of thousands from their homes", destroying most of the country. "For the first time the atrocities were well publicized in the United States."
Australian historian Clinton Fernandes writes: "When Indonesia invaded East Timor with US support in 1975, Chomsky joined other activists in a tireless campaign of international solidarity. His speeches and publications on this topic were prodigious and widely read, but his financial support is less well known. When the US media were refusing to interview Timorese refugees, claiming that they had no access to them, Chomsky personally paid for the airfares of several refugees, bringing them from Lisbon to the US, where he tried to get them into the editorial offices of The New York Times and other outlets. Most of his financial commitment to such causes has – because of his own reticence – gone unnoticed." A Timorese activist quoted by Fernandes says that "the Chomsky factor and East Timor were a deadly combination" and that it "proved to be too powerful for those who tried to defeat us".
Standing before the UN Independent Special Commission of Inquiry for Timor-Leste, whose major report was released in 2006, Arnold Kohen, a U.S. activist who played a vital role in raising Western awareness of the catastrophe from 1975 onward, testified that:
When José Ramos-Horta and Bishop Carlos Belo of East Timor were honored with the Nobel Peace Prize, Chomsky responded "That was great, a wonderful thing. I ran into José Ramos-Horta in São Paulo. I haven't seen his official speech yet, but certainly he was saying in public that the prize should have been given to Xanana Gusmão, who is the leader of the resistance to Indonesian aggression. He's in an Indonesian jail. But the recognition of the struggle is a very important thing, or will be an important thing if we can turn it into something."
Views on Israel and Palestine
Chomsky "grew up ... in the Jewish-Zionist cultural tradition" (Peck, p. 11). His father was one of the foremost scholars of the Hebrew language and taught at a religious school. Chomsky has also had a long fascination with and involvement in Zionist politics. As he described:
He is highly critical of the policies of Israel towards the Palestinians and its Arab neighbors. His book The Fateful Triangle is considered one of the premier texts on the Israeli-Palestinian conflict among those who oppose Israel's policies in regard to the Palestinians as well as American support for the state of Israel. He has also accused Israel of "guiding state terrorism" for selling weapons to apartheid South Africa and Latin American countries that he characterizes as U.S. puppet states, e.g. Guatemala in the 1980s, as well as U.S.-backed paramilitaries (or, according to Chomsky, terrorists) such as the Nicaraguan Contras. (What Uncle Sam Really Wants, Chapter 2.4) Chomsky characterizes Israel as a "mercenary state", "an Israeli Sparta", and a militarized dependency within a U.S. system of hegemony. He has also fiercely criticized sectors of the American Jewish community for their role in obtaining U.S. support, stating that "they should more properly be called 'supporters of the moral degeneration and ultimate destruction of Israel'" (Fateful Triangle, p. 4). He says of the Anti-Defamation League (ADL):
In a 2004 interview with Jennifer Bleyer, published in The Ugly Planet, issue two, and in Heeb magazine, Chomsky stated:
In May 2013, Chomsky, along with other professors such as Malcolm Levitt, advised Stephen Hawking to boycott an Israeli conference.
As a result of his views on the Middle East, Chomsky was banned from entering Israel in 2010.
Views on the Iraq War
Chomsky opposed the Iraq War for what he saw as its consequences for the international system, namely that the war perpetuated a system in which power and force trump diplomacy and law. He summarised this view in Hegemony or Survival, writing:
Views on the Cuban embargo
In February 2009, Chomsky described the publicly stated U.S. goal of bringing "democracy to the Cuban people" as "unusually vulgar propaganda". In Chomsky's view, the U.S. embargo of Cuba has achieved its actual purpose. The goal of the embargo, according to Chomsky, has been to implement "intensive U.S. terror operations" and "harsh economic warfare" in order to cause "rising discomfort among hungry Cubans" in the hope that, out of desperation, they would overthrow the regime. Judged against this goal, Chomsky believes that "U.S. policy has achieved its actual goals" in causing "bitter suffering among Cubans, impeding economic development, and undermining moves towards more internal democracy." In Chomsky's view, the real "threat of Cuba" is that successful independent development on the island might stimulate others who suffer from similar problems to follow the same course, thus causing the "system of U.S. domination" to unravel.
Turkish oppression of Kurds
Chomsky has been very critical of Turkey's policies with regard to its Kurdish population, while also denouncing the military aid given to the Turkish government by the United States. Such aid, Chomsky states, allowed Turkey during the 1990s to conduct "US-backed terrorist campaigns" in southeast Turkey, which Chomsky believes "rank among the most terrible crimes of the grisly 1990s", featuring "tens of thousands dead" and "every imaginable form of barbaric torture." In 2016 he was one of the signatories of a petition by the Academics for Peace called “We will not be a party to this crime!”, demanding a peaceful solution to the Kurdish-Turkish conflict.
Chomsky has described contemporary Turkey as a degrading democracy:
Chomsky and his publishers against the Turkish Courts
In 2002 the Turkish state indicted a Turkish publisher, Fatih Tas, for distributing a collection of Chomsky's essays under the title "American Intervention". The state charged that the book "promoted separatism" violating Article 8 of the Turkish Anti-Terror Law. One essay in the book was a reprint of a speech that Chomsky had made in Toledo, Ohio containing material claiming that the Turkish state had brutally repressed its Kurdish population. Prosecutors cited the following passages as particularly offensive:
At the request of Turkish activists, Chomsky petitioned the Turkish courts to name him as a co-defendant. He testified at the court trial in Istanbul in 2002. Fatih Tas was acquitted. After the trial the BBC reported Tas as saying, "If Chomsky hadn't been here we wouldn't have expected such a verdict."
While Chomsky was in Turkey for the trial he traveled to the southern city of Diyarbakır, the unofficial capital of the Kurdish population in Turkey, where he delivered a controversial speech, urging the Kurds to form an autonomous, self-governing community. Police handed recorded cassettes and translations of the speech over to Turkish courts for investigation a few days later.
In June 2006, Turkish publisher Tas was again prosecuted, along with two editors and a translator, for publishing a Turkish translation of Manufacturing Consent, authored by Chomsky and Edward S. Herman. The defendants were accused under articles 216 and 301 of the Turkish Penal Code for "publicly denigrating Turkishness, the Republic and the Parliament" and "inciting hatred and enmity among the people". In December 2006, the four defendants were acquitted by Turkish courts.
In 2003, in the New Humanist, Chomsky wrote about repression of free speech in Turkey and "the courage and dedication of the leading artists, writers, academics, journalists, publishers and others who carry on the daily struggle for freedom of speech and human rights, not just with statements but also with regular acts of civil disobedience. Some have spent a good part of their lives in Turkish prisons because of their insistence on recording the true history of the miserably oppressed Kurdish population."
Views on the Sri Lanka conflict
Chomsky supports the Tamils' right to self-determination in Tamil Eelam, their homeland in the North and East of Sri Lanka. In a February 2009 interview, he said of the Tamil Eelam struggle: "Parts of Europe, for example, are moving towards more federal arrangements. In Spain, for example, Catalonia by now has a high degree of autonomy within the Spanish state. The Basque Country also has a high degree of autonomy. In England, Wales and Scotland in the United Kingdom are moving towards a form of autonomy and self-determination and I think there are similar developments throughout Europe. Though they're mixed with a lot of pros and cons, but by and large I think it is a generally healthy development. I mean, the people have different interests, different cultural backgrounds, different concerns, and there should be special arrangements to allow them to pursue their special interests and concerns in harmony with others."
In a Sri Lankan Crisis Statement submitted in September 2009, Chomsky was one of several signatories calling for full access to internment camps holding Tamils, respect for international law concerning prisoners of war and media freedom, and condemnation of discrimination against Tamils by the state since independence from Britain, and urging the international community to support and facilitate a political solution that addresses the self-determination aspirations of Tamils and the protection of the human rights of all Sri Lankans. A major offensive against the Tamils in the Vanni region of their homeland in 2009 resulted in the deaths of at least 20,000 Tamil civilians in five months, amid widespread concerns that war crimes had been committed against the Tamil population. At a United Nations forum on R2P, the Responsibility to Protect doctrine established by the UN in 2005, Chomsky said:
Chomsky was responding to a question that referred to an earlier statement by Jan Egeland, former head of the UN's humanitarian affairs office, that R2P had been a failure in Sri Lanka.
Other comments
Views on 9/11 conspiracy theories
Chomsky has dismissed 9/11 conspiracy theories, stating that there is no credible evidence to support the claim that the United States government was responsible for the attacks.
In addition, Chomsky said he would not be surprised if the conspiracy theory movement is being fueled by the government establishment to distract the public from more pressing matters.
Reception
Marginalization in the mainstream media
Chomsky has rarely appeared in popular media outlets in the United States such as CNN, Time magazine, Foreign Policy, and others. However, his recorded lectures are regularly replayed by NPR stations in the United States that carry the broadcasts of Alternative Radio, a syndicator of progressive lectures. Critics of Chomsky have argued that his mainstream media coverage is adequate and not unusual, given that academics in general receive low priority in the American media.
When CNN presenter Jeff Greenfield was asked why Chomsky was never on his show, he claimed that Chomsky might "be one of the leading intellectuals who can't talk on television. ... If you['ve] got a 22-minute show, and a guy takes five minutes to warm up, ... he's out". Greenfield described this need to "say things between two commercials" as the media's requirement for "concision". Chomsky has elaborated on this, saying that "the beauty of [concision] is that you can only repeat conventional thoughts. If you repeat conventional thoughts, you require zero evidence, like saying Osama Bin Laden is a bad guy, no evidence is required. However, if you say something that is true, although not a conventional truth, like the United States attacked South Vietnam, people are going to rightfully want evidence, and a whole lot of it, as they should. The format of the shows do not allow this type of evidence which is one of the reasons concision is critical." He has continued that if the media were better propagandists they would invite dissidents to speak more often, because the time constraint would stop them from properly explaining their radical views and they "would sound like they were from Neptune." For this reason, Chomsky rejects many offers to appear on TV, preferring the written medium.
Since his book 9-11 became a bestseller in the aftermath of the September 11, 2001 attacks, Chomsky has attracted more attention from the mainstream American media. For example, The New York Times published an article in May 2002 describing the popularity of 9-11. In January 2004, the Times published a highly critical review of Chomsky's Hegemony or Survival by Samantha Power, and in February, the Times published an op-ed by Chomsky himself, criticizing the Israeli West Bank Barrier for taking Palestinian land.
Worldwide audience
Despite his marginalization in the mainstream US media, Chomsky is one of the most globally famous figures of the left, especially among academics and university students, and frequently travels across the United States, Europe, and the Third World. He has a very large following of supporters worldwide as well as a dense speaking schedule, drawing large crowds wherever he goes. He is often booked up to two years in advance. He was one of the main speakers at the 2002 World Social Forum. He is interviewed at length in alternative media.
The 1992 film Manufacturing Consent was shown widely on college campuses and broadcast on PBS. It is the highest-grossing Canadian-made documentary film in history.
Many of his books are bestsellers, including 9-11, which was published in 26 countries and translated into 23 languages; it was a bestseller in at least five countries, including Canada and Japan. Chomsky's views are often given coverage on public broadcasting networks around the world – in marked contrast to his rare appearances in the US media. In the UK, for example, he appears periodically on the BBC.
Venezuelan President Hugo Chávez was known to be an admirer of Chomsky's books. He held up Chomsky's book Hegemony or Survival during his speech to the United Nations General Assembly in September 2006.
Bibliography
See also
Cambodian genocide denial
Military Keynesianism
Operation Gladio
References
External links
Noam Chomsky homepage
Noam Chomsky at MIT
Noam Chomsky's page on Academia.edu
Noam Chomsky at Zmag
Talks by Noam Chomsky at A-Infos Radio Project
Chomsky media files at the Internet Archive
Articles and videos featuring Noam Chomsky at AnarchismToday.org
The Political Economy of the Mass Media Part 1 Part 2 (March 15, 1989) lecture
OneBigTorrent.org (formerly "Chomsky Torrents") Many links to Chomsky-related media
Chomsky on Obama's Foreign Policy, His Own History of Activism, and the Importance of Speaking Out – video by Democracy Now!
Political views
Opposition to United States involvement in the Vietnam War
Political views by person
de:Noam Chomsky#Politisches Engagement |
2105965 | https://en.wikipedia.org/wiki/The%20Arsenal%20Stadium%20Mystery | The Arsenal Stadium Mystery | The Arsenal Stadium Mystery is a 1939 British mystery film and one of the first feature films wherein football is a central element in the plot. The film was directed by Thorold Dickinson, and shot at Denham Film Studios and on location at Arsenal Stadium. It was written by Dickinson, Donald Bull, and Alan Hyman, adapted from a 1939 novel by Leonard Gribble.
Plot
The film is a murder mystery set, as the title suggests, at the Arsenal Stadium, Highbury, London, then the home of Arsenal Football Club, who were at the time one of the dominant teams in English football. The backdrop is a friendly match between Arsenal and The Trojans, a fictitious amateur side. One of the Trojans' players drops dead during the match and when it is revealed he has been poisoned, suspicion falls on his teammates as well as his former mistress. Detective Inspector Slade (Leslie Banks) is called in to solve the crime.
The victim has been poisoned by a powerful digitalis-based chemical. There is evidence that he was being blackmailed.
The investigation gets complicated when the girlfriend (a prime suspect) is also murdered by the same method.
The police set a trap by putting on top of the poison a chemical which turns the skin black after a few hours. The player responsible is then spotted whilst playing.
Cast
Leslie Banks as Insp. Anthony Slade
Greta Gynt as Gwen Lee
Ian McLean as Sgt. Clinton
Liane Linden as Inga Larson
Anthony Bushell as John Doyce
Esmond Knight as Raille
Brian Worth as Phillip Morring
Richard Norris as Setchley
Wyndham Goldie as Kindilett
Alastair Macintyre as Carter
E. V. H. Emmett as Himself
George Allison as Himself
Production
The film stars several Arsenal players and members of staff such as Cliff Bastin and Eddie Hapgood, although only manager George Allison has a speaking part. The Trojans' body doubles on the pitch were players from Brentford, filmed during the First Division fixture between the two sides on 6 May 1939; this was the last match of the 1938–39 season and Arsenal's last official league fixture before the outbreak of the Second World War.
Brentford's players wore white shirts for the match because their first-choice red and white stripes would have clashed with Arsenal's red and white jerseys. The Trojans' players therefore wore similar white shirts in close-up sequences, which were then cut in with the match action.
Dickinson planned a follow-up, The Denham Studio Mystery, which was intended to incorporate footage from the abortive film I Claudius, but this fell through.
References
External links
1939 films
1939 mystery films
Arsenal F.C.
British black-and-white films
British films
British mystery films
British detective films
British association football films
Films shot at Denham Film Studios
English-language films
Films directed by Thorold Dickinson
Films set in London
Films shot in London
Films with screenplays by Patrick Kirwan
Films about murder |
10976682 | https://en.wikipedia.org/wiki/MakeHuman | MakeHuman | MakeHuman is a free and open source 3D computer graphics middleware designed for the prototyping of photorealistic humanoids. It is developed by a community of programmers, artists, and academics interested in 3D character modeling.
Technology
MakeHuman is developed using 3D morphing technology. Starting from a standard (unique) androgynous human base mesh, the program can transform it into a great variety of characters (male and female) by mixing morph targets with linear interpolation. For example, given the four main morphing targets (baby, teen, young, old), it is possible to obtain all the intermediate shapes.
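The idea can be illustrated with a short sketch. This is illustrative only, not MakeHuman's actual source code; the array shapes, target names and weights are assumptions: each morph target stores an alternative set of vertex positions, and the final character is obtained by adding the weighted offsets of each target to the base mesh.

```python
# Illustrative sketch only (not MakeHuman source code): blending a base mesh
# with morph targets by linear interpolation. Vertex arrays and target names
# are hypothetical.
import numpy as np

def apply_morphs(base_vertices, targets, weights):
    """Blend morph targets into a base mesh.

    base_vertices : (N, 3) array of the androgynous base mesh positions.
    targets       : dict name -> (N, 3) array of target positions.
    weights       : dict name -> float in [0, 1], e.g. slider values.
    """
    result = base_vertices.copy()
    for name, w in weights.items():
        # Each target contributes the weighted offset from the base mesh.
        result += w * (targets[name] - base_vertices)
    return result

# Example: a character halfway between "young" and "old".
base = np.zeros((100, 3))                      # placeholder base mesh
targets = {"young": np.random.rand(100, 3),    # placeholder morph targets
           "old":   np.random.rand(100, 3)}
blended = apply_morphs(base, targets, {"young": 0.5, "old": 0.5})
```

In the application itself the weights come from GUI sliders, and hundreds of such targets are combined in the same way.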
Using this technology, with a large database of morphing targets, it is possible to reproduce virtually any character. A simple GUI gives easy access to hundreds of morphings; the MakeHuman approach is to use sliders with common parameters such as height, weight, gender, ethnicity and muscularity. To make it available on all major operating systems, the program has, beginning with version 1.0 alpha 8, been developed in Python using OpenGL and Qt, with an architecture fully realized with plugins.
The tool is specifically designed for the modeling of virtual 3D humans, with a simple and complete pose system that includes the simulation of muscular movement. The interface is easy to use, with fast and intuitive access to the numerous parameters required in modeling the human form.
The development of MakeHuman is derived from a detailed technical and artistic study of the morphological characteristics of the human body. The work deals with morphing, using linear interpolation of both translation and rotation. With these two methods, together with a simple calculation of a form factor and a mesh-relaxation algorithm, it is possible to achieve results such as the simulation of the muscular movement that accompanies the rotation of the limbs.
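The "mesh relaxation" step referred to above can be pictured with a basic Laplacian smoothing pass. This is a minimal sketch under assumed data structures, not the algorithm actually shipped in MakeHuman:

```python
# Illustrative sketch only: one common form of mesh relaxation (Laplacian
# smoothing), pulling each vertex toward the average of its neighbours.
import numpy as np

def relax_mesh(vertices, neighbors, strength=0.5, iterations=5):
    """Smooth a mesh.

    vertices  : (N, 3) array of vertex positions.
    neighbors : list of index lists, neighbors[i] = vertices adjacent to i.
    strength  : how far each vertex moves toward its neighbourhood average.
    """
    v = vertices.copy()
    for _ in range(iterations):
        averages = np.array([v[idx].mean(axis=0) for idx in neighbors])
        v += strength * (averages - v)
    return v

# Tiny usage example: relax a noisy line of three points.
pts = np.array([[0.0, 0.0, 0.0], [0.4, 1.0, 0.0], [1.0, 0.0, 0.0]])
print(relax_mesh(pts, neighbors=[[1], [0, 2], [1]], iterations=1))
```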
License
MakeHuman is free and open-source, with the source code and database released under the GNU Affero GPL. Models exported from an official version are released under an exception, CC0, so that they can be widely used in both free and non-free projects, whether or not those projects are commercialised.
Awards
In 2004, MakeHuman won the Suzanne Award as best Blender Python script.
Software history
The ancestor of MakeHuman was MakeHead, a Python script for Blender written by Manuel Bastioni, artist and coder, in 1999. A year later, a team of developers had formed, and they released the first version of MakeHuman for Blender. The project evolved and, in 2003, it was officially recognized by the Blender Foundation and hosted on http://projects.blender.org. In 2004, development stopped because it had become difficult to write such a large Python script using only the Blender API. In 2005, MakeHuman was moved outside Blender, hosted on SourceForge and rewritten from scratch in C. At this point, version counting restarted from zero. During successive years, the software gradually transitioned from C to C++.
While the C++ version performed well, it was too complex to develop and maintain. Hence, in 2009, the team decided to go back to the Python language (with a small C core) and to release MakeHuman as version 1.0 pre-alpha. Development continued at a pace of two releases per year. The stable version 1.0.0 was officially released on March 14, 2014. MakeHuman 1.1.0 was released on May 14, 2016, around two years later. The most recent intermediate version is 1.1.1, as of March 5, 2017.
A community website was established in June 2015, featuring a forum section, a wiki, and a repository for user-contributed content for the program.
Evolution towards a universal model topology
The aim of the project is to develop an application capable of modeling a wide variety of human forms in the full range of natural poses from a single, universal mesh. For this purpose, the design of a 3D humanoid mesh that can readily be parametrically manipulated to represent anatomical characteristics has been pursued; the mesh includes a common skeleton structure that permits character posing. The MakeHuman team developed a model that combines different anatomical parameters to transition smoothly from infant to elderly, from man to woman, and from fat to slim.
The initial mesh occupies a middle ground, being neither pronouncedly masculine nor feminine, neither young nor old, and having a medium muscular definition. The goal was to depict a fair-built androgynous form, named the HoMunculus.
The current MakeHuman mesh has evolved through successive steps of MakeHuman project, incorporating lessons learned, community feedback and the results of considerable amounts of studies and experimentation.
Evolution of the mesh for the human model:
A first universal mesh prototype (head only), done in 1999 using makeHead script, was adapted for the early MakeHuman in 2000.
The first professional mesh (HM01) for a human model was realized by Enrico Valenza in 2002.
A second remarkable mesh (K-Mesh or HM02) was modelled by Kaushik Pal in 2003.
The third mesh (Z-Mesh or HM03) was created by Manuel Bastioni in 2005.
With experience from preceding versions, a fourth mesh (Y-Mesh or HM04) was done by Gianluca Miragoli (aka Yashugan) in 2007.
The fifth mesh (HM05) was built on the previous one by Gianluca Miragoli and Manuel Bastioni in 2008.
A sixth mesh (HM06) was also created by Gianluca Miragoli in 2010.
Another mesh version was released in 2010 by Waldemar Perez Jr., André Richard, Manuel Bastioni.
The latest and state-of-the-art mesh, released in 2013, was modeled by Manuel Bastioni.
Since the first release of makeHead (1999) and MakeHuman (2000), a challenge had been to construct a universal topology that retained all of these capabilities but added the ability to interactively adjust the mesh to accommodate the anatomical variety found in the human population. This could have been addressed by dramatically increasing the number of vertices in the mesh, but the resulting dense mesh would have limited performance on ordinary computers. Technically, the model developed for MakeHuman is:
Light and optimized for subdivision-surface modelling (15,128 vertices).
Made of quads only: the human mesh itself is free of triangles, using Catmull-Clark subdivision for extra resolution to the base mesh (see also polygon mesh).
Composed using only E(5) and N(3) poles, without holes and without 6-edge poles.
Research usage
Because of the freedom of the license, MakeHuman software is widely used by researchers for scientific purposes:
The MakeHuman mesh is used in industrial design, to verify the anthropometry of a project, and in virtual reality research, to quickly produce avatars from measurements or camera views.
MakeHuman characters are used in biomechanics and biomedical engineering, to simulate the behaviour of the human body under certain conditions or treatments. The human character model for a project on the construction of artificial mirror neuron systems was also generated with MakeHuman.
The software was used for visuo-haptic surgical training system development. These simulations combine tactile sense with visual information and provide realistic training scenarios to gain, improve, and assess resident and expert surgeons' skills and knowledge.
MakeHuman has been used for full-body 3D virtual reconstructions and for the 3D analysis of early Christian burials (archaeothanatology).
The tool has also been used to create characters to perform Sign Language movements.
MakeHuman can also be used for nonverbal behavior research, such as studies of facial expressions that involve the use of the Facial Action Coding System.
See also
Facial Action Coding System
Blender software
Poser
Daz Studio
FaceGen
ManuelbastioniLAB, a free and open source plug-in for Blender for the parametric 3D modeling of photorealistic humanoid characters
References and Related Papers
External links
Free 3D graphics software
3D modeling software for Linux
Anatomical simulation
Free software programmed in Python
3D graphics software that uses Qt
Windows graphics-related software
MacOS graphics software
Software using the GNU AGPL license |
14619329 | https://en.wikipedia.org/wiki/Micro%20Bill%20Systems | Micro Bill Systems | Micro Bill Systems, also known as MicroBillSys, MBS and Platte Media, is an online collection service with offices in Leeds, England, considered to be malware. The company states that it is a professional billing company offering "software management solutions that can aid your business in reducing uncollectable payments."
The company's best-known clients are online gambling and pornography sites offering three-day free trials of their subscription-based services.
If users do not cancel during the trial period, the MBS software begins a repeating cycle of full-screen pop-up windows warning users that their account is overdue and demanding payment.
The eleven-page MBS end-user license agreement contains a clause stating that unless the bill is paid, the software will disrupt computer use for longer each day, with up to four daily periods of 10 minutes during which the pop-up payment demand is locked and cannot be closed or minimized.
Users have complained about the unexpected bills, feel victimized, and deny ever accessing the video sites they are being billed for.
MBS denies installing its software by stealthy means and says that the software is downloaded by consent.
Many consumers are unaware that they have agreed to the download.
Security software company Symantec describes MicroBillSys as a potentially unwanted application that uses aggressive billing and collection techniques to demand payment after a three-day trial period, and says that there are reports of these techniques leaving the computer unable to browse the Internet.
Operation
When a user first accesses an online service whose collections are managed by MBS, the sign-up software creates a unique identifier based on the user's computer configuration and IP address. This identifier permits MBS to maintain a history of user access to supported sites and to send billing notices directly to the user's computer without the consumer ever having entered a name, credit card number, or other personal information.
The billing notices take the form of repeating pop-up windows warning users that their account is overdue and demanding payment for a 30-day subscription. Typical amounts are £19.95 (US$35.00) or £29.95 (US$52.50).
The pop-ups cover a substantial area of the screen and often cannot be closed, effectively preventing use of the computer for up to ten minutes. Their number and frequency increase over time, and to stop them consumers must pay. According to the company's terms and conditions, the agreement can be canceled and the software uninstalled only when no balance is outstanding.
For some who don't pay, Platte sends letters addressed to "the computer owner" threatening legal action in small claims court. The letters, described by one recipient as a "sham county court notice", include a "pre submission" information form which could mislead the unwary into thinking it comes from "Issuing Court Northampton County Court". It is unclear how Platte derives street addresses from IP addresses for these mailings, as ISPs interviewed deny providing such information. By filling out the information form and returning it, users provide Platte with their full name in addition to their correct mailing address. Similarly, users who complain to Platte by email or telephone are asked for their names and addresses so that uninstall codes can be mailed out. Payment demands follow. Later Platte began using a debt collection agency to try to pressurise people into making payments. In these cases, a charge is added to the 'subscription'.
MBS clients
MBS's initial clients were two adult content web sites. After being acquired by Platte Media (Platte International) in early 2008, the company expanded to include the promise of access to Hollywood movies from Getfilmsnow. Film studios Warner Bros. and 20th Century Fox have sent Getfilmsnow a cease and desist order, and say they have not licensed the films Platte is advertising.
While Platte's website presents the company as a mainstream media distribution company, an interview on the Radio Four programme You and Yours with Ashley Bateup, ex-managing director of MBS, indicates that the bulk of the full videos on the site are either black and white or of a pornographic nature.
Consumer complaints
The UK's Office of Fair Trading (OFT), charged with promoting and protecting consumer interests in the UK, received numerous complaints about the pop-up payment demands from consumers who said they had not realized they were agreeing to be billed. A number of them stated that the pop-up software had been downloaded without the computer having been used to access an MBS client site. The OFT said it was acting in the interests of those consumers whose access to MBS sites was confirmed, but it had no legal jurisdiction to deal with the issue of software being downloaded without consent.
MBS position
MBS denies installing its software by stealthy means, and says that the software is downloaded by consent when users visit an MBS client site. A malware researcher at computer security company Prevx found no evidence of surreptitious installations. A journalist investigating the complaints called the installation process "unmistakable", with "a download, clicking through screens, and entering a four-digit number." Among the required steps is acceptance of an eleven-page end-user license agreement that includes the clause:
The company says that when it looks into complaints, usually a member of the household has downloaded the software without reading the terms and conditions, and once the billing pop-ups begin they refuse to admit their use to the computer owner. The owner then assumes that the computer is somehow infected. The company says "Our customer service team's experience is that people seem to move into denial with their spouses or partners when pornography use is at question."
The software is difficult for non-technical users to remove, due in part to its use of mutually protective executable files.
The company says that if the software were easy to remove, many people would not pay for the services already consumed.
Undertakings
In response to the complaints, the Office of Fair Trading reviewed the MBS sign-up process and the fairness of its terms and conditions.
On 27 March 2008, the OFT announced MBS/Platte Media "undertakings", or pledges, to make the sign-up process more fair and setting limits on the amount of disruption the pop-up payment demands could cause.
The company promised to make clear in the sign-up process that the customer is entering into a contract, and that billing pop-ups will appear after the trial period ends. They also promised "to provide information about how consumers can have the 'pop-up' generating software uninstalled at any time".
The company promised
to not cause more than 20 pop-ups,
to not cause more than one pop-up in any 24-hour period, and
to not cause pop-ups "beyond the expiry of six weeks after payment has become due".
They also promised
to not cause more than ten locked-open pop-ups, and
to not cause locked-open pop-ups to remain locked for more than 60 seconds.
Payment demands delivered as other than pop-up windows are not restricted.
Statements by authorities
In announcing the MBS undertakings, the Office of Fair Trading's Head of Consumer Protection said "We believe that [the undertakings] achieve the right balance between protecting consumer interests without stifling innovation in the 'on-line' market place."
A local authority in the locale of the MBS Leeds office charged with preventing exploitation of vulnerable consumers, the West Yorkshire Trading Standards, has received hundreds of complaints about the pop-ups. A spokesman for the authority said "It is our opinion at this time that the company is operating within the bounds of existing legislation and as such it would be difficult to take any formal legal action against them."
One woman whose family computer was caught up in the pop-up cycle was interviewed in The Guardian. She wonders why, if the company's activities are indeed legitimate as West Yorkshire Trading Standards maintains, pressure has not been put on the Office of Fair Trading to tighten up the law.
Shutdown in the UK
On 9 March 2009, following a protracted letter-writing campaign conducted by the Platte/MBS Victims Forum, Martin Horwood MP raised a question in the House of Commons about the activities of Platte, and specifically about the number of complaints the OFT and Trading Standards had received about them. In response, he was informed that Platte had ceased trading in the UK with effect from 25 February 2009. No specific reason was given for this withdrawal, but the continued resistance by British consumers to what they regarded as an unfair business model is likely to have played a part in the decision, along with the threat of action by HM Revenue concerning possible non-payment of VAT. In an email to Michael Pollitt, the company said it had stopped operating in the UK, and that "Our reasons for this decision and our further intentions are simply related to our original marketing and business model", adding: "Obviously, and just like any other business should and would do, I am making sure that stopping our marketing to the UK Market, is done in such a sensible and orderly manner, that will best preserve the interests of our customers and of our own."
See also
Movieland — a similar business operating in the USA
Ransomware — a malware program that prevents access to files and/or computer unless paid.
References
External links
Is Micro Bill Systems legit or ransomware?
Removing Micro Bill Systems
The EasyPC Company UK - How to remove Platte Media
Malware |
371649 | https://en.wikipedia.org/wiki/Stellarium%20%28software%29 | Stellarium (software) | Stellarium is a free and open-source planetarium, licensed under the terms of the GNU General Public License version 2, available for Linux, Windows, and macOS. A port of Stellarium called Stellarium Mobile is available for Android, iOS, and Symbian as a paid version, being developed by Noctua Software. All versions use OpenGL to render a realistic projection of the night sky in real time.
Stellarium was featured on SourceForge in May 2006 as Project of the Month.
History
In 2006, Stellarium 0.7.1 won a gold award in the Education category of the Les Trophées du Libre free software competition.
A modified version of Stellarium has been used by the MeerKAT project as a virtual sky display showing where the antennae of the radio telescope are pointed.
In December 2011, Stellarium was added as one of the "featured applications" in the Ubuntu Software Center.
Planetarium dome projection
The fisheye and spherical mirror distortion features allow Stellarium to be projected onto domes. Spherical mirror distortion is used in projection systems that use a digital video projector and a first-surface convex spherical mirror to project images onto a dome. Such systems are generally cheaper than traditional planetarium projectors and fisheye-lens projectors, and for that reason are used in budget and home planetarium setups where projection quality is less important.
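As a rough illustration of what fisheye ("dome master") output involves, the equidistant fisheye mapping places the zenith at the centre of a circular image and the horizon on its rim; the spherical-mirror mode then warps such an image further to compensate for the mirror's geometry. The sketch below is illustrative only and is not Stellarium's code:

```python
# Illustrative sketch only (not Stellarium source code): the equidistant
# fisheye mapping from an altitude/azimuth direction to normalised
# coordinates in a circular dome image.
import math

def fisheye_project(alt_deg, az_deg):
    """Map a sky direction to (x, y) in the unit disc.

    alt_deg : altitude above the horizon in degrees (90 = zenith).
    az_deg  : azimuth in degrees, measured clockwise from north.
    """
    r = (90.0 - alt_deg) / 90.0          # zenith at centre, horizon at the rim
    x = r * math.sin(math.radians(az_deg))
    y = r * math.cos(math.radians(az_deg))
    return x, y

# The zenith lands at the centre of the dome image...
print(fisheye_project(90, 0))    # (0.0, 0.0)
# ...and a point on the horizon due east lands on the rim.
print(fisheye_project(0, 90))    # (1.0, ~0.0)
```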
Various companies which build and sell digital planetarium systems use Stellarium, such as e-Planetarium.
Digitalis Education Solutions, which helped develop Stellarium, created a fork called Nightshade which was specifically tailored to planetarium use.
VirGO
VirGO is a Stellarium plugin, a visual browser for the European Southern Observatory (ESO) Science Archive Facility which allows astronomers to browse professional astronomical data. It is no longer supported or maintained; the last version was 1.4.5, dated January 15, 2010.
Stellarium Mobile
Stellarium Mobile is a fork of Stellarium, developed by some of the Stellarium team members. It currently targets mobile devices running Symbian, Maemo, Android, and iOS. Some of the mobile optimisations have been integrated into the mainline Stellarium product.
Screenshots
See also
Space flight simulation game
List of space flight simulation games
Planetarium software
List of observatory software
References
External links
Stellarium Web Online Planetarium
2001 software
Cross-platform software
Free astronomy software
Free educational software
Free software programmed in C++
Planetarium software for Linux
Portable software
Science software for MacOS
Science software for Windows
Software that uses Qt |
25098455 | https://en.wikipedia.org/wiki/Anil%20Deshmukh | Anil Deshmukh | Anil Vasantrao Deshmukh is an Indian politician from the state of Maharashtra. He is a senior leader of the Nationalist Congress Party. Deshmukh served as the Minister for Home Affairs in Government of Maharashtra between 2019 and 2021.
Deshmukh resigned as Home Minister of Maharashtra in 2021 over allegations of extortion and money laundering, and is currently in judicial custody.
Deshmukh has been a member of 9th, 10th, 11th, 12th and 14th Maharashtra Legislative Assembly, representing Katol (Vidhan Sabha constituency).
He had previously served as Minister of Food & Civil supplies and Consumer affairs, Minister of Public Works Department, Minister of State for School Education, Information and Public Relations, Minister of State for Sports & Youth affairs and Minister of State for Education and Culture in the Government of Maharashtra.
Early life and family
Deshmukh hails from the village of Vad Vihira near Katol in Nagpur district. He attended Katol High school. Later, he attended the College of Agriculture, Nagpur and received a master of science degree in Agriculture awarded by Dr. Panjabrao Deshmukh Krishi Vidyapeeth.
Political career
Deshmukh served as the chairman of the Nagpur Zilla Parishad during the initial years of his political career. He was first elected to the Maharashtra state assembly in 1995 from Katol as an independent candidate, and represented that constituency until 2014. He served as a Minister of State in the BJP–Shiv Sena coalition government in 1995; his portfolios in that government included Education and Culture. He later joined the Nationalist Congress Party, a party formed in 1999. When the NCP–Congress alliance came to power in Maharashtra in 1999, he initially served as Minister of State for School Education, Information and Public Relations, Sports & Youth Welfare. He was later promoted to Cabinet Minister in the state government. As a cabinet minister from 2001 to 2014, he was in charge of the following departments:
Excise, Food & Drugs, Maharashtra State (2001 to March 2004)
Public Works (Public Undertakings), Maharashtra State (2004 to 2008)
Food, Civil Supplies and Consumer Protection, Maharashtra State (2009–2014)
Deshmukh lost the Katol seat to his nephew, Ashish Deshmukh, in the 2014 assembly elections. However, he regained the Katol seat as the NCP candidate in the 2019 Maharashtra assembly elections. Deshmukh was appointed Home Minister of Maharashtra when the MVA alliance of the NCP, Shiv Sena and Congress, led by Chief Minister Uddhav Thackeray, came to power in November 2019. In addition to his portfolio, Deshmukh is also the Guardian Minister of Gondia district.
From 1995 to 1999
Anil Deshmukh held the portfolio of Minister of State for School Education, Higher & Technical Education, Cultural Affairs in the government of Maharashtra from 1995 to 1999.
As the Minister of State for School Education, he started the initiative to regulate the burgeoning private coaching classes, where the teachers employed by schools simultaneously ran private coaching institutions.
Deshmukh is said to have been involved in setting up the Maharashtra Bhushan award, the highest civilian honour in the state of Maharashtra. Famous personalities including Purushottam Laxman Deshpande, Lata Mangeshkar and Sunil Gavaskar have received this award since.
He was active in the drive to reduce the weight of bags carried by students enrolled in schools affiliated with Maharashtra State Board of Secondary and Higher Secondary Education.
From 1999 to 2001
Deshmukh held the portfolio of Minister of State for School Education, Information and Public Relations, Sports & Youth Welfare in the Maharashtra Government during this period.
Under his charge, the construction of India's second-largest air-conditioned indoor sports stadium began in Nagpur.
From 2001 to 2004
Deshmukh was a Cabinet Minister in the Maharashtra government during this period and held the State Excise, Food & Drugs portfolio.
During his tenure, the Maharashtra Legislative Assembly passed the law to ban gutka, a form of chewing tobacco which contains carcinogens in the state.
2004 to 2008
During this period, Deshmukh was the Minister of Public Works (Public Undertakings) in the Maharashtra government.
He administered the department during the final few phases of the development of the Bandra–Worli Sea Link, a project which had already suffered delays of multiple years. The project was finally completed in 2009.
2019 and later
Deshmukh took charge of the Home Ministry of Maharashtra in the MVA coalition government led by Uddhav Thackeray after the 2019 Maharashtra political crisis.
He proposed the establishment of a specialized treatment center for the police personnel infected during the COVID-19 pandemic.
As the Home Minister, he tabled the proposed Shakti Bill in the Maharashtra Legislative Assembly which sought to modify provisions pertaining to sexual offences against women and children. The bill was ultimately sent for review to a committee of the Legislative Assembly after an outcry from various women's rights groups and activists.
As Home Minister, he introduced self-balancing electric scooters (Segways) for Mumbai Police personnel to assist them while on patrol duty.
He famously attended complaint calls at the Pune city police control room on New Year's Eve and celebrated the occasion with police personnel.
During his tenure, the home ministry declared ex gratia to the families of police personnel who lost their lives to COVID-19.
He also granted one-rank promotions to fourteen police officers for their valor and courage during the 26/11 Mumbai terror attacks.
He is claimed to be the first Home Minister of the state to visit the Regional Forensic Sciences Laboratory in Pune, where he spent time interacting with the staff and discussed ways to equip the laboratory with the latest technology.
2021 accusations and resignation as Home minister
In a letter in March 2021, former Mumbai Police commissioner Param Bir Singh accused Deshmukh of bribery.
Deshmukh resigned from the post of the Home Minister of Maharashtra in the MVA coalition government led by CM Uddhav Thackeray after the Bombay high court directed the Central Bureau of Investigation to conduct a preliminary inquiry into allegations of corruption and misconduct leveled by former commissioner Param Bir Singh.
CBI and ED probe
Anil Deshmukh is currently under investigation by the Indian Central Bureau of Investigation and the Enforcement Directorate, following accusations made by the former Mumbai Police commissioner Param Bir Singh. Deshmukh remained untraceable from July 2021 for more than three months and failed to appear before the Enforcement Directorate five times during that period. However, on November 1, 2021, after the Bombay High Court on October 30 denied his plea to cancel the Enforcement Directorate's summons, he voluntarily came to the ED office in Mumbai and was formally arrested. The Bombay High Court set aside the judicial custody order and remanded Deshmukh into custody until November 12. At present he is seeking bail.
The probe has been criticized by NCP chief Sharad Pawar.
Ministerial positions held in Government of Maharashtra
See also
2021 Mukesh Ambani/Antilia bomb scare case
Footnotes
References
Marathi politicians
Living people
People from Nagpur district
Maharashtra MLAs 1995–1999
Maharashtra MLAs 1999–2004
Maharashtra MLAs 2004–2009
Maharashtra MLAs 2009–2014
Nationalist Congress Party politicians from Maharashtra
1950 births
mr:अनिल देशमुख |
42483250 | https://en.wikipedia.org/wiki/ZeroVM | ZeroVM | ZeroVM is an open source light-weight virtualization and sandboxing technology. It virtualizes a single process using the Google Native Client platform. Since only a single process is virtualized (instead of a full operating system), the startup overhead is in the order of 5 ms.
Sandboxing
ZeroVM creates a sandbox around a single process,
using technology based on Google Native Client (NaCl). The sandbox ensures that the application executed cannot access data in the host operating system, so it is safe to execute untrusted code. The programs executed in ZeroVM must first be cross-compiled to the NaCl platform. ZeroVM can only execute NaCl code compiled for the x86-64 platform, not the portable Native Client (PNaCl) format.
Code executed in ZeroVM cannot call normal system calls and initially cannot interact with the host environment. All communication with the outside world takes place over channels, which must be declared before the program starts. Outside the sandbox, a channel can be connected to a local file, to a pipe, or to another ZeroVM instance.
Inside the sandbox, the program sees the channel as a file descriptor. The sandboxed program can read/write data from/to the channel, but does not know where the channel is connected in the host.
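The division of labour between host-side channel wiring and the in-sandbox file-descriptor view can be pictured with a small toy model. The names and the dictionary "manifest" below are hypothetical and do not reflect ZeroVM's real manifest syntax or API:

```python
# Toy model only: hypothetical names, not ZeroVM's real manifest or API.
# Host side: channels are declared before the program starts, mapping an
# in-sandbox name to a host endpoint that the guest program never sees.
with open("input.txt", "wb") as f:           # sample data for the demo
    f.write(b"hello from the host\n")

channels = {
    "/dev/input":  {"host_endpoint": "input.txt",  "mode": "read"},
    "/dev/output": {"host_endpoint": "output.txt", "mode": "write"},
}

def run_sandboxed(program, channels):
    """Open the declared endpoints on the host and hand the guest program
    plain file objects; inside the sandbox only these descriptors exist."""
    handles = {
        name: open(spec["host_endpoint"],
                   "rb" if spec["mode"] == "read" else "wb")
        for name, spec in channels.items()
    }
    try:
        program(handles)                     # guest code: no other I/O possible
    finally:
        for h in handles.values():
            h.close()

def guest(handles):
    """Guest program: reads and writes its channels without knowing what
    they are connected to on the host."""
    data = handles["/dev/input"].read()
    handles["/dev/output"].write(data.upper())

run_sandboxed(guest, channels)
```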
Programs compiled for ZeroVM can optionally use the ZeroVM Runtime library called ZRT. This library aims to provide the program with a POSIX environment.
It does this by replacing parts of the C standard library. In particular, ZRT replaces C file input/output functions such as fopen and opendir with versions that operate on an in-memory filesystem. The root filesystem is provided as a tarball. This allows a program to "see" a normal Unix environment.
The ZRT also replaces C date and time functions such as time to give programs a fixed and deterministic environment. With fixed inputs, every execution is guaranteed to give the same result. Even programs that are not purely functional become deterministic in this restricted environment.
This makes programs easier to debug since their behavior is fixed.
Integration with Swift
ZeroVM has been integrated with Swift, the distributed object storage component of OpenStack.
When the ZeroCloud middleware is installed into Swift, a client can make a request to Swift containing a ZeroVM program. The program is then executed directly on the storage nodes. This means that the program has direct access to the data.
History
ZeroVM was developed by LiteStack, an Israeli startup. The first commit in the zerovm Git repository was added in November 2011.
LiteStack was bought by Rackspace in October 2013.
ZeroVM participated in the Techstars Cloud 2013 incubator program and received $500,000 in seed funding.
The first ZeroVM Design Summit was held in January 2014 at the University of Texas at San Antonio.
See also
Google Native Client
LXC (LinuX Containers)
seccomp
Docker (software)
References
External links
Stable Ubuntu packages
Latest Ubuntu packages
Free virtualization software
Virtualization-related software for Linux
Free software for cloud computing
Free software projects
Operating system security |
1027852 | https://en.wikipedia.org/wiki/Virgin%20Interactive | Virgin Interactive | Virgin Interactive Entertainment (later renamed Avalon Interactive) was the video game publishing division of British conglomerate the Virgin Group. It developed and published games for every major platform and was home to many talented developers, including Brett Sperry (co-founder of Westwood Studios, makers of the Command & Conquer series) and Earthworm Jim creators David Perry and Doug TenNapel. Other Virgin Interactive alumni include video game composer Tommy Tallarico and animators Bill Kroyer and Andy Luckey.
Formed as Virgin Games in 1983, and built around a small development team called the Gang of Five, the company grew significantly after purchasing budget label Mastertronic in 1987. As Virgin's video game division grew into a multimedia powerhouse, it crossed over to other industries from toys to film to education. To highlight its focus beyond video games and on multimedia, the publisher was renamed Virgin Interactive Entertainment in 1993.
As a result of a growing trend throughout the 1990s of media companies, movie studios and telecom firms investing in video game makers to create new forms of entertainment, VIE became part of the entertainment industry after being acquired by media behemoths Blockbuster and Viacom, who were attracted by its edge in multimedia and CD-ROM-based software development.
Being centrally located in close proximity to the thirty-mile zone and having access to the media content of its parent companies drew Virgin Interactive's U.S. division closer to Hollywood as it began developing sophisticated interactive games, leading to partnerships with Disney and other major studios on motion picture-based games such as The Lion King, Aladdin, RoboCop and The Terminator, in addition to being the publisher of popular titles from other companies like Capcom's Resident Evil series and Street Fighter Collection and id Software's Doom II in the European market.
VIE ceased to exist in mid-2003 after being acquired by French publisher Titus Software who rebranded them to Avalon Interactive in July of that year. The VIE library and intellectual properties are owned by Interplay Entertainment as a result of its acquisition of Titus. A close affiliate and successor of Spanish origin, Virgin Play, was formed in 2002 from the ashes of former Virgin Interactive's Spanish division and kept operating until it folded in 2009.
History
Early History (1983-1987)
Nick Alexander started Virgin Games in 1983 after leaving Thorn EMI. It was headquartered in Portobello Road, London. The firm initially relied on submissions by freelance developers, but set up its own in-house development team in 1984, known as the Gang of Five. Early successes included Sorcery and Dan Dare. The company expanded with the acquisition of several smaller publishers: Rabbit Software, New Generation Software and Leisure Genius (publishers of the first officially licensed computer versions of Scrabble, Monopoly and Cluedo).
Purchase of Mastertronic and rebranding to Virgin Mastertronic (1987-1991)
1987 marked a turning point for Virgin after its acquisition of struggling distributor Mastertronic. Mastertronic had opened its North American headquarters in Irvine, California just a year earlier to build on its success at home, though expansion in Europe and the acquisition of publisher Melbourne House had exhausted its resources. Richard Branson stepped in and offered to buy a 45 percent stake in Mastertronic; in exchange, Mastertronic joined the Virgin Group.
The subsequent merger created Virgin Mastertronic Ltd. in 1988, with Alper as its president, which enabled Virgin to expand its business reach overseas. Mastertronic had been the distributor of the Master System in the United Kingdom and is credited with introducing Sega to the European market, where it expanded rapidly. The Mastertronic acquisition enabled Virgin to compete with Nintendo in the growing home console market.
Return to Publishing (1991-1993)
To gain a foothold in its newly established market, Sega Enterprises, Ltd. acquired Mastertronic in 1991 while Virgin retained a small publishing unit, which was renamed Virgin Interactive Entertainment in 1993.
Hasbro, which had previously licensed some of its properties to Virgin, bought a 15 percent stake in VIE in August 1993, later increased to 16.2 percent. Hasbro wanted to create titles based on its brands, which included Transformers, G.I. Joe and Monopoly. The deal cut off competitors like Mattel and Fisher-Price who were interested in a similar partnership.
In late 1993, Virgin Interactive spun off a new company, Virgin Sound And Vision, to focus exclusively on CD-based children's entertainment.
Purchase by Blockbuster Entertainment and Spelling Entertainment (1994-1998)
As more media companies became interested in interactive entertainment, Blockbuster Entertainment, then the world's largest video-store chain, acquired 20 percent of Virgin Interactive Entertainment in January 1994. It acquired 75 percent of VIE's stock later in 1994 and purchased the remaining shares held by Hasbro in an effort to expand beyond its video store base. Hasbro went on to found its own game company, Hasbro Interactive, the following year. The partnership with Blockbuster ended a year later when Blockbuster sold its stake to Spelling Entertainment, at the time a subsidiary of Viacom. Viacom is the owner of Paramount Pictures and MTV, which made Virgin Interactive part of one of the world's largest entertainment companies. Viacom had planned to sell Spelling and to buy Virgin Interactive out of Spelling before any sale; although the Spelling sale was later abandoned, the collapse in the games market appears to have ended any interest in buying Virgin.
Blockbuster and Viacom invested heavily in the production of CD-based interactive multimedia—video games featuring sophisticated motion-picture video, stereo sound and computer animation. VIE's headquarters were expanded to include 17 production studios where expensive SGI "graphics supercomputers" were used to build increasingly complicated games, and the company eventually became one of the five largest U.S.-based video game companies.
In 1995, VIE signed a deal with Capcom to publish its titles in Europe, supplanting Acclaim Entertainment as Capcom's designated European distributor. VIE later published titles released by other companies, such as Hudson Soft.
Re-independence and purchase of US operations by Electronic Arts (1998-1999)
Spelling put its ownership of Virgin up for sale as a public stock offering in 1997, stating that Virgin's financial performance had been disappointing. Since Spelling's purchase of the company, Virgin had lost $14 million in 1995 and was expected to post similar losses for 1996.
In 1998, Virgin Interactive's US operations were divested to Electronic Arts as part of its $122.5 million (£75 million) acquisition of Westwood Studios that same year. Electronic Arts also acquired the Burst Studios development studio, which was renamed to Westwood Pacific by its new owners.
The European division, though, was sold in a majority-stake buyout backed by Mark Dyne, who became its chief executive officer in the same year. Tim Chaney, the former managing director, was named president.
Purchase by Interplay and Titus (1999-2002)
On February 17, 1999, Interplay Entertainment purchased a 49.9% minority interest in the company, allowing Interplay to distribute Virgin's titles in North America and Virgin distributing Interplay's titles in Europe. In October of that year, Titus Interactive acquired a 50.1% majority interest in VIE after the company acquired a majority interest in Interplay.
In 2001, Titus Software Corporation, the North American division of Titus Interactive, announced a new line of games to be branded under the Virgin Interactive name in North America, which were to be sold at a budget price of $20. These games would be Screamer 4x4, Codename: Outbreak, Original War, Jimmy White's Cueball World and Nightstone. This would be the first time since 1998 that the Virgin Interactive name would be used for publishing in the country, excluding the North American release of Jimmy White's 2: Cueball, which was handled by Bay Area Multimedia.
Full purchase by Titus, sale of Spanish operations, Rebranding, and Fate (2002-2006)
In early 2002, as part of Titus Interactive's buyout of Interplay's European operations, Interplay's shares in Virgin Interactive were sold to Titus, which made the company a 100% owned subsidiary of Titus Software. Virgin Interactive ceased publishing its own games soon afterwards, and became solely a video game distributor for Titus and Interplay's titles.
In June 2002, Titus accepted a management buyout (MBO) of Virgin Interactive's Spanish operations by Tim Chaney, under which the Spanish company would continue to distribute Titus' titles in the region. With this, the company was out of Titus' hands and was rebranded as Virgin Play in October of that year.
On July 1, 2003, Virgin Interactive's British and French operations were renamed to Avalon Interactive and Avalon France by Titus, respectively.
In January 2005, Titus Interactive filed for bankruptcy with €33 million ($43.8 million) debt. Avalon France and all of Titus' French operations were closed down immediately, while the UK branch continued to trade as Titus’ non-French operations were unaffected. Avalon Interactive was eventually closed by May 2006.
Games
Falcon Patrol (1983)
Falcon Patrol II (1984)
Sorcery (1984)
The Biz (1984)
Strangeloop (1985)
Doriath (1985)
Gates of Dawn (1985)
Hunter Patrol (1985)
Now Games compilation series (1985–1988)
Dan Dare: Pilot of the Future (1986)
Shogun (1986)
Action Force (1987)
Action Force II (1988)
Clue: Master Detective (1989)
Double Dragon II (European computer versions) (1989)
Risk: The World Conquest Game, The Computer Edition of (1989)
Silkworm (1989)
Golden Axe (European computer versions) (1990)
Conflict: Middle East Political Simulator (1990)
Supremacy: Your Will Be Done (Overlord) (1990)
Spot: The Video Game (1990)
Wonderland (1990)
Chuck Rock (1991)
Robin Hood: Prince of Thieves (1991)
Corporation (1991)
Jimmy White's Whirlwind Snooker (1991)
Realms (1991)
Alien 3 (American Amiga version) (1992)
Prince of Persia (American NES version) (1992)
Dune (1992)
Dune II (1992)
Archer McLean's Pool (1992)
European Club Soccer (1992)
Floor 13 (1992)
Global Gladiators (1992)
The Terminator (1992)
M.C. Kids (1992)
Monopoly Deluxe (1992)
Jeep Jamboree: Off Road Adventure (1992)
Cannon Fodder (1993)
Chuck Rock II: Son of Chuck (1993)
Superman: The Man of Steel (Europe only) (1993)
Dino Dini's Goal (1993)
Dragon: The Bruce Lee Story (1993)
Lands of Lore: The Throne of Chaos (1993)
Reach for the Skies (1993)
The 7th Guest (1993)
Cool Spot (1993)
Chi Chi's Pro Challenge Golf (1993)
Super Slam Dunk (1993)
Super Caesars Palace (1993)
Super Slap Shot (1993)
Disney's Aladdin (1993)
RoboCop Versus The Terminator (1993/1994)
The Terminator (Sega CD version) (1993)
Cannon Fodder 2 (1994)
Doom II: Hell on Earth (European PC version only) (1994)
Earthworm Jim (Europe only) (1994)
Jammit (America only) (1994)
Super Dany (Europe only) (1994)
Beneath a Steel Sky (1994)
Walt Disney's The Jungle Book (1994)
Dynamaite: The Las Vegas (1994)
The Lion King (1994)
Demolition Man (1994)
Battle Jockey (1994)
The 11th Hour (1995)
Creature Shock (1995)
Earthworm Jim 2 (Europe only) (1995)
Spot Goes To Hollywood (American Mega Drive/Genesis version published by Acclaim Entertainment) (1995)
Cyberia 2: Resurrection (1995)
The Daedalus Encounter (1995)
F1 Challenge (1995)
Flight Unlimited (1995)
Hyper 3-D Pinball (1995)
SuperKarts (1995)
Zone Raiders (1995)
Sensible Golf (1995)
Lost Eden (1995)
Kyle Petty's No Fear Racing (1995)
Command & Conquer (1995)
Gurume Sentai Barayarō (1995)
World Masters Golf (1995)
Rendering Ranger: R2 (1995)
Agile Warrior F-111X (1995)
Lone Soldier (Japan only) (1996)
The Mask (Japan only) (1996)
Resident Evil (Europe and PC versions only) (1996)
Ghen War (Europe/Japan) (1996)
NHL Powerplay '96 (1996)
Street Fighter Alpha 2 (Europe only) (1996)
Time Commando (Japan only) (1996)
Broken Sword: The Shadow of the Templars (1996)
Command & Conquer: Red Alert (1996)
Disney's Pinocchio (1996)
Queensrÿche's Promised Land (1996)
Toonstruck (1996)
Golden Nugget (1997)
Grand Slam (1997)
Subspace (1997)
Agent Armstrong (1997)
Black Dawn (1997)
Blam! Machinehead (Japan only) (1997)
CrimeWave (Japan only) (1997)
Marvel Super Heroes (Europe only) (1997)
NanoTek Warrior (1997)
Lands of Lore: Guardians of Destiny (1997)
Broken Sword II: The Smoking Mirror (1997)
Mega Man X3 (PS1 and Saturn versions, Europe only) (1997)
NHL Powerplay '98 (1997)
Sabre Ace: Conflict Over Korea (1997)
Ignition (1997)
Bloody Roar (Europe only) (1998)
Magic & Mayhem (Europe only) (1998)
R-Types (Europe only) (1998)
Rival Schools: United by Fate (Europe only) (1998)
Resident Evil 2 (Europe only) (1998)
Street Fighter Collection 2 (European publishing rights only) (1999)
Bloody Roar 2 (European publishing rights only) (1999)
Bomberman (European publishing rights only) (1999)
Bomberman Quest (European publishing rights only) (1999)
Capcom Generations (Europe only) (1999)
Kagero: Deception II (European publishing rights only) (1999)
Dino Crisis (European publishing rights only) (1999)
Holy Magic Century (European publishing rights only) (1999)
Street Fighter EX2 Plus (European publishing rights only) (1999)
Marvel Super Heroes vs. Street Fighter (European publishing rights only) (1999)
Street Fighter Alpha: Warriors' Dreams (European publishing rights only) (1999)
Marvel vs. Capcom: Clash of Super Heroes (European publishing rights only) (2000)
Tech Romancer (European publishing rights only) (2000)
Operation WinBack (European publishing rights only) (2000)
Marvel vs. Capcom 2: New Age of Heroes (European publishing rights only) (2000)
Bomberman Fantasy Race (European publishing rights only) (2000)
Plasma Sword: Nightmare of Bilstein (European publishing rights only) (2000)
Street Fighter III: Double Impact (European publishing rights only) (2000)
Street Fighter Alpha 3 (European publishing rights only) (2000)
Dino Crisis 2 (European publishing rights only) (2000)
Gunlok (Europe only) (2000)
Super Runabout: The Golden State (European publishing rights only) (2000)
Strider 2 (European publishing rights only) (2000)
Giga Wing (European publishing rights only) (2000)
Capcom vs. SNK (European publishing rights only) (2000)
Resident Evil 3: Nemesis (European Dreamcast version only) (2000)
Trick'N Snowboarder (European publishing rights only) (2000)
Jimmy White's 2: Cueball (Distributed in North America by BAM! Entertainment) (2000)
Pocket Racing (European publishing rights only) (2000)
Mr. Driller (European GBC and Dreamcast versions) (2000)
JoJo's Bizarre Adventure (European publishing rights only) (2000)
Street Fighter III: 3rd Strike (European publishing rights only) (2000)
Evolva (European publishing rights only) (2000)
Project Justice (European publishing rights only) (2000)
Heist (titled as Raub in Germany) (2001)
Gunbird 2 (European publishing rights only) (2001)
European Super League (Europe Only) (2001)
3D Pocket Pool (Europe Only) (2001)
Project Justice: Rival Schools 2 (European publishing rights only) (2001)
Bloody Roar III (European publishing rights only) (2001)
Original War (2001)
Screamer 4x4 (2001)
Codename: Outbreak (2001)
Lotus Challenge (European PS2 version) (2001)
Magic & Mayhem: The Art of Magic (European publishing rights only) (2001)
Jimmy White's Cueball World (Europe exclusive game) (2001)
Resident Evil: Gaiden (European publishing rights only) (2001)
NightStone (2002)
Guilty Gear X (European publishing rights only) (2002)
Notes
References
External links
Official website (archived through 2003)
Avalon Interactive Portal (offline)
Virgin Interactive profile on MobyGames
I
Video game companies established in 1983
Video game companies disestablished in 2006
Defunct video game companies of the United Kingdom
Companies based in Orange County, California |
11980309 | https://en.wikipedia.org/wiki/JasperReports | JasperReports | JasperReports is an open source Java reporting tool that can write to a variety of targets, such as: screen, a printer, into PDF, HTML, Microsoft Excel, RTF, ODT, comma-separated values (CSV) or XML files.
It can be used in Java-enabled applications, including Java EE or web applications, to generate dynamic content. It reads its instructions from an XML or .jasper file.
JasperReports is part of the Lisog open source stack initiative.
Features
JasperReports is an open source reporting library that can be embedded into any Java application. Features include:
Scriptlets may accompany the report definition, which the report definition can invoke at any point to perform additional processing. The scriptlet is built using Java, and has many hooks that can be invoked before or after stages of the report generation, such as Report, Page, Column or Group.
Sub-reports
For users with more sophisticated report management requirements, reports designed for JasperReports can be easily imported into the JasperServer—the interactive report server.
Jaspersoft
Teodor Danciu began work on JasperReports in June 2001, the sf.net project was registered in September 2001 and JasperReports 0.1.5 was released on November 3, 2001.
Jaspersoft was founded as Panscopic by Al Campa, CEO, and Raj Bhargava, VP of Products in 2001. Panscopic raised $23M from Doll Capital, Discovery Ventures, Morgenthaler Ventures, and Partech. In 2004, Panscopic teamed up with Teodor Danciu, acquired the intellectual property of JasperReports, and changed the name of the company to Jaspersoft. Brian Gentile became CEO in 2007.
JasperReports Version 1.0 was released on July 21, 2005. The code was originally licensed under a copyleft JasperReports License and later moved to LGPL.
Jaspersoft's main related product is JasperReports Server, a Java EE web application that provides advanced report server capabilities such as report scheduling and permissions. It is available under an open source license for use in conjunction with open source infrastructure such as MySQL and JBoss, or a commercial license for enterprise deployments involving commercial databases and application servers.
Jaspersoft provides commercial software around the JasperReports product, and negotiates contracts with software developers that wish to embed the JasperReports engine into a closed source product. Jaspersoft is a gold partner with MySQL, and JasperReports was included in the PostgreSQL distribution Bizgres version 0.7.
On April 28, 2014, TIBCO announced its acquisition of Jaspersoft for approximately $185 million.
JRXML
JasperReports reports are defined in an XML file format, called JRXML, which can be hand-coded, generated, or designed using a tool. The file format is defined by a Document Type Definition (DTD) or XML schema for newer versions, providing limited interoperability. JRXML files have the filename extension .jrxml.
A .jasper file is a compiled version of a .jrxml file. A report can be compiled ahead of time to a .jasper file, or the compilation can be performed at runtime using the JasperCompileManager class.
IDE integration
Several Java IDEs, such as NetBeans, Eclipse and IBM WebSphere Studio Application Developer, provide instructions for users wishing to integrate JasperReports into a project.
See also
Crystal Reports
References
Further reading
Code refactoring
JasperReports has been the focus of several academic papers on code refactoring
External links
Java platform software
Reporting software
Free reporting software
Free software programmed in Java (programming language)
Business intelligence |
63379032 | https://en.wikipedia.org/wiki/Nissim%20Francez | Nissim Francez | Nissim Francez (Hebrew: נסים פרנסיז; born: 19 January 1944) is an Israeli professor, emeritus in the computer science faculty at the Technion, and former head of computational linguistics laboratory in the faculty.
Early life and education
Nissim Francez was born in Bulgaria. His family emigrated to Israel in 1949. He received his B.Sc. in mathematics and philosophy from the Hebrew University, Jerusalem in 1965. After his military service in the IDF, he studied at the Department of Applied Mathematics at the Weizmann Institute, Rehovot, and received his M.Sc. in 1971.
He continued his studies there and received his Ph.D. degree in 1976, under the supervision of Prof. Amir Pnueli.
Career
Francez was a Research Associate at Queen's University Belfast, Northern Ireland in 1976. A year later he joined the Computer Science Department of the University of Southern California (USC), as an assistant professor.
In 1978 he returned to Israel as a lecturer in the Computer Science Department at the Technion, Haifa. A year later he was promoted to senior lecturer, and in 1984 to associate professor. In 1991 he became a full professor at the Computer Science Faculty of the Technion, and from 1996 to 2006 he was the head of the Computational Linguistics Laboratory at the faculty.
Francez held the Bank Leumi chair in Computer Science in the faculty from 2000 until 2010, when he retired from the Technion as professor emeritus.
In his sabbaticals and summer leaves, Francez has been a Research Associate at Aiken Computation Lab. at Harvard University in the summers of 1981 and 1982. He was also a Visiting Scientist at Abo Academy, Turku, Finland (1988) and at the Department of Computer Science, University of Utrecht, The Netherlands (1992). Francez was an Honorary Visiting Professor at the Department of CS, Manchester University (1996-1997), and a Senior Academic Visitor at HCRC, Department of Informatics, Edinburgh University (2002)
and at the School of Computer Science, St Andrews University (2007).
Professional work
Francez worked at the IBM Scientific Center, Haifa from 1981 to 1982, and a year later was a visiting scientist at the IBM T. J. Watson Research Center, Yorktown Heights, New York, United States.
In 1983–85 he worked on the design and implementation of a Prolog programming environment at the IBM Scientific Center, Haifa.
He was a visiting scientist at the Microelectronics and Computer Technology Corporation (MCC), Austin, Texas, US in the summers of 1986 and 1987, and in 1989–1990.
In 1997 he was a Visiting Scientist at Centrum Wiskunde & Informatica (CWI), Amsterdam.
Research
Francez's current research focuses on proof-theoretic semantics for logic and natural language.
He has also carried out work in formal semantics of natural language, type-logical grammar, computational linguistics, unification-based grammar formalisms (LFG, HPSG). In the past he was interested in semantics of programming languages, program verification, concurrent and distributed programming and logic programming.
Membership in professional societies
Francez was a member of the following associations: the Association for Computing Machinery (SIGPLAN), the IEEE Computer Society, the Association for Computational Linguistics (ACL), the Association for Logic Programming, the International Association for Logic, Language and Information (FoLLI), the European Association for Theoretical Computer Science (EATCS), and the Israeli Association for Theoretical Linguistics (IATL).
He was also a guest editor (with Ian Pratt-Hartmann) of a special issue of Studia Logica on Logic and Natural Language (2012).
Selected Bibliography
Books
Articles
External links
Nissim Francez, Google Scholar
Nissim Francez, at DBLP Bibliography Server
References
Bulgarian emigrants to Israel
Israeli Jews
Bulgarian Jews in Israel
Theoretical computer scientists
1944 births
Israeli people of Bulgarian-Jewish descent
Israeli computer scientists
Living people
Technion – Israel Institute of Technology faculty |
28085755 | https://en.wikipedia.org/wiki/OpenStack | OpenStack | OpenStack is a free, open standard cloud computing platform. It is mostly deployed as infrastructure-as-a-service (IaaS) in both public and private clouds where virtual servers and other resources are made available to users. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users manage it either through a web-based dashboard, through command-line tools, or through RESTful web services.
OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA. , it was managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community. By 2018, more than 500 companies had joined the project. In 2020 the foundation announced it would be renamed the Open Infrastructure Foundation in 2021.
History
In July 2010, Rackspace Hosting and NASA announced an open-source cloud-software initiative known as OpenStack. The mission statement was "to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable".
The project intended to help organizations offer cloud-computing services running on standard hardware. The community's first official release, code-named Austin, appeared three months later on , with plans to release regular updates of the software every few months. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform. The cloud stack and open stack modules were merged and released as open source by the NASA Nebula team in concert with Rackspace.
In 2011, developers of the Ubuntu Linux distribution adopted OpenStack with an unsupported technology preview of the OpenStack "Bexar" release for Ubuntu 11.04 "Natty Narwhal". Ubuntu's sponsor Canonical then introduced full support for OpenStack clouds, starting with OpenStack's Cactus release.
OpenStack became available in Debian Sid from the OpenStack "Cactus" release in 2011; the first Debian release to include OpenStack was Debian 7.0 (code name "Wheezy"), which shipped OpenStack 2012.1 (code name: "Essex").
In October 2011, SUSE announced the public preview of the industry's first fully configured OpenStack powered appliance based on the "Diablo" OpenStack release. In August 2012, SUSE announced its commercially supported enterprise OpenStack distribution based on the "Essex" release.
In November 2012, The UK's Government Digital Service (GDS) launched Inside Government based on the OpenNASA v2.0 Government as a Platform (GaaP) model.
In 2012, Red Hat announced a preview of their OpenStack distribution, beginning with the "Essex" release. After another preview release, Red Hat introduced commercial support for OpenStack with the "Grizzly" release, in July 2013.
The OpenStack organization has grown rapidly and is supported by more than 540 companies.
In 2012 NASA withdrew from OpenStack as an active contributor, and instead made the strategic decision to use Amazon Web Services for cloud-based services. In July 2013, NASA released an internal audit citing lack of technical progress and other factors as the agency's primary reason for dropping out as an active developer of the project and focusing instead on the use of public clouds. This report is contradicted in part by remarks made by Ames Research Center CIO, Ray O'Brien.
In December 2013, Oracle announced it had joined OpenStack as a Sponsor and planned to bring OpenStack to Oracle Solaris, Oracle Linux, and many of its products. It followed by announcing Oracle OpenStack distributions for Oracle Solaris and for Oracle Linux using Icehouse on 24 September 2014.
In May 2014, HP announced HP Helion and released a preview of HP Helion OpenStack Community, beginning with the IceHouse release. HP has operated HP Helion Public Cloud on OpenStack since 2012.
At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using Shortest path bridging and OpenStack as an automated campus, extending automation from the data center to the end device, removing manual provisioning from service delivery.
As of March 2015, NASA still makes use of OpenStack private cloud and has RFPs out for OpenStack public cloud support.
Historical names
Several OpenStack projects changed names due to trademark issues.
Neutron was formerly known as Quantum.
Sahara used to be called Savanna.
Designate was previously known as Moniker.
Trove was formerly known as RedDwarf.
Zaqar was formerly known as Marconi.
Release history
OpenStack development
The OpenStack community collaborates around a six-month, time-based release cycle with frequent development milestones.
During the planning phase of each release, the community would gather for an OpenStack Design Summit to facilitate developer working sessions and to assemble plans. These Design Summits would coincide with the OpenStack Summit conference.
Starting with the Pike development cycle the design meetup activity has been separated out into a separate Project Teams Gathering (PTG) event. This was done to avoid the developer distractions caused by presentations and customer meetings that were happening at the OpenStack Summit and to allow the design discussions to happen ahead of the start of the next cycle.
Recent OpenStack Summits have taken place in Shanghai on 4–6 November 2019, Denver on 29 April – 1 May 2019, Berlin on 13–19 November 2018, Vancouver on 21–25 May 2018, Sydney on 6–8 November 2017, Boston on 8–11 May 2017, Barcelona on 25–28 October 2016, and Austin on 25–29 April 2016. Earlier OpenStack Summits took place in Tokyo in October 2015, Vancouver in May 2015, and Paris in November 2014. The summit in May 2014 in Atlanta drew 4,500 attendees — a 50% increase from the Hong Kong summit six months earlier.
Components
OpenStack has a modular architecture with various code names for its components.
Compute (Nova)
Nova is the OpenStack project that provides a way to provision compute instances as virtual machines, real hardware servers (through the use of ironic), and has limited support for system containers. Nova runs as a set of daemons on top of existing Linux servers to provide that service.
Nova is written in Python. It uses many external Python libraries such as Eventlet (concurrent networking library), Kombu (AMQP messaging framework), and SQLAlchemy (SQL toolkit and Object Relational Mapper). Nova is designed to be horizontally scalable: rather than switching to larger servers, operators procure more servers and simply install identically configured services on them.
Due to its widespread integration into enterprise-level infrastructures, monitoring OpenStack performance in general, and Nova performance in particular, at scale has become an increasingly important issue. Monitoring end-to-end performance requires tracking metrics from Nova, Keystone, Neutron, Cinder, Swift and other services, in addition to monitoring RabbitMQ, which is used by OpenStack services for message passing. All these services generate their own log files which, especially in enterprise-level infrastructures, should also be monitored.
Networking (Neutron)
Neutron is an OpenStack project to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., nova). It implements the OpenStack Networking API.
It manages all networking facets for the Virtual Networking Infrastructure (VNI) and the access layer aspects of the Physical Networking Infrastructure (PNI) in the OpenStack environment. OpenStack Networking enables projects to create advanced virtual network topologies which may include services such as a firewall, and a virtual private network (VPN).
Neutron allows dedicated static IP addresses or DHCP. It also allows Floating IP addresses to let traffic be dynamically rerouted.
Users can use software-defined networking (SDN) technologies like OpenFlow to support multi-tenancy and scale. OpenStack networking can deploy and manage additional network services—such as intrusion detection systems (IDS), load balancing, firewalls, and virtual private networks (VPN).
Block storage (Cinder)
Cinder is the OpenStack Block Storage service for providing volumes to Nova virtual machines, Ironic bare metal hosts, containers and more. Some of the goals of Cinder are to be/have:
Component based architecture: Quickly add new behaviors
Highly available: Scale to very serious workloads
Fault-Tolerant: Isolated processes avoid cascading failures
Recoverable: Failures should be easy to diagnose, debug, and rectify
Open Standards: Be a reference implementation for a community-driven api
Cinder volumes provide persistent storage to guest virtual machines (known as instances) that are managed by OpenStack Compute software. Cinder can also be used independently of other OpenStack services as stand-alone software-defined storage. The block storage system manages the creation, replication, snapshot management, and attaching and detaching of block devices to servers.
Identity (Keystone)
Keystone is an OpenStack service that provides API client authentication, service discovery, and distributed multi-tenant authorization by implementing OpenStack's Identity API. It is the common authentication system across the cloud operating system. Keystone can integrate with directory services like LDAP. It supports standard username and password credentials, token-based systems and AWS-style (i.e. Amazon Web Services) logins. The OpenStack keystone service catalog allows API clients to dynamically discover and navigate to cloud services.
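As a minimal sketch of what such an authentication call looks like at the HTTP level, the following Java snippet requests a token from the Identity v3 API using password credentials; the endpoint URL, user, project and password are placeholders, not values from the article.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Illustrative password authentication against Keystone's Identity v3 API (Java 17+).
    public class KeystoneTokenDemo {
        public static void main(String[] args) throws Exception {
            String body = """
                {
                  "auth": {
                    "identity": {
                      "methods": ["password"],
                      "password": {
                        "user": {
                          "name": "demo",
                          "domain": {"name": "Default"},
                          "password": "secret"
                        }
                      }
                    },
                    "scope": {
                      "project": {
                        "name": "demo",
                        "domain": {"name": "Default"}
                      }
                    }
                  }
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://controller:5000/v3/auth/tokens")) // placeholder endpoint
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // On success the token is returned in the X-Subject-Token response header;
            // it is then sent as X-Auth-Token when calling other OpenStack service APIs.
            String token = response.headers().firstValue("X-Subject-Token").orElse("");
            System.out.println("HTTP " + response.statusCode() + ", token length: " + token.length());
        }
    }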
Image (Glance)
The Image service (glance) project provides a service where users can upload and discover data assets that are meant to be used with other services. This currently includes images and metadata definitions.
Images
Glance image services include discovering, registering, and retrieving virtual machine (VM) images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations from simple filesystems to object-storage systems like the OpenStack Swift project.
Metadata Definitions
Glance hosts a metadefs catalog. This provides the OpenStack community with a way to programmatically determine various metadata key names and valid values that can be applied to OpenStack resources.
Object storage (Swift)
Swift is a distributed, eventually consistent object/blob store. The OpenStack Object Store project, known as Swift, offers cloud storage software for storing and retrieving large amounts of data with a simple API. It is built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is well suited to storing unstructured data that can grow without bound.
In August 2009, Rackspace started the development of the precursor to OpenStack Object Storage, as a complete replacement for the Cloud Files product. The initial development team consisted of nine developers. SwiftStack, an object storage software company, is currently the leading developer for Swift with significant contributions from Intel, Red Hat, NTT, HP, IBM, and more.
Dashboard (Horizon)
Horizon is the canonical implementation of OpenStack's Dashboard, which provides a web based user interface to OpenStack services including Nova, Swift, Keystone, etc.
Horizon ships with three central dashboards, a “User Dashboard”, a “System Dashboard”, and a “Settings” dashboard. Between these three they cover the core OpenStack applications and deliver on Core Support.
The Horizon application also ships with a set of API abstractions for the core OpenStack projects in order to provide a consistent, stable set of reusable methods for developers. Using these abstractions, developers working on Horizon don't need to be intimately familiar with the APIs of each OpenStack project.
Orchestration (Heat)
Heat is a service to orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
Workflow (Mistral)
Mistral is a service that manages workflows. A user typically writes a workflow using a workflow language based on YAML and uploads the workflow definition to Mistral via its REST API. The user can then start the workflow manually via the same API or configure a trigger to start the workflow on some event.
Telemetry (Ceilometer)
OpenStack Telemetry (Ceilometer) provides a Single Point Of Contact for billing systems, providing all the counters they need to establish customer billing, across all current and future OpenStack components. The delivery of counters is traceable and auditable, the counters must be easily extensible to support new projects, and agents doing data collections should be independent of the overall system.
Database (Trove)
Trove is a database-as-a-service component that provisions relational and non-relational database engines.
Elastic map reduce (Sahara)
Sahara is a component to easily and rapidly provision Hadoop clusters. Users will specify several parameters like the Hadoop version number, the cluster topology type, node flavor details (defining disk space, CPU and RAM settings), and others. After a user provides all of the parameters, Sahara deploys the cluster in a few minutes. Sahara also provides means to scale a preexisting Hadoop cluster by adding and removing worker nodes on demand.
Bare metal (Ironic)
Ironic is an OpenStack project that provisions bare metal machines instead of virtual machines. It was initially forked from the Nova Baremetal driver and has evolved into a separate project. It is best thought of as a bare-metal hypervisor API and a set of plugins that interact with the bare-metal machines managed by Ironic. By default, it will use PXE and IPMI or Redfish in concert to provision and manage physical machines, but Ironic supports and can be extended with vendor-specific plugins to implement additional functionality.
Since the inception of Ironic, it has spawned several sub-projects to help support additional use cases and capabilities. Some of the more commonly leveraged of these projects include Ironic-Inspector, Bifrost, Sushy, and networking-generic-switch. Ironic-inspector supplies hardware information collection and hardware discovery. Bifrost focuses on the use case of operating without other OpenStack components, and is highlighted on the website ironicbaremetal.org. Sushy is a lightweight Redfish API client library. Networking-generic-switch is a plugin which supports managing switchport configuration for bare metal machines.
Messaging (Zaqar)
Zaqar is a multi-tenant cloud messaging service for Web developers. The service features a fully RESTful API, which developers can use to send messages between various components of their SaaS and mobile applications by using a variety of communication patterns. Underlying this API is an efficient messaging engine designed with scalability and security in mind. Other OpenStack components can integrate with Zaqar to surface events to end users and to communicate with guest agents that run in the "over-cloud" layer.
Shared file system (Manila)
OpenStack Shared File System (Manila) provides an open API to manage shares in a vendor agnostic framework. Standard primitives include ability to create, delete, and give/deny access to a share and can be used standalone or in a variety of different network environments. Commercial storage appliances from EMC, NetApp, HP, IBM, Oracle, Quobyte, INFINIDAT and Hitachi Data Systems are supported as well as filesystem technologies such as Red Hat GlusterFS or Ceph.
DNS (Designate)
Designate is a multi-tenant REST API for managing DNS. This component provides DNS as a service and is compatible with many backend technologies, including PowerDNS and BIND. It does not host DNS itself; rather, its purpose is to interface with existing DNS servers to manage DNS zones on a per-tenant basis.
Search (Searchlight)
The project is no longer actively maintained.
Searchlight provides advanced and consistent search capabilities across various OpenStack cloud services. It accomplishes this by offloading user search queries from other OpenStack API servers by indexing their data into ElasticSearch. Searchlight is being integrated into Horizon and also provides a Command-line interface.
Key manager (Barbican)
Barbican is a REST API designed for the secure storage, provisioning and management of secrets. It is aimed at being useful for all environments, including large ephemeral Clouds.
Container orchestration (Magnum)
Magnum is an OpenStack API service developed by the OpenStack Containers Team making container orchestration engines such as Docker Swarm, Kubernetes, and Apache Mesos available as first class resources in OpenStack. Magnum uses Heat to orchestrate an OS image which contains Docker and Kubernetes and runs that image in either virtual machines or bare metal in a cluster configuration.
Root Cause Analysis (Vitrage)
Vitrage is the OpenStack RCA (Root Cause Analysis) service for organizing, analyzing and expanding OpenStack alarms & events, yielding insights regarding the root cause of problems and deducing their existence before they are directly detected.
Rule-based alarm actions (Aodh)
This alarming service enables the ability to trigger actions based on defined rules against metric or event data collected by Ceilometer or Gnocchi.
Compatibility with other cloud APIs
OpenStack does not strive for compatibility with other clouds' APIs. However, there is some amount of compatibility driven by various members of the OpenStack community for whom such things are important.
The EC2 API project aims to provide compatibility with Amazon EC2
The GCE API project aims to provide compatibility with Google Compute Engine
Governance
OpenStack is governed by the OpenInfra Foundation and its board of directors. The board of directors is made up of representatives of the Platinum sponsors, representatives of the Gold sponsors, and members elected by the Foundation's individual members. The OpenStack Technical Committee is the governing body of the OpenStack open source project. It is an elected group that represents the contributors to the project (developers, operators and end users of the software) and has oversight on all technical matters.
Appliances
An OpenStack appliance is the name given to software that can support the OpenStack cloud computing platform on physical devices such as servers, on virtual machines, or on a combination of the two. Typically a software appliance is a set of software capabilities that can function without an operating system, so it must contain enough of the essential underlying operating system components to work on its own. A strict definition might therefore be: an application that is designed to offer OpenStack capability without the necessity of a separately installed operating system. Applying this strict definition may not be helpful, however, as there is no clear distinction between an appliance and a distribution, and it could be argued that the term appliance is something of a misnomer, since OpenStack itself is often described as a cloud operating system.
Looking at the range of appliances and distributions, one can make the distinction that distributions are toolsets which attempt to provide wide coverage of the OpenStack project scope, whereas an appliance has a narrower focus, concentrating on fewer projects. Vendors have been heavily involved in OpenStack since its inception, and have since developed and are marketing a wide range of appliances, applications and distributions.
Vendors
A large number of vendors offer OpenStack solutions, meaning that an organization wishing to deploy the technology has a complex task in
selecting the vendor offer that best matches its business requirements. Barb Darrow offered this overview in Fortune on 27 May 2015, pointing out that there may be some consolidation in the market that will clarify those decisions.
There are other aspects that users need to consider, for example, the real costs involved. Some vendors will make an offer which encompasses most of the OpenStack projects; others will only offer certain components. Other considerations include the extent of proprietary code used to manage a lack of maturity in an OpenStack component, and to what extent that encourages vendor lock-in.
The most authoritative information on vendor products is at the Open Infrastructure Foundation website.
Challenges to implementation
OpenStack is a complex entity, and adopters face a range of challenges when trying to implement it in an organization. For many organizations trying to implement their own projects, a key issue is the lack of available skills. In an article on The New Stack, Atul Jha identifies five challenges any organization wishing to deploy OpenStack will face.
Installation challenges
OpenStack is a suite of projects rather than a single product, and because each of the various applications needs to be configured to
suit the user's requirements, installation is complex and requires a range of complementary skill-sets for an optimum set-up. One obvious solution would be to take a complete vendor supplied package containing hardware and software, although due diligence is essential.
Documentation
This is more a function of the nature of documentation with open source products than OpenStack per se, but with more than 25 projects, managing document quality is always going to be challenging.
Upgrading OpenStack
One of the main objectives of using cloud-type infrastructure is to offer users not only high reliability but also high availability, something that public cloud suppliers will offer in service level agreements.
Due to OpenStack's multi-project development approach, the complexity involved in synchronising the different projects during an upgrade may mean that downtime is unavoidable.
Long term support
It is quite common for a business to keep using an earlier release of software for some time after it has been superseded, for the reasons referred to above. However, there is little incentive for developers in an open source project to provide support for superseded code, and OpenStack itself has formally discontinued support for some old releases.
Given the above challenges the most appropriate route for an organization wishing to implement OpenStack would be to go with a vendor, and source an OpenStack appliance or distribution.
Deployment models
As the OpenStack project has matured, vendors have pioneered multiple ways for customers to deploy OpenStack:
OpenStack-based Public Cloud A vendor provides a public cloud computing system based on the OpenStack project.
On-premises distribution In this model, a customer downloads and installs an OpenStack distribution in their internal network. See Distributions.
Hosted OpenStack Private Cloud A vendor hosts an OpenStack-based private cloud: including the underlying hardware and the OpenStack software.
OpenStack-as-a-Service A vendor hosts OpenStack management software (without any hardware) as a service. Customers sign up for the service and pair it with their internal servers, storage and networks to get a fully operational private cloud.
Appliance based OpenStack Nebula was a vendor that sold appliances that could be plugged into a network which spawned an OpenStack deployment.
Distributions
Bright Computing
Canonical (Ubuntu)
Debian
HPE (which was spin-merged to Micro Focus/Suse)
IBM
Mirantis
Oracle OpenStack for Oracle Linux, or O3L
Oracle OpenStack for Oracle Solaris
Red Hat
Stratoscale
VMware Integrated OpenStack (VIO)
See also
Cloud-computing comparison
Cloud Foundry
OpenShift
References
External links
2010 software
Cloud infrastructure
Free software for cloud computing
Free software programmed in Python
Virtualization-related software for Linux |
41616 | https://en.wikipedia.org/wiki/Queuing%20delay | Queuing delay | In telecommunication and computer engineering, the queuing delay or queueing delay is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address.
This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission) the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet so averages and statistics are usually generated when measuring and evaluating queuing delay.
As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ) where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula can be used when no packets are dropped from the queue.
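As a worked illustration with assumed numbers (not taken from any measurement): if a router can service packets at a rate of 1,000 packets per second and packets arrive at an average rate of 800 packets per second, the formula gives

\[ W = \frac{1}{\mu - \lambda} = \frac{1}{1000 - 800}\ \mathrm{s} = 5\ \mathrm{ms}, \]

and the delay grows without bound as the arrival rate approaches the service rate, which is the classic delay curve referred to above.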
The maximum queuing delay is proportional to buffer size. The longer the line of packets waiting to be transmitted, the longer the average waiting time is. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router which receives packets at too high a rate may experience a full queue. In this case, the router has no other option than to simply discard excess packets.
When the transmission protocol uses the dropped-packets symptom of filled buffers to regulate its transmit rate, as the Internet's TCP does, bandwidth is fairly shared at near theoretical capacity with minimal network congestion delays. Absent this feedback mechanism, the delays become both unpredictable and rise sharply (a symptom also seen as freeways approach capacity; metered onramps are the most effective solution there, just as TCP's self-regulation is the most effective solution when the traffic is packets instead of cars). This result is both hard to model mathematically and quite counterintuitive to people who lack experience with mathematics or real networks. Failing to drop packets, choosing instead to buffer an ever-increasing number of them, produces bufferbloat.
In Kendall's notation, the M/M/1/K queuing model, where K is the size of the buffer, may be used to analyze the queuing delay in a specific system. The M/M/1/K model is the appropriate choice when packets can be dropped from a full queue, and it is the most basic and important queuing model for network analysis.
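For reference, the standard textbook results for the M/M/1/K queue (stated here as a sketch, for utilization \(\rho = \lambda/\mu\) with \(\rho \neq 1\)) give the probability that an arriving packet finds the buffer full and is dropped, and the mean number of packets in the system:

\[ P_K = \frac{(1-\rho)\,\rho^{K}}{1-\rho^{K+1}}, \qquad L = \frac{\rho}{1-\rho} - \frac{(K+1)\,\rho^{K+1}}{1-\rho^{K+1}}. \]

The average delay experienced by packets that are actually accepted then follows from Little's law as \(W = L / \bigl(\lambda\,(1-P_K)\bigr)\).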
See also
Broadcast delay
Delay encoding
End-to-end delay
Latency (engineering)
Little's law – queueing formula
Network delay
Packet loss
Processing delay
Queueing theory
Transmission delay
References
Wireless Communications; Theodore S. Rappaport
Computer networking
Telecommunications engineering
Computer engineering
Queueing theory |
12892261 | https://en.wikipedia.org/wiki/Museum%20Madness | Museum Madness | Museum Madness is an educational computer game for MS-DOS and Macintosh developed by Novotrade for MECC and released in 1994. The game is set in an American natural history museum and aims to teach the player about topics such as technology, geology, space, American history, and prehistory. PC Magazine described the game as having kids learn about educational topics (e.g. ecology) while making logical deductions and solving puzzles in sequence.
Plot
The game starts in the bedroom of an unnamed American high school teenage boy who is seated at his computer, attempting to access the National Museum Interactive Service System, only to see that it is offline for repair. An interactive robot from the museum named MICK (Museum Interactive Computer Kiosk) appears onscreen and talks to the boy, explaining that the museum is in danger of losing its secrets forever.
The boy appears to have an extraordinary relationship with MICK as he alone understands that MICK can talk back to him, which he uses to learn more about the contents of the museum. MICK recognizes this understanding and thus asks the boy for help to save the museum. MICK explains that the exhibits have come to life and are acting very strangely. He announces his suspicion that a virus has infected the system while the museum was being converted to complete autonomous computer control.
The player takes the role of the boy and enters the museum. Through the game, the boy visits each of the exhibits, solving mysteries and puzzles by talking to the historical characters, rearranging objects, trading objects with characters and generally putting things back the way they were.
The game is educationally-based, and the player learns both from the many museum-like information cards placed throughout the exhibits, as well as from solving the problems in the exhibits themselves. Along the way, the boy is aided by MICK, who follows him through the exhibits, instructs him and gives additional help and advice on request.
Once the 25 exhibits are restored, the virus itself must be destroyed, which is the final puzzle to be solved.
Exhibits and objectives
The exhibits in the museum, which can be entered by clicking a box on the map in the Main Hall, and can be revisited if not completed (the user can exit an exhibit at any time and return later), are shown by the map to be split into five sections:
Robots: An out-of-control robot has built itself with stolen parts from the machines in the exhibit, which must be returned.
Computer Technology: The exhibit computer's circuits are messed up, and must be repaired.
Discovery of Radio: The protagonist must help Guglielmo Marconi, Heinrich Hertz, Alexander Graham Bell and Reginald Fessenden with their experiments so they can share their devices and invent the radio.
Energy Technology: The energy sources (polluting and non-polluting) are out of balance, and must be restored to their right values to save the exhibit's ecology.
Simple Machines: An animatronic kangaroo is stuck on a high shelf after destroying the exhibit's machines, and the protagonist must fix them in order to rescue the kangaroo.
How Big is the Universe? - The exhibit's computer simulation screens are jumbled up, so the protagonist must rearrange them.
The Solar System: Five stars in the exhibit are out of place, and must be returned to their right positions.
Rockets and Computers - The protagonist must fix a computer showing the history of rockets, then help a rocket connect with a space station, and navigate a space probe through an asteroid field.
Air-Powered Flight: The giant fan in the exhibit has blown all the aircraft away, including the airship, so the protagonist must clean up the mess and put everything in their right places.
Wright Brothers: The Wright Brothers are having trouble inventing the airplane, so the protagonist must help them.
Transcontinental Railroad - The protagonist must decide whether to help the Central Pacific Railroad or Union Pacific Railroad reach Promontory Point, Utah by unscrambling a railroad map.
Salem Witch Trials: The virus has deleted the proof that Sarah Good, who has been convicted of witchcraft, did not create three specters the other women of Salem Village have seen, so the protagonist must prove it himself.
American Revolutionary War: The virus has scrambled the animatronic George Washington's memories, causing him to support the British oppression instead. The protagonist must find the various documents to convince him otherwise.
Ellis Island: The protagonist is put into the role of an immigrant and must successfully reach the U.S. as well as pass his inspection at Ellis Island.
Louisiana Purchase - The protagonist is tasked with helping Thomas Jefferson negotiate with Napoleon Bonaparte to buy the Louisiana Territory for the United States.
Hall of Dinosaurs: The virus has infected the exhibit's assembly computer, scrambling the dinosaur skeletons, so the protagonist must re-assemble them properly.
Ocean Life: The protagonist must fix a leaking sewage pipe to restore the exhibit's marine life.
Hall of Animal Habitats: The animals are missing from their respective habitats and must be returned.
Hall of Ecology - The virus has disrupted the exhibit's food web, and the protagonist must restore balance to it.
The Earth's Geology - The protagonist must unscramble the exhibit's mixed-up geological system.
Prehistoric People: The animatronic woolly mammoth has escaped from its pen, and the protagonist must get the cave people to help lure it back.
The Development of Writing: the protagonist must collect notes from all the writing in the exhibit to help a scribe translate a message left to him by his master.
Knights, Heraldry and Jousting - A medieval jousting tournament is underway, and the protagonist must recruit a knight that can defeat the king's champion to end it.
Galileo's Telescope: Galileo is missing the components of his telescope, so the protagonist has to gather them from the rest of the exhibit.
Industrial Revolution - The protagonist must rebuild the machines in the exhibit.
Order of gameplay
The player begins by entering the museum through the basement, working out the basement door's passcode. The player navigates the way into the Main Hall of the museum by a series of numbered doors with corresponding keys which are to be located in the maze of the basement (this introductory location can be skipped if desired).
Once in the Main Hall, the player must locate some batteries to power MICK, which can be found in one of the museum tour tape players. Additionally, the map of the museum on the wall in the Main Hall is found to be in pieces and needs to be reconstructed in order to continue.
Then the player must choose an exhibit to try to repair, using the museum's map to select one. After attempting to repair an exhibit, the player is returned to the Main Hall to select another exhibit. Not until every single one of the exhibits has been returned to normal can the player progress in the game.
When each of the exhibits have been restored, the player returns to the Main Hall to find MICK missing and a cassette tape in his place on the floor. Using one of the museum tape players from which the batteries used to power MICK were borrowed, the player can listen to the tape (shown as on-screen text), which consists of a message from MICK warning the player to go home; the player decides to follow MICK into the basement and finds him in pieces on a bench in the workshop. Upon reassembling MICK, the player must then access the computer to try to stop the virus by answering general knowledge questions. If the player is successful, the virus self-destructs and the museum is saved, and the game is complete.
References
External links
Museum Madness Hint Sheet
1994 video games
DOS games
Classic Mac OS games
Children's educational video games
Video games set in museums
Museum educational materials
Video games developed in Hungary
The Learning Company games |
57524596 | https://en.wikipedia.org/wiki/Anne%20Evans%20%28poet%29 | Anne Evans (poet) | Anne Evans (4 June 1820–1870) was an English poet and composer. She has been described as "a witty poet and skilled composer of dance songs". Her Poems and Music were published posthumously in 1880.
Background
Born on 4 June 1820, either at Britwell, Berkshire, or Sandhurst, Anne Evans was the eldest daughter of Arthur Benoni Evans (1781–1854) and his wife Ann [sic], née Norman. Her father was a noted linguist and numismatist, who was a professor of classics and history at the Royal Military Academy Sandhurst, and her grandfather was the Welsh-born mathematician and astronomer Lewis Evans (1755–1827). Among her brothers and sisters, Sebastian Evans (1830–1909) also wrote poetry.
The family moved to Britwell, and in 1829 to Market Bosworth, Leicestershire, where her father became headmaster of Dixie Grammar School. Anne was educated at home. After her father's death in 1854, she moved to 16 Kensington Square in London. In the 1850s she spent some time as a companion, in England and abroad, to the two daughters of the novelist William Makepeace Thackeray, the elder being the future novelist Anne Thackeray Ritchie.
Evans never married. Her health declined in 1867 and she remained an invalid until her death on 19 February 1870. She was survived by her mother, who died in 1883 aged 91.
Writings
Evans's earliest extant poem was "Flora's Lesson". She became an emotional poet by conviction. As she once remarked, "If anyone expects to find poetry without susceptibility, let him look at the sky for a rainbow without rain."
Her works included sonnets, a verse drama called Maurice Clifton, and two ballads, "Sir Ralph Duguay" and "Orinda". She was noted also for her epigrams and witty definitions. Her work was eventually published in an 1880 edition of her Poems and Music. This included a memorial preface by Anne Thackeray Ritchie, in which she described her as a "diffident woman, who... unconsciously touched and influenced us all by her intense sincerity of heart and purpose."
References
1820 births
1870 deaths
English women poets
English-language poets
Victorian women writers
Writers from Berkshire |
6479641 | https://en.wikipedia.org/wiki/Arthur%20Burks | Arthur Burks | Arthur Walter Burks (October 13, 1915 – May 14, 2008) was an American mathematician who worked in the 1940s as a senior engineer on the project that designed the ENIAC, the first general-purpose electronic digital computer. Decades later, Burks and his wife Alice Burks argued that the subject matter of the ENIAC had been derived from the work of John Vincent Atanasoff. Burks was also for several decades a faculty member at the University of Michigan in Ann Arbor.
Early life and education
Burks was born in Duluth, Minnesota. He earned his B.A. in mathematics and physics from DePauw University in Greencastle, Indiana in 1936 and his M.A. and Ph.D. in philosophy from the University of Michigan in Ann Arbor in 1937 and 1941, respectively.
The Moore School
The summer after obtaining his Ph.D., the young Dr. Burks moved to Philadelphia, Pennsylvania and enrolled in the national defense electronics course offered by the University of Pennsylvania's Moore School of Electrical Engineering; his laboratory teaching assistant was J. Presper Eckert, a graduate student at the Moore School; a fellow student was John Mauchly, the chairman of the physics department at Ursinus College in nearby Collegeville, Pennsylvania. Both Burks and Mauchly sought and obtained teaching positions at the Moore School the following fall, and roomed together throughout the academic year.
The ENIAC
When Mauchly and Eckert's proposed concept for an electronic digital computer was funded by the U.S. Army's Ballistics Research Laboratory in June 1943, Burks was added to the design team. Among his principal contributions to the project was the design of the high-speed multiplier unit. (Also during this time, Burks met and married Alice Rowe, a human computer employed at the Moore School.)
In April 1945, with John Grist Brainerd, Burks was charged with writing the technical reports on the ENIAC for publication. Also during 1945 Burks assisted with the preliminary logical design of the EDVAC in meetings attended by Mauchly, Eckert, John von Neumann, and others.
Burks also took a part-time position as a philosophy instructor at Swarthmore College during 1945–1946.
The IAS
On March 8, 1946 Burks accepted an offer by von Neumann to join the computer project at the Institute for Advanced Study in Princeton, New Jersey, and joined full-time the following summer. (Already on the project was another member of the ENIAC team, Herman Goldstine. Together, Goldstine and Burks gave nine of the Moore School Lectures in Summer 1946.) During his time at the IAS, Burks worked to expand von Neumann's theory of automata.
The University of Michigan
After working on this project, Burks relocated to Ann Arbor, Michigan in 1946 to join the faculty of the University of Michigan, first as an assistant professor of philosophy, and as a full professor by 1954. With Irving Copi he sketched the necessary design for general purpose computing.
Burks helped found the university's computer science department, first as the Logic of Computers group in 1956, of which he was the director, then as a graduate program in 1957, and then as an undergraduate program within the new Department of Computer and Communication in 1967, which he chaired until 1971. He declined a position heading up a different university's computing center, citing his primary interest as the purely theoretical aspects of computing machines. He was awarded the Louis E. Levi Medal in 1956.
Burks' doctoral students include John Holland, who in 1959 was the first student to receive a Ph.D. in computer science from Michigan, and possibly the first in the world.
Burks served as president of the Charles S. Peirce Society in 1954–1955. He edited the final two volumes (VII–VIII), published 1958, of the Collected Papers of Charles Sanders Peirce and, over the years, wrote published articles on Peirce.
Restoration of parts of the ENIAC
In the 1960s he was presented with the opportunity to acquire four units of the original ENIAC, which had been rusting in a storage Quonset hut in Aberdeen, Maryland. He ran the units through a car wash before restoring them and donating them to the University of Michigan. They are currently on display in the entryway of the Computer Science Building.
Patent dispute
In 1964 Burks was approached by attorney Sy Yuter and asked to join T. Kite Sharpless and Robert F. Shaw in litigation that would add their names as inventors to the ENIAC patent, which would allow them to profit from the sale of licenses to the premier electronic digital computer apart from Sperry Rand, the company that owned the Eckert-Mauchly interest in the patent and was at that time seeking royalties from other computer manufacturers. This endeavor was never successful; in the 1973 decision in Honeywell v. Sperry Rand, U.S. District Judge Earl R. Larson ruled—even as he invalidated the patent—that only Mauchly and Eckert had invented the ENIAC, and that Burks, Sharpless, and Shaw could not be added as inventors.
The BACH Group
In the 1970s Burks began meeting with Bob Axelrod, Michael Cohen, and John Holland, researchers with interests in interdisciplinary approaches to studying complex adaptive systems. Known as the BACH group (an acronym of their surnames), it came to include, among others, Pulitzer Prize winner Douglas Hofstadter, evolutionary biologist William Hamilton, microbiologist Michael Savageau, mathematician Carl Simon and computer scientists Reiko Tanese, Melanie Mitchell and Rick Riolo. The BACH group continues to meet irregularly as part of the University of Michigan's Center for the Study of Complex Systems (CSCS).
In the 1970s and 1980s Burks, working with his wife Alice, authored a number of articles on the ENIAC, and a book on the Atanasoff–Berry Computer.
As professor emeritus
In 1990, Burks donated a portion of his papers to the university's Bentley Historical Library, where they are accessible to researchers.
Burks died May 14, 2008 in an Ann Arbor, Michigan nursing home from Alzheimer's disease.
See also
Reverse Polish notation
Mantissa (floating point number)
Bibliography
Burks, Arthur W., Goldstine, Herman H., and von Neumann, John (1946), Preliminary discussion of the logical design of an electronic computing instrument, 42 pages, Institute for Advanced Study, Princeton, New Jersey, June 1946, 2nd edition 1947. Eprint.
Burks, Arthur W. and Wright, Jesse Bowdle (1952), Theory of Logical Nets. Amazon says: published by Burroughs Adding Machine Co.; Google Books says: published by University of Michigan Engineering Research Institute; 52 pages. Deep Blue Eprint.
Burks, Arthur W. and Copi, Irving M. (1954), The logical design of an idealized general-purpose computer, Amazon says: published by Burroughs Corporation Research Center; Google Books says: published by University of Michigan Engineering Research Institute, 154 pages. Deep Blue Eprint.
Burks, Arthur W. (1956), The logic of fixed and growing automata, Engineering Research Institute, University of Michigan, 34 pages.
Burks, Arthur W. and Wang, Hao (1956), The logic of automata, Amazon says: published by Air Research and Development Command; Google Books says: published by University of Michigan Engineering Research Institute; 60 pages. Deep Blue Eprint.
Peirce, Charles Sanders and Burks, Arthur W., ed. (1958), the Collected Papers of Charles Sanders Peirce Volumes 7 and 8, Harvard University Press, Cambridge, MA, also Belknap Press (of Harvard University Press) edition, vols. 7-8 bound together, 798 pages, online via InteLex, reprinted in 1998 Thoemmes Continuum.
Burks, Arthur W. (1971), Essays on Cellular Automata, University of Illinois Press, 375 pages.
Burks, Arthur W. (1978), Review of The New Elements of Mathematics by Charles S. Peirce, Carolyn Eisele, ed., in the Bulletin of the American Mathematical Society, vol. 84, no. 5, September 1978, Project Euclid Eprint PDF 791KB.
Burks, Arthur W. and Burks, Alice R. (1981), "The ENIAC: First General-Purpose Electronic Computer" in Annals of the History of Computing, vol. 3, no. 4, October 1981, pp. 310–399.
Burks, Arthur W. (1986), Robots and free minds, College of Literature, Science, and the Arts, University of Michigan, 97 pages.
Salmon, Merrilee H., ed. (1990), The Philosophy of Logical Mechanism: Essays in honor of Arthur W. Burks with his responses, Kluwer Academic, Dordrecht, Holland 1990, 552 pages.
Burks, Arthur W. (1996), "Peirce's evolutionary pragmatic idealism", Synthese, Volume 106, Number 3, 323–372.
Burks, Alice R. (2003), Who Invented the Computer?: The Legal Battle That Changed History, foreword by Douglas R. Hofstadter, Prometheus Books, Amherst, NY, 415 pages, hardcover, Prometheus catalog page.
A number of articles by Arthur W. Burks are listed on page 599 in index of Studies in the Logic of Charles Sanders Peirce by Nathan Houser, Don D. Roberts, James Van Evraof, Google Book Search Beta page 599.
References
External links
Oral history interview with Alice R. Burks and Arthur W. Burks. Charles Babbage Institute, University of Minnesota.
20th-century American mathematicians
21st-century American mathematicians
Institute for Advanced Study visiting scholars
1915 births
2008 deaths
Cellular automatists
DePauw University alumni
University of Michigan College of Literature, Science, and the Arts alumni
University of Michigan faculty |
52169976 | https://en.wikipedia.org/wiki/Cloudbric | Cloudbric | Cloudbric is a cloud-based web security provider based in Seoul, South Korea. It offers a WAF, DDoS protection, and SSL solution and protects websites from SQL injection, cross-site scripting, identity theft, website defacement, and application layer DDoS attacks.
History
In early 2015, Cloudbric launched as an in-house venture of Penta Security Systems Inc. with the idea of creating a cloud web security service/web application firewall accessible to all. What first started as an in-house project grew to a global service, and on December 1, 2017, Cloudbric Corporation became its own company. At the time of the spin-off, Cloudbric had acquired 27 IDCs (Internet Data Centers), 50 partnerships, and 8,000 user sign-ups.
In November 2017, Cloudbric launched Cloudbric Labs, a collection of free web security resources and services, available for use and integration for all users across the web. It currently consists of BlackIPedia (an IP reputation service), Threat Index (a database of web vulnerabilities), and WAFER (a WAF evaluator that tests for performance and accuracy).
In May 2018, Cloudbric unveiled plans to launch a reverse ICO initiative. With a working product, Cloudbric plans to build on its web security offerings through the introduction of a decentralized, user-led security platform.
Components
Cloudbric’s pricing model is based on monthly website traffic rather than premium service features, meaning all users have access to Cloudbric’s comprehensive suite of web security services by default. Cloudbric’s WAF operates as a proxy to detect and filter malicious attacks, requiring its customers to change their website's Domain Name System (DNS).
WAF – Protection against OWASP Top 10 vulnerabilities and more; uses a logic-based detection engine
DDoS Protection – Layer 3, 4 and 7 DDoS protection
SSL – Free SSL to all users, ensuring secure web communication between the web and web server
Awards and recognition
Awarded Best SME Security Solution at the 2016 SC Magazine Awards in Europe.
Awarded Gold for Cybersecurity Project of the Year (Asia-Pacific) for Cloudbric Labs at 2018 Cybersecurity Excellence Awards.
Awarded Bronze for Website Security at the 2018 Cybersecurity Excellence Awards.
Awarded Silver for Security Startup of the Year at the 14th Annual Info Security PG’s 2018 Global Excellence Awards.
Awarded Silver Stevie® Award winner for the category of Innovation in Technology Development at the fifth annual Asia-Pacific Stevie Awards.
References
External links
ICO homepage
2015 establishments in South Korea
Computer security companies
Computer security software companies
Software companies of South Korea
South Korean brands |
94217 | https://en.wikipedia.org/wiki/AK-74 | AK-74 | The AK-74 (Russian: or "Kalashnikov automatic rifle model 1974") is an assault rifle designed by Soviet small arms designer Mikhail Kalashnikov in 1974. It is chambered for the 5.45×39mm cartridge, which replaced the 7.62×39mm cartridge of Kalashnikov's earlier automatic weapons for the Soviet armed forces.
The rifle first saw service with Soviet forces in the 1979 Afghanistan conflict. The head of the Afghan bureau of the Pakistani Inter-Services Intelligence claimed that the CIA paid $5,000 for the first AK-74 captured by the Afghan mujahideen during the Soviet–Afghan War.
, most countries of the former Soviet Union use the rifle. Licensed copies were produced in Bulgaria (AK-74, AKS-74 and AKS-74U), and in the former East Germany (MPi-AK-74N, MPi-AKS-74N, MPi-AKS-74NK).
Design details
The AK-74 was designed by А. D. Kryakushin's group under the design supervision of Mikhail Kalashnikov. It is an adaptation of the 7.62×39mm AKM assault rifle and features several important design improvements. These improvements were primarily the result of converting the rifle to the intermediate-calibre high velocity 5.45×39mm cartridge. In fact, some early models are reported to have been converted AKMs, re-barreled to 5.45×39mm. Compared with the preceding AKM, the AK-74 has better effective range, firing accuracy (a main development goal), and reliability. The AK-74 and AKM share an approximate 50% parts commonality (interchangeable most often are pins, springs and screws).
Operating mechanism
The rifle's operation during firing and reloading is identical to that of the AKM. After ignition of the cartridge primer and propellant, rapidly expanding propellant gases are diverted into the gas cylinder above the barrel through a vent near the muzzle. The build-up of gases inside the gas cylinder drives the long-stroke piston and bolt carrier rearward and a cam guide machined into the underside of the bolt carrier along with an ejector spur on the bolt carrier rail guide, rotates the bolt approximately 35° and unlocks it from the barrel extension via a camming pin on the bolt. The moving assembly has about of free travel which creates a delay between the initial recoil impulse of the piston and the bolt unlocking sequence, allowing gas pressures to drop to a safe level before the seal between the chamber and the bolt is broken. Like previous Kalashnikov-pattern rifles, the AK-74 does not have a gas valve; excess gases are ventilated through a series of radial ports in the gas cylinder. Since the Kalashnikov operating system offers no primary extraction upon bolt rotation, the 5.45×39mm AK-74 bolt has a larger extractor claw than the 7.62×39mm AKM for increased extraction reliability. Other minor modifications were made to the bolt and carrier assembly.
Barrel
The rifle received a new barrel with a chrome-lined bore and 4 right-hand grooves at a 200 mm (1:7.87 in) or 37 calibres rifling twist rate. The front sight base and gas block were redesigned. The gas block contains a gas channel that is installed at a 90° angle in relation to the bore axis to reduce bullet shear at the port hole. A pair of support brackets are cast into the gas block assembly and are used to attach a BG-15c or GP-25 underslung 40 mm grenade launcher. Like the AK-47 and AKM, the muzzle is threaded for the installation of various muzzle devices such as the standard muzzle brake or a blank-firing adaptor, while a spring-loaded detent pin held in the front sight post prevents them from unscrewing while firing. However, the muzzle threads have been relocated to the front sight base for both easier and more economic replacement in case of thread damage. The distinctive standard-issue muzzle brake features a large expansion chamber, two symmetrical vertical cuts at the forward end of the brake and three asymmetrically positioned vent holes to counteract muzzle rise and climb, as well as lateral shift to the right, much like the AKM's offset muzzle brake. A flat plate near the end of the brake produces a forward thrust when emerging exhaust gases strike its surface, greatly reducing recoil. The muzzle brake prevents backblast from reaching the firer, although it is reported to be harsh on bystanders as the muzzle gases are dispersed to the sides. The standard-issue AK-74 muzzle brake has been subtly revised several times since the 1970s.
Sights
Iron sights
The AK-74 uses an adjustable notched rear tangent iron sight calibrated in increments from . The front sight is a post adjustable for elevation in the field. Horizontal adjustment requires a special drift tool and is done by the armoury before issue or, if the need arises, by an armourer after issue. The sight line elements are approximately over the bore axis. The "point-blank range" battle zero setting "П" (standing for постоянная—constant) on the 5.45×39mm AK-74 rear tangent sight element corresponds to a different zero than that used on 7.62×39mm AKs. For the AK-74 combined with the 7N6 or 7N10 service cartridges, the 400 m battle zero setting point-blank range limits the apparent "bullet rise" within approximately under the line of sight. At the corresponding maximum point-blank range the bullet will have dropped to approximately relative to the line of sight. Soldiers are instructed to fire at any target within this range by simply placing the sights on the center of mass (the belt buckle, according to Russian and former Soviet doctrine) of the enemy target. Any errors in range estimation are tactically irrelevant, as a well-aimed shot will hit the torso of the enemy soldier.
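The interplay between sight height, zero distance and bullet drop can be illustrated with a simple drag-free flat-fire model. The muzzle velocity, sight height and zero range in the following Python sketch are illustrative assumptions rather than published AK-74 figures, and a vacuum model understates real-world drop, so the output is only meant to show why a battle-zero setting keeps the trajectory close to the line of sight over the whole point-blank range:

G = 9.80665  # gravitational acceleration, m/s^2

def height_vs_line_of_sight(x, zero, v0, sight_h):
    """Bullet height relative to the line of sight at range x (metres),
    using a drag-free flat-fire model: drop = G * t**2 / 2 with t = x / v0."""
    drop = lambda r: 0.5 * G * (r / v0) ** 2
    slope = (sight_h + drop(zero)) / zero   # launch slope that zeroes the path at `zero`
    return -sight_h + slope * x - drop(x)

# Illustrative assumptions only -- not published AK-74 data.
V0 = 880.0       # assumed muzzle velocity, m/s
SIGHT_H = 0.06   # assumed sight height over the bore, m
ZERO = 400.0     # battle-zero distance, m

for rng in range(0, 401, 50):
    h = height_vs_line_of_sight(rng, ZERO, V0, SIGHT_H)
    print(f"{rng:4d} m: {h * 100:+6.1f} cm relative to the line of sight")

With air drag included, the mid-range rise is larger and the far-range drop steeper, which is why published point-blank-range figures come from measured ballistic tables rather than a vacuum model of this kind.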
Optical sights
While most Russian and CIS armed forces use the AK-74 in its basic configuration with iron sights, many magnified and non-magnified optical sights are available for designated marksmen and other special purpose troops in their respective militaries.
For the 5.45×39mm AK-74, the East German Zeiss ZFK 4×25, 1P29, Belorussian BelOMO PO 3.5×21P, PO 4×24P and the 1P78 Kashtan dedicated side rail mounted optical sights were developed. These optical sights are primarily designed for rapid target acquisition and first round hits out to 400 m, but by various means these optical sights also offer bullet drop compensation (BDC) (sometimes referred to as ballistic elevation) for aiming at more distant targets. The BDC feature compensates for the effect of gravity on the bullet at given distances (referred to as "bullet drop") in flat fire scenarios. The feature must be tuned for the particular ballistic trajectory of a particular combination of gun and cartridge at a predefined muzzle velocity and air density. Since the usage of standardized ammunition is an important prerequisite to match the BDC feature to the external ballistic behaviour of the employed projectiles, these military optical sights are intended to assist with field shooting at varying medium to longer ranges rather than precise long range shots.
The standard Russian side rail mounted optical sight was the 4×26 1P29 Universal sight for small arms. It was copied from and hence similar to the British SUIT (Sight Unit Infantry, Trilux). When mounted the 1P29 sight is positioned centered above the receiver at a height that allows the use of the iron sights. It weighs 0.8 kg, offers 4× magnification with a field of view of 8° and 35 mm eye relief. The 1P29 is issued with a canvas pouch, a lens cleaning cloth, combination tool, two rubber eyecups, two eyecup clamps and three different bullet drop compensation (BDC) cams for the AK-74/AN-94, RPK-74 and PK machine gun. The 1P29 is intended for quickly engaging point and area targets at various ranges and is zeroed for both windage and elevation at . On the right side of the field of view a stadiametric rangefinder is incorporated that can be used to determine the distance from a tall object from . The reticle is an inverted aiming post in the top half of the field of view and is tritium-illuminated for low-light condition aiming.
The current Russian standard side rail mounted optical sight for the AK-74M is the 2.8×17 1P78 Kashtan, an aiming optic more similar to the American ACOG. When mounted the 1P78 sight is positioned centered above the receiver. It weighs 0.5 kg, offers 2.8× magnification with a field of view of 13° and 32 mm eye relief. The 1P78 comes in several versions for the AK-74 (1P78-1), RPK-74 (1P78-2), AKM (1P78) and RPK (1P78-3). The 1P78 is intended for quickly engaging point and area targets at various ranges and is zeroed for both windage and elevation at . A stadiametric rangefinder is incorporated that can be used to determine the distance for a soldier sized target from . The reticle consist of a main 400 m "chevron" (^), a holdover dot and smaller additional holdover chevrons for and and is tritium-illuminated for low-light condition aiming.
New features
The AK-74 was equipped with a new buttstock, handguard (which retained the AKM-type finger swells) and gas cylinder. The stock has a shoulder pad different from that on the AKM; the AK-74 pad is rubber and serrated for improved seating against the shooter. In addition, there are lightening cuts on each side of the buttstock. The buttstock, lower handguard and upper heatguard were first manufactured from laminated wood; this later changed to a synthetic, plum or dark brown colored fiberglass.
The AK-74 gas tube has a spring washer attached to its rear end designed to retain the gas tube more securely. The lower handguard is fitted with a leaf spring that reduces play in the rifle's lateral axis by keeping the wood tensioned between the receiver and the handguard retainer. The receiver remains nearly identical to that of the AKM; it is a U-shaped thick sheet steel pressing supported extensively by pins and rivets. The internal guide rails on which the bolt carrier travels are stamped and spot welded to the inside of the receiver housing. Minor changes were made to the front barrel and rear stock trunnions as well as the magazine well. All external metal surfaces are coated with a glossy black enamel paint.
Accessories
Accessories supplied with the military version of the rifle include a 6H4 or 6H5 type bayonet, a quick-loading device, three spare magazines, four 15-round stripper clips, maintenance kit, cleaning rod and sling. The bayonet is installed by slipping the muzzle ring around the flash hider and latching the handle down on the bayonet lug under the front sight base. The 6H5 AK-74 bayonet introduced in 1983 represents a further refinement of the 6H4 AKM bayonet. It introduced a radical blade cross-section, that has a flat milled on one side near the edge and a corresponding flat milled on the opposite side near the false edge. The blade has a new spear point and an improved one-piece molded plastic grip making it a more effective fighting knife. It also has saw-teeth on the false edge and the usual hole for use as a wire-cutter.
5.45×39mm cartridge
Relatively small sized, light weight, high velocity military service cartridges like the 5.45×39mm allow a soldier to carry more ammunition for the same weight compared with their larger and heavier predecessor cartridges, have favourable maximum point-blank range or "battle zero" characteristics and produce relatively low bolt thrust and free recoil impulse, favouring light weight arms design and automatic fire accuracy. Tests measured the free recoil energy delivered by the 5.45×39mm AK-74 rifle at , compared with delivered by the 7.62×39mm in the AKM.
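Free recoil figures of this kind follow from conservation of momentum: the rifle's rearward momentum equals the combined momentum of the bullet and the escaping propellant gases. The Python sketch below shows the calculation; the rifle, bullet, charge and gas-velocity values are rough illustrative assumptions, not the figures used in the cited tests:

def free_recoil_energy(rifle_kg, bullet_kg, muzzle_v, charge_kg, gas_v):
    """Free recoil energy in joules: rearward rifle momentum equals
    bullet momentum plus propellant-gas momentum."""
    momentum = bullet_kg * muzzle_v + charge_kg * gas_v
    recoil_v = momentum / rifle_kg          # rifle velocity if free to recoil
    return 0.5 * rifle_kg * recoil_v ** 2

# Rough illustrative assumptions: a 3.6 kg rifle firing a 3.4 g bullet at
# 900 m/s with 1.4 g of propellant, gases escaping at an assumed 1200 m/s.
print(round(free_recoil_energy(3.6, 0.0034, 900.0, 0.0014, 1200.0), 2), "J")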
Early 5.45×39mm ballistics tests demonstrated a pronounced tumbling effect with high speed cameras. Some Western authorities believed this bullet was designed to tumble in flesh to increase wounding potential. At the time, it was believed that yawing and cavitation of projectiles were primarily responsible for tissue damage. Martin Fackler conducted a study with an AK-74 assault rifle using live pigs and ballistic gelatin; "The result of our preset test indicate that the AK-74 bullet acts in the manner expected of a full-metal-cased military ammunition – it does not deform or fragment when striking soft tissues". Most organs and tissue were too flexible to be severely damaged by the temporary cavity effect caused by yaw and cavitation of a projectile. With the 5.45 mm bullet, tumbling produced a temporary cavity twice, at depths of and . This is similar to (but more rapid than) modern 7.62×39mm ammunition and to (non-fragmenting) 5.56×45mm NATO ammunition.
Magazines
The original steel-reinforced 30-round AK-74 detachable box magazine was similar to that of the AKM, except for minor dimensional changes required by the 5.45×39mm cartridge.
These rust-colored magazines are often mistakenly identified as being made of Bakelite (a phenolic resin), but were actually fabricated from two-parts of AG-S4 molding compound (a glass-reinforced phenol-formaldehyde binder impregnated composite), assembled using an epoxy resin adhesive. Noted for their durability, these magazines did however compromise the rifle's camouflage and lacked the small horizontal reinforcing ribs running down both sides of the magazine body near the front that were added on all later AK-74 magazine generations. A second generation steel-reinforced dark-brown (color shades vary from maroon to plum to near black) 30-round magazine was introduced in the early 1980s, fabricated from ABS plastic. The third generation steel-reinforced 30-round AK-74 magazine is similar to the second generation, but is darker colored and has a matte nonreflective surface finish. With the introduction of the AK-74M the fourth generation of steel-reinforced matte true black nonreflective surface finished 30-round AK-74 magazines was introduced. All AK-74 magazines have a raised horizontal rib on each side of the rear lug to prevent their use in a 7.62×39mm AK. The magazines can be quickly recharged from stripper clips. The empty weight of a 30-round AK-74 box magazine is . The 45-round plastic box magazine of the RPK-74 light machine gun is also interchangeable with that of the AK-74. The empty weight of a 45-round RPK-74 box magazine is . Further 60-round and later 50-round quad-stack 5.45×39mm casket magazines were developed.
The transition to mainly plastic magazines and the relatively small sized, light weight, high velocity 5.45×39mm cartridge yielded a significant weight reduction and allow a soldier to carry considerably more rounds for the same weight compared with the previous Soviet AK-47 and AKM and later 7.62×39mm chambered AK platform assault rifles.
Note: All 7.62×39mm AK magazines are backwards compatible with older AK variants.
Note *: 10.12 kg (22.3 lb) is the maximum amount of ammunition that the average soldier can comfortably carry. It also allows for the best comparison of the three most common 7.62×39mm AK platform magazines and the 5.45×39mm AK-74 magazine.
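The carrying-weight argument is simple arithmetic: a lighter cartridge and a lighter magazine mean more loaded magazines within the same weight budget. The per-round and per-magazine masses in the Python sketch below are approximate, illustrative values, not the exact figures behind the 10.12 kg comparison:

# Approximate, illustrative masses in grams -- not the exact figures used above.
CARTRIDGE_G = {"7.62x39mm": 16.3, "5.45x39mm": 10.5}
EMPTY_MAG_G = {"7.62x39mm": 330.0, "5.45x39mm": 230.0}

def rounds_for_budget(budget_kg, cartridge_g, mag_g, capacity=30):
    """Loaded magazines (and total rounds) that fit in a carry-weight budget."""
    loaded_mag_kg = (mag_g + capacity * cartridge_g) / 1000.0
    mags = int(budget_kg // loaded_mag_kg)
    return mags, mags * capacity

for calibre in ("7.62x39mm", "5.45x39mm"):
    mags, rounds = rounds_for_budget(10.12, CARTRIDGE_G[calibre], EMPTY_MAG_G[calibre])
    print(f"{calibre}: {mags} magazines / {rounds} rounds within 10.12 kg")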
Variants
The AK-74 series is also available in several "night-fighting" configurations, equipped with a side dovetail rail for mounting optical sights. These variants, the AK-74N, AKS-74N and AKS-74UN can be used in conjunction with NSPU and NSPU-3 (1PN51) night sights, as well as optical sights such as the USP-1 (1P29). The variants designated AK-74N2 and AKS-74N2 can use the multi-model night vision sight NSPUM (1PN58).
AKS-74
The AKS-74 ("S"—Russian: ; Skladnoy, or "folding"), is a variant of the AK-74 equipped with a side-folding metal shoulder stock, designed primarily for use with air assault infantry and developed alongside the basic AK-74. Unlike the AKMS's somewhat fragile underfolding stock (modeled after the MP 40 submachine gun stock), the AKS-74 stock is fabricated from stamped sheet metal struts, machine pressed into a "U" shape and assembled by punch fit and welding. The stock has a triangular shape; it lacks the folding shoulder pad found on the AKMS stock and is folded to the left side of the receiver. The hinged stock is securely locked in its extended position by a spring-loaded button catch located at the rear of the receiver. When folded, the stock is held closed by a spring-loaded capture hook situated on the left side at the front of the receiver housing. A rear-mounted sling swivel is also provided on the right side at the beginning of the stock frame. It retains the pistol grip reinforcement plate the AKMS used, though due to the less complex rear trunnion, only has one riveting hole in place of the three on the AKMS.
AK-74M
In 1991 the Izhmash factory in the city of Izhevsk began full-scale production of a modernised variant of the AK-74—the AK-74M assault rifle—that offers more versatility compared with its predecessor. Apart from several minor improvements, such as a lightened bolt and carrier assembly to reduce the impulse of the gas piston and bolt carrier during firing, the rifle features a new glass-filled polyamide stock that retains the shape of the original AK-74 fixed laminated wood stock, but side-folds to the left like the skeletonised AKS-74 buttstock. As a result, pistol grip reinforcement plates that were once exclusively used on the folding stock variants are standard on all AK-74Ms. Additionally the AK-74M features an improved muzzle device with extended collar and threads to reduce play and a machine cut beneath to allow easier cleaning rod removal, a reinforced smooth dust cover and a redesigned guide rod return spring retainer that allows firing the GP-25, GP-30 and GP-34 underslung grenade launchers without having to use the previously necessary additional receiver cover fastener. To reduce production costs, barrel hardware, such as the front sight base and gas block, are dimple pressed on to the barrel instead of pinned on (commercial semi-auto variants are still pinned on to maintain user serviceability). Other economic changes include omission of lightening cuts on the front sight block and gas piston as well as a stamped gas tube release lever, replacing the milled one. The bullet guide and bolt guide were also separated, with the bolt guide becoming a simple bump held in place on the left side of the receiver with an additional rivet (often called a "bump rivet" because of this), making it easier to replace in case of wear. Each AK-74M is fitted with a side-rail bracket for mounting optics that is a simplified version of the 74N mount with fewer machining cuts. The AK-74M was slated to become the Soviet Union's standard service rifle, and has since been accepted as the new service rifle of the Russian Federation.
AK-74MR UUK (Universal Upgrade Kit)
An AK-74M universal upgrade kit consisting of a new safety, dust cover and furniture featuring improved ergonomics and rails to attach accessories like aiming optics, optoelectronic sights, laser sights, weapon lights and vertical fore grips and a new muzzle device had its official debut on 9 May 2015 in Moscow as part of the 2015 Moscow Victory Day Parade.
The Kalashnikov Concern has further developed three sets of additional equipment for the modernization of 5.45×39mm and 7.62×39mm chambered AK-pattern assault rifles for normal military units, reconnaissance units, and special forces units. The Kalashnikov Concern announced it has a contract with the Russian Ministry of Defence to deliver upgrade kits for their AK-74M assault rifles.
AKS-74U
In 1973, a design competition (codenamed "Modern"—Модерн) was started for the adoption of a fully automatic carbine.
Soviet planners drew from the unsolicited design AO-46 built in 1969 by Peter Andreevich Tkachev, which weighed only 1.9 kg. The TTT specifications required a weight no greater than , a length of / with the stock unfolded/folded, and an effective firing range of . The competition was joined by designs of M.T. Kalashnikov (PP1), I.Y. Stechkin (TKB-0116), S.G. Simonov (AG-043), A.S. Konstantinov (AEK-958), and Yevgeny Dragunov (who called his model "MA"). Kalashnikov also presented an additional design (A1-75) which differed from PP1 by having a modified muzzle for flash and noise suppression.
In 1977, the GRAU decided to adopt Kalashnikov's model, which was largely a shortened AKS-74, because its performance was no worse than the competition, and promised significant production cost savings by utilizing existing equipment for the AK-74 line. A final round of large scale testing with Kalashnikov's model was performed by airborne divisions in the Transcaucasian Military District in March 1977. The AKS-74U ("U"—Ukorochenniy, or "shortened") was officially adopted in 1979, and given the official, but seldom used GRAU designation 6P26. In 1993 production stopped.
The AKS-74U bridges the tactical deployment gap between a submachine gun and an assault rifle. It was intended for use mainly with special forces, airborne infantry, rear-echelon support units, helicopter and armored vehicle crews. It has been augmented and replaced by various submachine guns, and the less compact AK-105 carbine in Russian military service. It is commonly used by law enforcement; for example, each urban police foot patrol is issued at least one.
The rifle's compact dimensions, compared with the AKS-74, were achieved by using a short barrel (this forced designers to simultaneously reduce the gas piston operating rod to an appropriate length). Due to the shortening of the operating mechanism the cyclic rate of fire rose slightly to around 700-735 rounds per minute. In order to effectively stabilize projectiles, the barrel's twist rate was increased from 200 mm (1:7.87 in) or 37 calibers rifling twist rate to 160 mm (1:6.3 in) or 29.6 calibers rifling twist rate to adapt the AKS-74U for muzzle velocities of and higher. A new gas block was installed at the muzzle end of the barrel with a muzzle booster, which features an internal expansion chamber inside the cylindrical section of the booster while the conical end acts as a nozzle to increase net pressure inside the gas chamber by supplying an increased amount of propellant gasses from the barrel. The chrome-lined muzzle booster also burns any remaining propellant, which would normally reduce muzzle blast. However, due to the extremely short barrel and conical end of the booster, the muzzle blast is nevertheless extremely large and visible. The muzzle device locks into the gas block with a spring-loaded detent pin and features two parallel notches cut into the edge of the flash hider cone, used for unscrewing it using the cleaning rod. Unlike most Kalashnikov variants there is no provision to store the cleaning rod under the barrel. The front sight was integrated into the gas block/forward sling loop.
The sight height above the bore axis is also approximately higher than the AK-74, due to the combined front sight/gas block, rear sight configuration. The AKS-74U has a different rear sight composed of a U-shaped flip sight on the top cover instead of the standard sliding notch tangent rear sight. This rear sight has two settings: "П" standing for постоянная (constant) corresponding to a "point-blank range" battle zero setting and "4-5" (used for firing at distances between ). The rear sight is housed in a semi-shrouded protective enclosure that is riveted to the receiver's spring-loaded top cover. This top cover hinges from a barrel trunnion (hinging where the rear sight on a normal AK74 is located), pivoting forward when opened, which also works to unlock the gas tube cover. Both the gas tube and handguard are also of a new type and are wider and shorter than the analogous parts in the AKS-74.
For the AKS-74s combined with the 7N6 or 7N10 service cartridges the 350 m battle zero setting limits the apparent "bullet rise" within approximately relative to the line of sight. Soldiers are instructed to fire at any target within this range by simply placing the sights on the center of mass (the belt buckle) of the enemy target. Any errors in range estimation are tactically irrelevant, as a well-aimed shot will hit the torso of the enemy soldier.
The AKS-74U is significantly more maneuverable in tight quarters than the AKS-74; however, the significant decline in muzzle velocity to resulted in a reduction in effective range to (the effective hitting distance for a "running"-type silhouette target was reduced from to ). The AKS-74U cannot mount a bayonet or standard under-barrel grenade launcher. However, a suppressed 30 mm BS-1 grenade launcher was developed specifically for that platform that fires a high-explosive dual purpose (HEDP) grenade. The grenades for the BS-1 are launched by special blank cartridges that are inserted into the grenade launcher via a detachable magazine. The majority of AKS-74U carbines were manufactured at the Tula Arms Factory rather than Izhmash. There were some accessories produced for the AKS-74U including a plastic thigh holster and (shorter than standard) 20-round AK-74 type magazines. The rifle utilizes a proprietary 25 mm wide sling that differs from the standard 35 mm AK sling also in construction. The AKS-74U also exists in a version featuring modernized synthetic furniture made from a black, glass-filled polyamide. The AKS-74U was also used as the basis for several other unique weapons, including the bullpup OTs-14 Groza specialist carbine which is now in limited service in the Russian military, and the Gepard series of multi-calibre submachine guns (none of which evolved past prototype stage).
In the United States, the AKS-74U is called a "Krinkov". The origin of this term is uncertain. A hypothesis was circulating that the name came from the mujahideen who supposedly had captured a high-ranking Soviet officer armed with an AKS-74U, and that they had named it after him. However, investigation by Patrick Sweeney could not confirm this hypothesis, for no Soviet officer with a resembling name was captured in Afghanistan. US journalist C. J. Chivers reported that the gun was nicknamed "the Osama" in jihadist circles, after Osama bin Laden was photographed next to an AKS-74U. Research by The Firearm Blog published in 2016 suggests that the name "Krinkov" is a Pashtun invention that came to the United States with accounts of the Mujahideen.
The AKS-74U is approximately lighter than the NATO equivalent XM177, and shorter with the stock folded. Due to the fact that the AKS-74U is moderately concealable with its stock folded and capable of easily defeating IIIa soft body armour, it continues to be able to perform the role of a modern Personal Defense Weapon, despite being designed in the 1970s.
AKS-74UB
The AKS-74UB ("B"—Russian: ; Besshumniy or "silent") is a sound-suppressed variant of the AKS-74U adapted for use with the PBS-4 suppressor (used in combination with subsonic 5.45×39mm Russian ammunition).
Post AK-74M developments and successors
AK-100 series
The modernised variant of the AK-74 — the AK-74M — was used as the technical basis for the new Russian AK-100 family of Kalashnikov firearms:
Even with the differences between individual models, all of these firearms are made to similar specifications.
These original AK-100 series firearms were introduced in 1994 and are categorized by all having black polymer handguards, folding polymer stocks, and use of AK-74M internal systems. Parts are highly interchangeable. The AK-101, AK-102, AK-103 and AK-104 are destined primarily for export, while the AK-105 was developed for replacing the shorter barreled AKS-74U. The AK-105 is used by the Russian Army and Ministry of Internal Affairs.
Additionally, the 5.45×39mm AK-107, 5.56×45mm NATO AK-108 and 7.62×39mm AK-109 assault rifles were developed. These have a technically differing balanced recoil system to reduce felt recoil and muzzle rise. This balanced recoil system designed by Yuriy K. Alexandrov for Kalashnikov-pattern rifles is a significant change to the Kalashnikov operating system of the 1940s. The operating system of these new rifles was derived from the AL-7 experimental rifle of the early 1970s. Since their development, these rifles met little commercial success.
AK-12
In 2010, the AK-12 series of proposed prototype models was unveiled. They differed in weight, introduced a new recoil compensation technology and improved ergonomics. The rear iron sight element was rail-mounted and moved to the back of the upper receiver to lengthen the sight line, and the full length of the weapon featured a Picatinny rail for mounting accessories such as aiming optics on top. The hand guard features Picatinny rails on both sides and its underside for mounting accessories like tactical lights, laser sights and grenade launchers. Throughout its development and evaluation stage, multiple modifications were applied to meet Russian military standards, as well as to improve upon the "range of defects" that were discovered on prototype models and to address concerns regarding the cost of earlier prototypes. In September 2016 the prototype models were replaced by the final production models of the AK-12 (chambered in 5.45×39mm) and AK-15 (chambered in 7.62×39mm) assault rifles.
Parallel developments are the RPK-16 light machine gun and the AM-17 compact assault rifle (both chambered in 5.45×39mm). Technically, the AK-12, AK-15 and RPK-16 more strongly resemble the AK-74M, AK-100 series and RPK-74M than the earlier prototypes, and the arms manufacturer Kalashnikov Concern hopes they will replace these Russian service guns.
In late 2016 it was reported the AK-12 production model was undergoing troop trials with the Russian Army, where it competes against the Degtyarov A-545 balanced action assault rifle in Ratnik program trials. The AK-12 completed its operational testing and passed military field tests in June 2017, paving the way to Russian Army adoption, potentially under the Ratnik program. Both AK-12 and AK-15 completed testing in December 2017. In January 2018 it was announced that the AK-12 and AK-15 have been adopted by the Russian military.
Gallery
Accuracy potential
The following table represents the Russian method for determining accuracy, which is far more complex than Western methods. In the West, one fires a group of shots into the target and then simply measures the overall diameter of the group. The Russians, on the other hand, fire a group of shots into the target. They then draw two circles on the target: one for the maximum vertical dispersion of hits and one for the maximum horizontal dispersion of hits. They then disregard the hits on the outer part of the target and only count half of the hits (50% or R50) on the inner part of the circles. This dramatically reduces the overall diameter of the groups. They then use both the vertical and horizontal measurements of the reduced groups to measure accuracy. This circular error probable method used by the Russian and other European militaries cannot be converted and is not comparable to US military methods for determining rifle accuracy. When the R50 results are doubled the hit probability increases to 93.7%.
R50 means the closest 50 percent of the shot group will all be within a circle of the mentioned diameter.
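One simplified reading of the per-axis R50 statistic is the diameter of the band, centred on the mean point of impact, that contains the closest 50 percent of hits along that axis. The Python sketch below implements that reading on a made-up ten-shot group; the official Russian procedure is more involved, so this only approximates the idea:

from statistics import mean, median

def r50_per_axis(hits):
    """Per-axis R50: the diameter of the band, centred on the mean point of
    impact, containing the closest 50% of hits along that axis.
    `hits` is a list of (horizontal, vertical) coordinates in centimetres."""
    xs, ys = zip(*hits)
    cx, cy = mean(xs), mean(ys)
    r50_h = 2 * median(abs(x - cx) for x in xs)
    r50_v = 2 * median(abs(y - cy) for y in ys)
    return r50_h, r50_v

# A made-up ten-shot group (centimetres at the target), for illustration only.
group = [(1.2, -0.5), (-0.8, 0.9), (0.3, 1.4), (-1.5, -1.1), (0.7, 0.2),
         (2.1, -0.9), (-0.4, 1.8), (0.9, -1.6), (-1.1, 0.6), (0.5, 0.1)]
print(r50_per_axis(group))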
In general, the AK-74 is more accurate than the AK-47 and the AKM. The vertical and horizontal mean (R50) deviations with service ammunition at for four Russian rifles are:
The single-shot hit-probability on the NATO E-type Silhouette Target (a human upper body half and head silhouette) of the AK-47, AK-74, M16A1 and M16A2 assault rifles was measured by the US military under ideal proving ground conditions in the 1980s as follows:
Under worst field exercise circumstances, due to range estimation and aiming errors, the hit probabilities for the tested assault rifles were drastically reduced with differences without operational significance.
AKS-74U
R50 means the closest 50 percent of the shot group will all be within a circle of the mentioned diameter.
Users
: The Mujahideen nicknamed it the "Kalakov".
:
: AK-74M manufactured under license by the Ministry of Defence Industry of Azerbaijan.
: AR-M1 (variation of AK-74) and AKS-74U are manufactured locally.
: Burundian rebels
: The AK-74M is used by the Cypriot National Guard
: In use alongside the M4 carbine in service in Georgia. Being phased out by AR-15 platform rifles.
:
: Used by police.
: Some received from Russia, possibly supplied for trials
: Manufactured locally as the Type-88. Sources suggest that it was made with technical assistance from China.
: Manufactured locally as the PA md. 86.
: AK-74M is currently the main service rifle in the Russian Army. Being supplemented by the improved AK-12 in the Russian Army.
: AK-74
: AK-74M, AKS-74U, AKS-74 and AK-74. Most AK-74s given to pro-government troops by Russian forces deployed in Syria.
Lord's Resistance Army
: Bulgarian AR-M9s
Russian separatist forces in Donbas
Former users
: MPi-AKS-74N used by Croatian Armed Forces.
: Manufactured locally as the MPi-AK-74N, MPi-AKS-74N, and MPi-AKS-74NK. 171,925 AK-74s in 1991.
: First used during the Soviet–Afghan War in 1979.
Non-state users
:
Liberation Tigers of Tamil Eelam: During the Sri Lankan Civil War between 1983 and 2009.
: Used by Islamic State terrorists (also seen in many Islamic State propaganda videos)
See also
AK-47
AKM
RPK
M16 rifle
Comparison of the AK-74 vs. M16A2
Notes
References
External links
Kalashnikov Concern/Izhmash—manufacturer's website 5.45 mm Assault Rifle AK74M
Tula Arms Plant—makers of the AKS-74U carbine
Modern Firearms – AK-74/AKS-74/AK-74M
Modern Firearm – AKS-74U
Zastava M92
Technical data, instructional images and diagrams of the AK-74M
russianguns.ru
Weapons and ammunition introduced in 1974
5.45×39mm assault rifles
Infantry weapons of the Cold War
Kalashnikov derivatives
Military equipment introduced in the 1970s
Rifles of the Cold War
Assault rifles of the Soviet Union
Kalashnikov Concern products |
43342 | https://en.wikipedia.org/wiki/IPsec | IPsec | In computing, Internet Protocol Security (IPsec) is a secure network protocol suite that authenticates and encrypts the packets of data to provide secure encrypted communication between two computers over an Internet Protocol network. It is used in virtual private networks (VPNs).
IPsec includes protocols for establishing mutual authentication between agents at the beginning of a session and negotiation of cryptographic keys to use during the session. IPsec can protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).
IPsec uses cryptographic security services to protect communications over Internet Protocol (IP) networks. It supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and replay protection.
The initial IPv4 suite was developed with few security provisions. As a part of the IPv4 enhancement, IPsec is an end-to-end security scheme operating at the internet layer (layer 3 of the OSI model). In contrast to some other widely used Internet security systems that operate above the network layer, such as Transport Layer Security (TLS), which operates above the transport layer, and Secure Shell (SSH), which operates at the application layer, IPsec can automatically secure applications at the internet layer.
History
Starting in the early 1970s, the Advanced Research Projects Agency sponsored a series of experimental ARPANET encryption devices, at first for native ARPANET packet encryption and subsequently for TCP/IP packet encryption; some of these were certified and fielded. From 1986 to 1991, the NSA sponsored the development of security protocols for the Internet under its Secure Data Network Systems (SDNS) program. This brought together various vendors including Motorola who produced a network encryption device in 1988. The work was openly published from about 1988 by NIST and, of these, Security Protocol at Layer 3 (SP3) would eventually morph into the ISO standard Network Layer Security Protocol (NLSP).
From 1992 to 1995, various groups conducted research into IP-layer encryption.
1. In 1992, the US Naval Research Laboratory (NRL) began the Simple Internet Protocol Plus (SIPP) project to research and implement IP encryption.
2. In 1993, at Columbia University and AT&T Bell Labs, John Ioannidis and others researched the software experimental Software IP Encryption Protocol (swIPe) on SunOS.
3. In 1993, sponsored by the White House internet service project, Wei Xu at Trusted Information Systems (TIS) further researched software IP security protocols and developed hardware support for the Triple DES Data Encryption Standard, which was coded in the BSD 4.1 kernel and supported both x86 and SunOS architectures. By December 1994, TIS released its DARPA-sponsored open-source Gauntlet Firewall product with integrated 3DES hardware encryption at over T1 speeds, providing IPsec VPN connections between the east and west coasts of the United States; it is regarded as the first commercial IPsec VPN product.
4. Under NRL's DARPA-funded research effort, NRL developed the IETF standards-track specifications (RFC 1825 through RFC 1827) for IPsec, which was coded in the BSD 4.4 kernel and supported both x86 and SPARC CPU architectures. NRL's IPsec implementation was described in their paper in the 1996 USENIX Conference Proceedings. NRL's open-source IPsec implementation was made available online by MIT and became the basis for most initial commercial implementations.
The Internet Engineering Task Force (IETF) formed the IP Security Working Group in 1992 to standardize openly specified security extensions to IP, called IPsec. In 1995, the working group organized several workshops with members from five companies (TIS, Cisco, FTP, Checkpoint, and others). During the IPsec workshops, NRL's standards and the Cisco and TIS software were standardized as the public references and published as RFC 1825 through RFC 1827.
Security architecture
The IPsec is an open standard as a part of the IPv4 suite. IPsec uses the following protocols to perform various functions:
Authentication Headers (AH) provides connectionless data integrity and data origin authentication for IP datagrams and provides protection against replay attacks.
Encapsulating Security Payloads (ESP) provides confidentiality, connectionless data integrity, data origin authentication, an anti-replay service (a form of partial sequence integrity), and limited traffic-flow confidentiality.
Internet Security Association and Key Management Protocol (ISAKMP) provides a framework for authentication and key exchange, with actual authenticated keying material provided either by manual configuration with pre-shared keys, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), or IPSECKEY DNS records. The purpose is to generate the Security Associations (SA) with the bundle of algorithms and parameters necessary for AH and/or ESP operations.
Authentication Header
The Security Authentication Header (AH) was developed at the US Naval Research Laboratory in the early 1990s and is derived in part from previous IETF standards' work for authentication of the Simple Network Management Protocol (SNMP) version 2. Authentication Header (AH) is a member of the IPsec protocol suite. AH ensures connectionless integrity by using a hash function and a secret shared key in the AH algorithm. AH also guarantees the data origin by authenticating IP packets. Optionally a sequence number can protect the IPsec packet's contents against replay attacks, using the sliding window technique and discarding old packets.
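The sliding-window replay check mentioned above is commonly described as a bitmap that records which recent sequence numbers have already been accepted. The following Python sketch shows the idea for a 64-packet window; it is a simplification, since a real implementation only advances the window after the integrity check value has been verified:

class ReplayWindow:
    """Minimal sketch of an IPsec-style anti-replay sliding window (64 packets wide)."""

    def __init__(self, size=64):
        self.size = size
        self.highest = 0          # highest sequence number accepted so far
        self.bitmap = 0           # bit i set => (highest - i) was already seen

    def check_and_update(self, seq):
        if seq == 0:
            return False                          # sequence numbers start at 1
        if seq > self.highest:                    # new highest: slide the window
            shift = seq - self.highest
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.size:                   # too old: outside the window
            return False
        if self.bitmap & (1 << offset):           # already seen: replay
            return False
        self.bitmap |= 1 << offset                # mark as seen
        return True

win = ReplayWindow()
print([win.check_and_update(s) for s in (1, 3, 2, 3, 70, 5)])
# -> [True, True, True, False, True, False]  (3 is a replay, 5 fell outside the window)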
In IPv4, AH prevents option-insertion attacks. In IPv6, AH protects both against header insertion attacks and option insertion attacks.
In IPv4, the AH protects the IP payload and all header fields of an IP datagram except for mutable fields (i.e. those that might be altered in transit), and also IP options such as the IP Security Option (RFC 1108). Mutable (and therefore unauthenticated) IPv4 header fields are DSCP/ToS, ECN, Flags, Fragment Offset, TTL and Header Checksum.
In IPv6, the AH protects most of the IPv6 base header, AH itself, non-mutable extension headers after the AH, and the IP payload. Protection for the IPv6 header excludes the mutable fields: DSCP, ECN, Flow Label, and Hop Limit.
AH operates directly on top of IP, using IP protocol number 51.
The following AH packet diagram shows how an AH packet is constructed and interpreted:
Next Header (8 bits) Type of the next header, indicating what upper-layer protocol was protected. The value is taken from the list of IP protocol numbers.
Payload Len (8 bits) The length of this Authentication Header in 4-octet units, minus 2. For example, an AH value of 4 equals 3×(32-bit fixed-length AH fields) + 3×(32-bit ICV fields) − 2 and thus an AH value of 4 means 24 octets. Although the size is measured in 4-octet units, the length of this header needs to be a multiple of 8 octets if carried in an IPv6 packet. This restriction does not apply to an Authentication Header carried in an IPv4 packet.
Reserved (16 bits) Reserved for future use (all zeroes until then).
Security Parameters Index (32 bits) Arbitrary value which is used (together with the destination IP address) to identify the security association of the receiving party.
Sequence Number (32 bits) A monotonic strictly increasing sequence number (incremented by 1 for every packet sent) to prevent replay attacks. When replay detection is enabled, sequence numbers are never reused, because a new security association must be renegotiated before an attempt to increment the sequence number beyond its maximum value.
Integrity Check Value (multiple of 32 bits) Variable length check value. It may contain padding to align the field to an 8-octet boundary for IPv6, or a 4-octet boundary for IPv4.
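A minimal Python sketch of how the fixed AH fields and the Payload Len arithmetic fit together is shown below. The ICV here is a zero-filled placeholder; a real implementation computes it with the negotiated integrity algorithm over the immutable parts of the packet:

import struct

def build_ah(next_header, spi, seq, icv):
    """Pack the Authentication Header: Next Header, Payload Len, Reserved,
    SPI, Sequence Number, then the ICV. Payload Len counts the whole header
    in 32-bit words minus 2, so 12 fixed bytes + a 12-byte ICV (6 words) -> 4."""
    if len(icv) % 4:
        raise ValueError("ICV must be a multiple of 4 octets")
    payload_len = (12 + len(icv)) // 4 - 2
    fixed = struct.pack("!BBHII", next_header, payload_len, 0, spi, seq)
    return fixed + icv

ah = build_ah(next_header=6, spi=0x1001, seq=1, icv=bytes(12))  # 6 = TCP
print(len(ah), ah[1])   # -> 24 4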
Encapsulating Security Payload
The IP Encapsulating Security Payload (ESP) was developed at the Naval Research Laboratory starting in 1992 as part of a DARPA-sponsored research project, and was openly published by IETF SIPP Working Group drafted in December 1993 as a security extension for SIPP. This ESP was originally derived from the US Department of Defense SP3D protocol, rather than being derived from the ISO Network-Layer Security Protocol (NLSP). The SP3D protocol specification was published by NIST in the late 1980s, but designed by the Secure Data Network System project of the US Department of Defense.
Encapsulating Security Payload (ESP) is a member of the IPsec protocol suite. It provides origin authenticity through source authentication, data integrity through hash functions and confidentiality through encryption protection for IP packets. ESP also supports encryption-only and authentication-only configurations, but using encryption without authentication is strongly discouraged because it is insecure.
Unlike Authentication Header (AH), ESP in transport mode does not provide integrity and authentication for the entire IP packet. However, in Tunnel Mode, where the entire original IP packet is encapsulated with a new packet header added, ESP protection is afforded to the whole inner IP packet (including the inner header) while the outer header (including any outer IPv4 options or IPv6 extension headers) remains unprotected. ESP operates directly on top of IP, using IP protocol number 50.
The following ESP packet diagram shows how an ESP packet is constructed and interpreted:
Security Parameters Index (32 bits) Arbitrary value used (together with the destination IP address) to identify the security association of the receiving party.
Sequence Number (32 bits) A monotonically increasing sequence number (incremented by 1 for every packet sent) to protect against replay attacks. There is a separate counter kept for every security association.
Payload data (variable) The protected contents of the original IP packet, including any data used to protect the contents (e.g. an Initialisation Vector for the cryptographic algorithm). The type of content that was protected is indicated by the Next Header field.
Padding (0-255 octets) Padding for encryption, to extend the payload data to a size that fits the encryption's cipher block size, and to align the next field.
Pad Length (8 bits) Size of the padding (in octets).
Next Header (8 bits) Type of the next header. The value is taken from the list of IP protocol numbers.
Integrity Check Value (multiple of 32 bits) Variable length check value. It may contain padding to align the field to an 8-octet boundary for IPv6, or a 4-octet boundary for IPv4.
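The field order above can be made concrete with a short Python sketch that lays out an ESP packet body. Encryption and ICV computation are deliberately omitted (the ICV is a placeholder and the IV length simply assumes a 16-byte cipher block), so this only illustrates the framing and the padding rule:

import os, struct

def build_esp_body(spi, seq, payload, next_header, block=16, icv_len=16):
    """Frame an ESP packet body: SPI | Seq | IV | payload | padding |
    Pad Length | Next Header | ICV. No actual encryption or ICV is computed."""
    header = struct.pack("!II", spi, seq)
    iv = os.urandom(block)                        # per-packet IV (assumed block size)
    pad_len = (-(len(payload) + 2)) % block       # pad payload + 2 trailer octets
    padding = bytes(range(1, pad_len + 1))        # default padding bytes: 1, 2, 3, ...
    trailer = padding + struct.pack("!BB", pad_len, next_header)
    icv = bytes(icv_len)                          # placeholder integrity value
    return header + iv + payload + trailer + icv

pkt = build_esp_body(spi=0x2002, seq=7, payload=b"inner packet", next_header=6)
print(len(pkt))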
Security association
The IPsec protocols use a security association, where the communicating parties establish shared security attributes such as algorithms and keys. As such IPsec provides a range of options once it has been determined whether AH or ESP is used. Before exchanging data the two hosts agree on which symmetric encryption algorithm is used to encrypt the IP packet, for example AES or ChaCha20, and which hash function is used to ensure the integrity of the data, such as BLAKE2 or SHA256. These parameters are agreed for the particular session, for which a lifetime must be agreed and a session key.
The algorithm for authentication is also agreed before the data transfer takes place and IPsec supports a range of methods. Authentication is possible through pre-shared key, where a symmetric key is already in the possession of both hosts, and the hosts send each other hashes of the shared key to prove that they are in possession of the same key. IPsec also supports public key encryption, where each host has a public and a private key, they exchange their public keys and each host sends the other a nonce encrypted with the other host's public key. Alternatively if both hosts hold a public key certificate from a certificate authority, this can be used for IPsec authentication.
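The pre-shared-key case can be illustrated with a few lines of Python: each side sends a keyed hash computed over exchanged nonces, proving possession of the key without revealing it. This is only a sketch of the proof-of-possession idea, not the actual IKE exchange, which additionally runs a Diffie–Hellman exchange and a pseudo-random function to derive session keys:

import hashlib, hmac, os

def prove_psk(psk: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Keyed hash over both nonces, proving possession of the PSK
    without ever sending the key itself."""
    return hmac.new(psk, nonce_a + nonce_b, hashlib.sha256).digest()

psk = b"example pre-shared key"                     # both hosts already hold this
nonce_a, nonce_b = os.urandom(16), os.urandom(16)   # exchanged in the clear

proof_a = prove_psk(psk, nonce_a, nonce_b)          # sent by host A
proof_b = prove_psk(psk, nonce_a, nonce_b)          # recomputed by host B
print(hmac.compare_digest(proof_a, proof_b))        # True only if the PSKs match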
The security associations of IPsec are established using the Internet Security Association and Key Management Protocol (ISAKMP). ISAKMP is implemented by manual configuration with pre-shared secrets, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), and the use of IPSECKEY DNS records. RFC 5386 defines Better-Than-Nothing Security (BTNS) as an unauthenticated mode of IPsec using an extended IKE protocol. C. Meadows, C. Cremers, and others have used Formal Methods to identify various anomalies which exist in IKEv1 and also in IKEv2.
In order to decide what protection is to be provided for an outgoing packet, IPsec uses the Security Parameter Index (SPI), an index to the security association database (SADB), along with the destination address in a packet header, which together uniquely identifies a security association for that packet. A similar procedure is performed for an incoming packet, where IPsec gathers decryption and verification keys from the security association database.
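Conceptually, the security association database behaves like a dictionary keyed by the SPI and the destination address. The Python sketch below shows that lookup shape with made-up addresses and algorithm names; real implementations also consult the security policy database and use richer selectors:

from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityAssociation:
    spi: int
    dst: str
    protocol: str      # "ESP" or "AH"
    cipher: str
    integrity: str

# Security association database keyed as described: (SPI, destination address).
SADB = {
    (0x2002, "198.51.100.7"): SecurityAssociation(
        0x2002, "198.51.100.7", "ESP", "AES-256-GCM", "built into the AEAD cipher"),
}

def lookup_sa(spi: int, dst: str) -> SecurityAssociation:
    """Find the SA for an inbound packet, or signal that it must be dropped."""
    try:
        return SADB[(spi, dst)]
    except KeyError:
        raise LookupError(f"no SA for SPI {spi:#x} to {dst}") from None

print(lookup_sa(0x2002, "198.51.100.7").cipher)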
For IP multicast a security association is provided for the group, and is duplicated across all authorized receivers of the group. There may be more than one security association for a group, using different SPIs, thereby allowing multiple levels and sets of security within a group. Indeed, each sender can have multiple security associations, allowing authentication, since a receiver can only know that someone knowing the keys sent the data. Note that the relevant standard does not describe how the association is chosen and duplicated across the group; it is assumed that a responsible party will have made the choice.
Modes of operation
The IPsec protocols AH and ESP can be implemented in a host-to-host transport mode, as well as in a network tunneling mode.
Transport mode
In transport mode, only the payload of the IP packet is usually encrypted or authenticated. The routing is intact, since the IP header is neither modified nor encrypted; however, when the authentication header is used, the IP addresses cannot be modified by network address translation, as this always invalidates the hash value. The transport and application layers are always secured by a hash, so they cannot be modified in any way, for example by translating the port numbers.
A means to encapsulate IPsec messages for NAT traversal has been defined by RFC documents describing the NAT-T mechanism.
Tunnel mode
In tunnel mode, the entire IP packet is encrypted and authenticated. It is then encapsulated into a new IP packet with a new IP header. Tunnel mode is used to create virtual private networks for network-to-network communications (e.g. between routers to link sites), host-to-network communications (e.g. remote user access) and host-to-host communications (e.g. private chat).
Tunnel mode supports NAT traversal.
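The difference between the two modes is easiest to see as a byte-layout question: transport mode keeps the original IP header on the outside, while tunnel mode wraps the whole original packet inside a new one. The Python sketch below uses placeholder byte strings and performs no real encryption, purely to show that nesting:

def transport_mode(ip_header: bytes, payload: bytes) -> bytes:
    """Transport mode: the original header stays outside, only the payload is
    wrapped in ESP (encryption and ICV omitted in this sketch)."""
    return ip_header + b"[ESP]" + payload + b"[ESP trailer+ICV]"

def tunnel_mode(ip_header: bytes, payload: bytes, outer_header: bytes) -> bytes:
    """Tunnel mode: the entire original packet, header included, becomes the
    protected payload of a new outer packet built by the security gateway."""
    inner = ip_header + payload
    return outer_header + b"[ESP]" + inner + b"[ESP trailer+ICV]"

inner_hdr = b"[IP 10.0.0.5 -> 10.1.0.9]"
outer_hdr = b"[IP 203.0.113.1 -> 198.51.100.7]"
print(transport_mode(inner_hdr, b"data"))
print(tunnel_mode(inner_hdr, b"data", outer_hdr))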
Algorithms
Symmetric encryption algorithms
Cryptographic algorithms defined for use with IPsec include:
HMAC-SHA1/SHA2 for integrity protection and authenticity.
TripleDES-CBC for confidentiality
AES-CBC and AES-CTR for confidentiality.
AES-GCM and ChaCha20-Poly1305 providing confidentiality and authentication together efficiently.
Refer to RFC 8221 for details.
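As a generic illustration of the AEAD approach used by AES-GCM and ChaCha20-Poly1305, the Python sketch below encrypts and authenticates a payload while separately authenticating header data. It assumes the third-party cryptography package is installed, and it is not wire-format ESP, which also involves a per-SA salt and a per-packet IV construction (RFC 4106):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)                        # 96-bit nonce, never reused with a key
plaintext = b"inner IP packet bytes"
aad = b"ESP header: SPI + sequence number"    # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, plaintext, aad)
assert aead.decrypt(nonce, ciphertext, aad) == plaintext
print(len(ciphertext) - len(plaintext))       # 16-byte authentication tag appended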
Key exchange algorithms
Diffie–Hellman (RFC 3526)
ECDH (RFC 4753)
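The Diffie–Hellman exchange at the heart of IKE can be demonstrated with deliberately tiny toy parameters, as in the Python sketch below. Real deployments use the large MODP groups of RFC 3526 or elliptic-curve groups; a prime this small offers no security and is only meant to show that both sides arrive at the same shared secret:

import secrets

p = 0xFFFFFFFB   # a small prime (2**32 - 5), toy value for illustration only
g = 5

a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value

A = pow(g, a, p)                   # public values exchanged in the clear
B = pow(g, b, p)

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both sides derive the same secret
print(hex(shared_alice))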
Authentication algorithms
RSA
ECDSA (RFC 4754)
PSK (RFC 6617)
Implementations
The IPsec can be implemented in the IP stack of an operating system, which requires modification of the source code. This method of implementation is done for hosts and security gateways. Various IPsec capable IP stacks are available from companies, such as HP or IBM. An alternative is so called bump-in-the-stack (BITS) implementation, where the operating system source code does not have to be modified. Here IPsec is installed between the IP stack and the network drivers. This way operating systems can be retrofitted with IPsec. This method of implementation is also used for both hosts and gateways. However, when retrofitting IPsec the encapsulation of IP packets may cause problems for the automatic path MTU discovery, where the maximum transmission unit (MTU) size on the network path between two IP hosts is established. If a host or gateway has a separate cryptoprocessor, which is common in the military and can also be found in commercial systems, a so-called bump-in-the-wire (BITW) implementation of IPsec is possible.
When IPsec is implemented in the kernel, the key management and ISAKMP/IKE negotiation is carried out from user space. The NRL-developed and openly specified "PF_KEY Key Management API, Version 2" is often used to enable the application-space key management application to update the IPsec Security Associations stored within the kernel-space IPsec implementation. Existing IPsec implementations usually include ESP, AH, and IKE version 2. Existing IPsec implementations on UNIX-like operating systems, for example, Solaris or Linux, usually include PF_KEY version 2.
Embedded IPsec can be used to ensure the secure communication among applications running over constrained resource systems with a small overhead.
Standards status
IPsec was developed in conjunction with IPv6 and was originally required to be supported by all standards-compliant implementations of IPv6 before RFC 6434 made it only a recommendation. IPsec is also optional for IPv4 implementations. IPsec is most commonly used to secure IPv4 traffic.
IPsec protocols were originally defined in RFC 1825 through RFC 1829, which were published in 1995. In 1998, these documents were superseded by RFC 2401 and RFC 2412 with a few incompatible engineering details, although they were conceptually identical. In addition, a mutual authentication and key exchange protocol Internet Key Exchange (IKE) was defined to create and manage security associations. In December 2005, new standards were defined in RFC 4301 and RFC 4309 which are largely a superset of the previous editions with a second version of the Internet Key Exchange standard IKEv2. These third-generation documents standardized the abbreviation of IPsec to uppercase “IP” and lowercase “sec”. “ESP” generally refers to RFC 4303, which is the most recent version of the specification.
Since mid-2008, an IPsec Maintenance and Extensions (ipsecme) working group is active at the IETF.
Alleged NSA interference
In 2013, as part of Snowden leaks, it was revealed that the US National Security Agency had been actively working to "Insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" as part of the Bullrun program. There are allegations that IPsec was a targeted encryption system.
The OpenBSD IPsec stack came later on and also was widely copied. In a letter which OpenBSD lead developer Theo de Raadt received on 11 Dec 2010 from Gregory Perry, it is alleged that Jason Wright and others, working for the FBI, inserted "a number of backdoors and side channel key leaking mechanisms" into the OpenBSD crypto code. In the forwarded email from 2010, Theo de Raadt did not at first express an official position on the validity of the claims, apart from the implicit endorsement from forwarding the email. Jason Wright's response to the allegations: "Every urban legend is made more real by the inclusion of real names, dates, and times. Gregory Perry's email falls into this category. … I will state clearly that I did not add backdoors to the OpenBSD operating system or the OpenBSD crypto framework (OCF)." Some days later, de Raadt commented that "I believe that NETSEC was probably contracted to write backdoors as alleged. … If those were written, I don't believe they made it into our tree." This was published before the Snowden leaks.
An alternative explanation put forward by the authors of the Logjam attack suggests that the NSA compromised IPsec VPNs by undermining the Diffie-Hellman algorithm used in the key exchange. In their paper, they allege the NSA specially built a computing cluster to precompute multiplicative subgroups for specific primes and generators, such as for the second Oakley group defined in RFC 2409. As of May 2015, 90% of addressable IPsec VPNs supported the second Oakley group as part of IKE. If an organization were to precompute this group, they could derive the keys being exchanged and decrypt traffic without inserting any software backdoors.
A second alternative explanation that was put forward was that the Equation Group used zero-day exploits against several manufacturers' VPN equipment which were validated by Kaspersky Lab as being tied to the Equation Group and validated by those manufacturers as being real exploits, some of which were zero-day exploits at the time of their exposure. The Cisco PIX and ASA firewalls had vulnerabilities that were used for wiretapping by the NSA.
Furthermore, IPsec VPNs using "Aggressive Mode" settings send a hash of the PSK in the clear. This can be and apparently is targeted by the NSA using offline dictionary attacks.
IETF documentation
Standards track
: The ESP DES-CBC Transform
: The Use of HMAC-MD5-96 within ESP and AH
: The Use of HMAC-SHA-1-96 within ESP and AH
: The ESP DES-CBC Cipher Algorithm With Explicit IV
: The NULL Encryption Algorithm and Its Use With IPsec
: The ESP CBC-Mode Cipher Algorithms
: The Use of HMAC-RIPEMD-160-96 within ESP and AH
: More Modular Exponential (MODP) Diffie-Hellman groups for Internet Key Exchange (IKE)
: The AES-CBC Cipher Algorithm and Its Use with IPsec
: Using Advanced Encryption Standard (AES) Counter Mode With IPsec Encapsulating Security Payload (ESP)
: Negotiation of NAT-Traversal in the IKE
: UDP Encapsulation of IPsec ESP Packets
: The Use of Galois/Counter Mode (GCM) in IPsec Encapsulating Security Payload (ESP)
: Security Architecture for the Internet Protocol
: IP Authentication Header
: IP Encapsulating Security Payload
: Extended Sequence Number (ESN) Addendum to IPsec Domain of Interpretation (DOI) for Internet Security Association and Key Management Protocol (ISAKMP)
: Cryptographic Algorithms for Use in the Internet Key Exchange Version 2 (IKEv2)
: Cryptographic Suites for IPsec
: Using Advanced Encryption Standard (AES) CCM Mode with IPsec Encapsulating Security Payload (ESP)
: The Use of Galois Message Authentication Code (GMAC) in IPsec ESP and AH
: IKEv2 Mobility and Multihoming Protocol (MOBIKE)
: Online Certificate Status Protocol (OCSP) Extensions to IKEv2
: Using HMAC-SHA-256, HMAC-SHA-384, and HMAC-SHA-512 with IPsec
: The Internet IP Security PKI Profile of IKEv1/ISAKMP, IKEv2, and PKIX
: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile
: Using Authenticated Encryption Algorithms with the Encrypted Payload of the Internet Key Exchange version 2 (IKEv2) Protocol
: Better-Than-Nothing Security: An Unauthenticated Mode of IPsec
: Modes of Operation for Camellia for Use with IPsec
: Redirect Mechanism for the Internet Key Exchange Protocol Version 2 (IKEv2)
: Internet Key Exchange Protocol Version 2 (IKEv2) Session Resumption
: IKEv2 Extensions to Support Robust Header Compression over IPsec
: IPsec Extensions to Support Robust Header Compression over IPsec
: Internet Key Exchange Protocol Version 2 (IKEv2)
: Cryptographic Algorithm Implementation Requirements and Usage Guidance for Encapsulating Security Payload (ESP) and Authentication Header (AH)
: Internet Key Exchange Protocol Version 2 (IKEv2) Message Fragmentation
: Signature Authentication in the Internet Key Exchange Version 2 (IKEv2)
: ChaCha20, Poly1305, and Their Use in the Internet Key Exchange Protocol (IKE) and IPsec
Experimental RFCs
: Repeated Authentication in Internet Key Exchange (IKEv2) Protocol
Informational RFCs
: PF_KEY Interface
: The OAKLEY Key Determination Protocol
: A Traffic-Based Method of Detecting Dead Internet Key Exchange (IKE) Peers
: IPsec-Network Address Translation (NAT) Compatibility Requirements
: Design of the IKEv2 Mobility and Multihoming (MOBIKE) Protocol
: Requirements for an IPsec Certificate Management Profile
: Problem and Applicability Statement for Better-Than-Nothing Security (BTNS)
: Integration of Robust Header Compression over IPsec Security Associations
: Using Advanced Encryption Standard Counter Mode (AES-CTR) with the Internet Key Exchange version 02 (IKEv2) Protocol
: IPsec Cluster Problem Statement
: IPsec and IKE Document Roadmap
: Suite B Cryptographic Suites for IPsec
: Suite B Profile for Internet Protocol Security (IPsec)
: Secure Password Framework for Internet Key Exchange Version 2 (IKEv2)
Best current practice RFCs
: Guidelines for Specifying the Use of IPsec Version 2
Obsolete/historic RFCs
: Security Architecture for the Internet Protocol (obsoleted by RFC 2401)
: IP Authentication Header (obsoleted by RFC 2402)
: IP Encapsulating Security Payload (ESP) (obsoleted by RFC 2406)
: IP Authentication using Keyed MD5 (historic)
: Security Architecture for the Internet Protocol (IPsec overview) (obsoleted by RFC 4301)
: IP Encapsulating Security Payload (ESP) (obsoleted by RFC 4303 and RFC 4305)
: The Internet IP Security Domain of Interpretation for ISAKMP (obsoleted by RFC 4306)
: The Internet Key Exchange (obsoleted by RFC 4306)
: Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH) (obsoleted by RFC 4835)
: Internet Key Exchange (IKEv2) Protocol (obsoleted by RFC 5996)
: IKEv2 Clarifications and Implementation Guidelines (obsoleted by RFC 7296)
: Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH) (obsoleted by RFC 7321)
: Internet Key Exchange Protocol Version 2 (IKEv2) (obsoleted by RFC 7296)
See also
Dynamic Multipoint Virtual Private Network
Information security
NAT traversal
Opportunistic encryption
tcpcrypt
References
External links
All IETF active security WGs
IETF ipsecme WG ("IP Security Maintenance and Extensions" Working Group)
IETF btns WG ("Better-Than-Nothing Security" Working Group) (chartered to work on unauthenticated IPsec, IPsec APIs, connection latching)
Securing Data in Transit with IPsec WindowsSecurity.com article by Deb Shinder
IPsec on Microsoft TechNet
Microsoft IPsec Diagnostic Tool on Microsoft Download Center
An Illustrated Guide to IPsec by Steve Friedl
Security Architecture for IP (IPsec) Data Communication Lectures by Manfred Lindner Part IPsec
Creating VPNs with IPsec and SSL/TLS Linux Journal article by Rami Rosen
Cryptographic protocols
Internet protocols
Network layer protocols
Tunneling protocols |
8977310 | https://en.wikipedia.org/wiki/ManyCam | ManyCam | ManyCam is an application program that allows users to use their webcam with multiple different video chat and video streaming applications simultaneously for Windows and MacOS computers. Users can also add live graphics effects and filters to video feeds. ManyCam is available for annual or lifetime licensing in different versions. It was previously published as freeware. ManyCam also publishes mobile apps.
ManyCam uses a webcam or video camera as its input and then presents itself to other applications as an alternative video source. Because of this, ManyCam works with nearly all chat software that can use alternative video sources.
On October 2, 2013, ManyCam LLC was acquired by Visicom Media, a developer of Internet applications.
See also
Comparison of webcam software
Comparison of screencasting software
References
External links
Freemium
Video software
Webcams
Screencasting_software |
15481427 | https://en.wikipedia.org/wiki/Cisco%20NX-OS | Cisco NX-OS | NX-OS is a network operating system for the Nexus-series Ethernet switches and MDS-series Fibre Channel storage area network switches made by Cisco Systems. It evolved from the Cisco operating system SAN-OS, originally developed for its MDS switches.
It is based on Wind River Linux and is inter-operable with other Cisco operating systems. The command-line interface of NX-OS is similar to that of Cisco IOS.
Recent NX-OS releases have both the Cisco-style CLI and a Bash shell available. On NX-OS 7.0(3)I3, the output of uname with the -a command-line argument might look like the text below:
$ uname -a
Linux version 3.4.91-WR5.0.1.13_standard+ (divvenka@sjc-ads-7035) (gcc version 4.6.3 (Wind River Linux Sourcery CodeBench 4.6-60) ) #1 SMP Tue Feb 6 12:43:13 PST 2018
Core features
System Manager (sysmgr)
Persistent Storage Service (PSS)
Message & Transaction Services (MTS)
Additional features
Fibre Channel and FICON
FCIP
FCoE (Nexus 5000/7000 linecards)
iSCSI
IPsec
Scheduling
NPIV NX Port ID Virtualization
Inter–VSAN Routing
VSAN
Zoning (Hard zoning)
Callhome
Cisco Fabric Services (distributed configuration)
SSH and Telnet
Storage Media Encryption
Port Channels
Cisco Data Mobility Manager
Fibre Channel Write Acceleration
Switches running NX-OS
Nexus B22 (HP, Dell, Fujitsu)
Nexus 9000 series
Nexus 7700 series
Nexus 7000 series
Nexus 6000 series
Nexus 5000 series
Nexus 4000 (for IBM BladeCenter)
Nexus 2000 series
Nexus 3000
Nexus 1000V
MDS 9700 FC Directors
MDS 9500 FC Directors
MDS 9250i FC Switch
MDS 9222i FC Switch
MDS 9100 FC Switches
Differences between IOS and NX-OS
NX-OS does not support the login command to switch users.
NX-OS does not distinguish between standard and extended access lists; all lists are named and "extended" in functionality.
NX-OS did not support an SCP server prior to the 5.1(1) release.
In NX-OS, there is no "write" command to save the configuration as on IOS; the "copy" command is used instead, although command aliases can be created to provide a "write" shortcut.
When accessing NX-OS, users authenticate directly to their assigned privilege level.
SSH server is enabled while Telnet server is disabled by default in NX-OS.
Releases
4.1, 4.2, 5.0, 5.1, 5.2, 6.0, 6.1, 6.2, 7.0, 9.2, 9.3, 10.1
See also
Cisco IOS
Cisco IOS XE
Cisco IOS XR
FTOS – competitor Force10's operating system FTOS
References
External links
intro
data sheet
Proprietary operating systems
Network operating systems
Cisco software |
127510 | https://en.wikipedia.org/wiki/Music%20sequencer | Music sequencer | A music sequencer (or audio sequencer or simply sequencer) is a device or application software that can record, edit, or play back music, by handling note and performance information in several forms, typically CV/Gate, MIDI, or Open Sound Control (OSC), and possibly audio and automation data for DAWs and plug-ins.
Overview
Modern sequencers
The advent of Musical Instrument Digital Interface (MIDI) and the Atari ST home computer in the 1980s gave programmers the opportunity to design software that could more easily record and play back sequences of notes played or programmed by a musician. This software also improved on the quality of the earlier sequencers, which tended to be mechanical sounding and were only able to play back notes of exactly equal duration. Software-based sequencers allowed musicians to program performances that were more expressive and more human. These new sequencers could also be used to control external synthesizers, especially rackmounted sound modules, and it was no longer necessary for each synthesizer to have its own dedicated keyboard.
As the technology matured, sequencers gained more features, such as the ability to record multitrack audio. Sequencers used for audio recording are called digital audio workstations (or DAWs).
Many modern sequencers can be used to control virtual instruments implemented as software plug-ins. This allows musicians to replace expensive and cumbersome standalone synthesizers with their software equivalents.
Today the term "sequencer" is often used to describe software. However, hardware sequencers still exist. Workstation keyboards have their own proprietary built-in MIDI sequencers. Drum machines and some older synthesizers have their own step sequencer built in. There are still also standalone hardware MIDI sequencers, although the market demand for those has diminished greatly due to the greater feature set of their software counterparts.
Types of music sequencer
Music sequencers can be categorized by the types of data they handle, such as:
MIDI data on the MIDI sequencers (implemented as hardware or software)
CV/Gate data on the analog sequencers and possibly others (via CV/Gate interfaces)
Automation data for mixing-automation on the DAWs, and the software effect / instrument plug-ins on the DAWs with sequencing features
Audio data on audio sequencers, including DAWs, loop-based music software, etc., or on phrase samplers, including groove machines, etc.
Alternative subsets of audio sequencers include:
Music sequencers can also be categorized by their construction and supported modes.
Realtime sequencer (realtime recording mode)
Realtime sequencers record musical notes in real time, as audio recorders do, and play them back with the designated tempo, quantization, and pitch. For editing, "punch in/punch out" features originating in tape recording are usually provided, although they require sufficient skill to obtain the desired result; for detailed editing, a visual editing mode under a graphical user interface may be more suitable. Overall, this mode provides usability similar to that of audio recorders already familiar to musicians, and it is widely supported on software sequencers, DAWs, and built-in hardware sequencers.
Analog sequencer
Analog sequencers are typically implemented with analog electronics, and play the musical notes designated by a series of knobs or sliders corresponding to each musical note (step). They are designed for both composition and live performance; users can change the musical notes at any time without regard to a recording mode, and the time interval between notes (the length of each step) may also be independently adjustable. Typically, analog sequencers are used to generate repeated minimalistic phrases reminiscent of Tangerine Dream, Giorgio Moroder or trance music.
Step sequencer (step recording mode)
On step sequencers, musical notes are rounded into steps of equal time intervals, and users can enter each musical note without exact timing; instead, the timing and duration of each step can be designated in several different ways:
On drum machines: select a trigger timing from a row of step buttons.
On bass machines: select a step note (or rest) from a chromatic keypad, then select a step duration (or tie) from a group of length buttons, sequentially.
On several home keyboards: in addition to the realtime sequencer, a pair of step-trigger buttons is provided; using these, notes of a pre-recorded sequence can be triggered at arbitrary times, for timing-dedicated recordings or performances. (See List of music sequencers#Step sequencers (supported on).)
In general, step mode, along with roughly quantized semi-realtime mode, is often supported on the drum machines, bass machines and several groove machines.
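As an illustration of the step grid described above, the following Python sketch walks a 16-step drum pattern, one step per 1/16 of a measure; the pattern, instrument names and the play helper are invented for the example and do not correspond to any particular machine.

import time

# Minimal 16-step drum pattern, one row per instrument.
# A 1 means the step triggers the sound, a 0 means silence.
PATTERN = {
    "kick":  [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    "snare": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    "hat":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
}

def play(pattern, bars=1, bpm=120):
    """Walk the grid step by step; each step is one sixteenth note."""
    step_seconds = 60.0 / bpm / 4            # four sixteenth notes per beat
    steps = len(next(iter(pattern.values())))
    for _ in range(bars):
        for step in range(steps):
            hits = [name for name, row in pattern.items() if row[step]]
            print(f"step {step:2d}: {' + '.join(hits) if hits else '-'}")
            time.sleep(step_seconds)

play(PATTERN)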
Software sequencer
A software sequencer is a class of application software providing the functionality of a music sequencer, often provided as one feature of a DAW or an integrated music authoring environment. The features provided vary widely depending on the software; even an analog sequencer can be simulated. The user may control a software sequencer either through a graphical user interface or with a specialized input device, such as a MIDI controller.
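As an illustration of the note and timing data that such software manipulates, the following Python sketch writes a short four-note sequence to a standard MIDI file using the third-party mido library; the notes, tempo and file name are arbitrary choices for the example and are not tied to any particular product.

# A four-note arpeggio written to a standard MIDI file with mido.
from mido import MidiFile, MidiTrack, Message, MetaMessage, bpm2tempo

mid = MidiFile(ticks_per_beat=480)           # 480 ticks per quarter note
track = MidiTrack()
mid.tracks.append(track)
track.append(MetaMessage('set_tempo', tempo=bpm2tempo(120)))

for note in (60, 64, 67, 72):                # C4, E4, G4, C5
    track.append(Message('note_on', note=note, velocity=64, time=0))
    track.append(Message('note_off', note=note, velocity=64, time=480))

mid.save('arpeggio.mid')                     # playable by any MIDI player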
Typical features on software sequencers
History
Early sequencers
The early music sequencers were sound producing devices such as automatic musical instruments, music boxes, mechanical organs, player pianos, and Orchestrions. Player pianos, for example, had much in common with contemporary sequencers. Composers or arrangers transmitted music to piano rolls which were subsequently edited by technicians who prepared the rolls for mass duplication. Eventually consumers were able to purchase these rolls and play them back on their own player pianos.
The origin of automatic musical instruments seems remarkably old. As early as the 9th century, the Persian (Iranian) Banū Mūsā brothers invented a hydropowered organ using exchangeable cylinders with pins, and also an automatic flute playing machine using steam power, as described in their Book of Ingenious Devices. The Banu Musa brothers' automatic flute player was the first programmable music sequencer device, and the first example of repetitive music technology, powered by hydraulics.
In 1206, Al-Jazari, an Arab engineer, invented programmable musical automata, a "robot band" which performed "more than fifty facial and body actions during each musical selection." It was notably the first programmable drum machine. Among the four automaton musicians were two drummers. It was a drum machine where pegs (cams) bump into little levers that operated the percussion. The drummers could be made to play different rhythms and different drum patterns if the pegs were moved around.
In the 14th century, rotating cylinders with pins were used to play a carillon in Flanders, and at least by the 15th century, barrel organs were seen in the Netherlands.
In the late-18th or early-19th century, with technological advances of the Industrial Revolution various automatic musical instruments were invented. Some examples: music boxes, barrel organs and barrel pianos consisting of a barrel or cylinder with pins or a flat metal disc with punched holes; or mechanical organs, player pianos and orchestrions using book music / music rolls (piano rolls) with punched holes, etc. These instruments were disseminated widely as popular entertainment devices prior to the inventions of phonographs, radios, and sound films which eventually eclipsed all such home music production devices.
Of these, punched-paper-tape media remained in use until the mid-20th century. The earliest programmable music synthesizers, including the RCA Mark II Sound Synthesizer in 1957 and the Siemens Synthesizer in 1959, were also controlled via punched tapes similar to piano rolls.
Additional inventions grew out of sound film audio technology. The drawn sound technique which appeared in the late 1920s, is notable as a precursor of today's intuitive graphical user interfaces. In this technique, notes and various sound parameters are triggered by hand-drawn black ink waveforms directly upon the film substrate, hence they resemble piano rolls (or the 'strip charts' of the modern sequencers/DAWs). Drawn soundtrack was often used in early experimental electronic music, including the Variophone developed by Yevgeny Sholpo in 1930, and the Oramics designed by Daphne Oram in 1957, and so forth.
Analog sequencers
During the 1940s–1960s, Raymond Scott, an American composer of electronic music, invented various kinds of music sequencers for his electronic compositions. The "Wall of Sound", which covered a wall of his New York studio during the 1940s–1950s, was an electro-mechanical sequencer for producing rhythmic patterns, consisting of stepping relays (as used in dial-pulse telephone exchanges), solenoids, control switches, and tone circuits with 16 individual oscillators. Later, Robert Moog would describe it in such terms as "the whole room would go 'clack - clack - clack', and the sounds would come out all over the place".
The Circle Machine, developed in 1959, had incandescent bulbs, each with its own rheostat, arranged in a ring, and a rotating arm with a photocell scanning over the ring to generate an arbitrary waveform. The rotating speed of the arm was controlled via the brightness of the lights, and as a result, arbitrary rhythms were generated.
The first electronic sequencer was invented by Raymond Scott, using thyratrons and relays.
The Clavivox, developed from 1952, was a kind of keyboard synthesizer with a sequencer. On its prototype, a theremin manufactured by the young Robert Moog was used to enable portamento over a three-octave range; on a later version, it was replaced by a pair of photographic film and a photocell for controlling the pitch by voltage.
In 1968 Ralph Lundsten and Leo Nilsson had a polyphonic synthesizer with sequencer called Andromatic built for them by Erkki Kurenniemi.
Step sequencers
The step sequencers played rigid patterns of notes using a grid of (usually) 16 buttons, or steps, each step being 1/16 of a measure. These patterns of notes were then chained together to form longer compositions. Sequencers of this kind are still in use, mostly built into drum machines and grooveboxes. They are monophonic by nature, although some are multi-timbral, meaning that they can control several different sounds but only play one note on each of those sounds.
Early computers
On the other hand, software sequencers have been used continuously since the 1950s in the context of computer music, including computer-played music (software sequencers), computer-composed music (music synthesis), and computer sound generation (sound synthesis). In June 1951, the first computer-played music, "Colonel Bogey", was performed on CSIRAC, Australia's first digital computer. In 1956, Lejaren Hiller at the University of Illinois at Urbana–Champaign wrote one of the earliest programs for computer music composition on ILLIAC, and collaborated on the first such piece, Illiac Suite for String Quartet, with Leonard Isaacson. In 1957, Max Mathews at Bell Labs wrote MUSIC, the first widely used program for sound generation, and a 17-second composition was performed by the IBM 704 computer.
In Japan, experiments in computer music date back to 1962, when Keio University professor Sekine and Toshiba engineer Hayashi experimented with the TOSBAC computer. This resulted in a piece entitled TOSBAC Suite.
In 1965, Mathews and L. Rosler developed Graphic 1, an interactive graphical sound system (implying a sequencer) on which one could draw figures using a light pen that would be converted into sound, simplifying the process of composing computer-generated music. It used a PDP-5 minicomputer for data input and an IBM 7094 mainframe computer for rendering sound. In 1970, Mathews and F. R. Moore developed the GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment) system, the first fully developed music synthesis system for interactive composition (implying a sequencer) and realtime performance, using 3C/Honeywell DDP-24 (or DDP-224) minicomputers. It used a CRT display to simplify the management of music synthesis in realtime, 12-bit D/A conversion for realtime sound playback, an interface for analog devices, and several controllers including a musical keyboard, knobs, and rotating joysticks to capture realtime performance.
Digital sequencers
In 1971, Electronic Music Studios (EMS) released one of the first digital sequencer products as a module of the Synthi 100, and its derivative, the Synthi Sequencer series.
Oberheim then released the DS-2 Digital Sequencer in 1974, and Sequential Circuits released the Model 800 in 1977.
In 1975, New England Digital (NED) released the ABLE computer (a microcomputer) as a dedicated data-processing unit for the Dartmouth Digital Synthesizer (1973); the later Synclavier series was developed on this basis.
The Synclavier I, released in September 1977, was one of the earliest digital music workstation products with a multitrack sequencer. The Synclavier series evolved from the late 1970s to the mid-1980s, and it also established the integration of digital audio and music sequencing, with its Direct-to-Disk option in 1984 and the later Tapeless Studio system.
In 1982, Fairlight released the renewed CMI Series II and added new sequencer software, "Page R", which combined step sequencing with sample playback.
Yamaha's GS-1, their first FM digital synthesizer, was released in 1980. To program the synthesizer, Yamaha built a custom computer workstation. It was only available at Yamaha's headquarters in Japan (Hamamatsu) and the United States (Buena Park).
While there were earlier microprocessor-based sequencers for digital polyphonic synthesizers, these early products tended to favour newer internal digital buses over the old-style analogue CV/Gate interface once used on their prototype systems. In the early 1980s, manufacturers recognized the continuing need for CV/Gate interfaces and supported them, along with MIDI, as options.
In 1977, Roland Corporation released the MC-8 Microcomposer, also called a computer music composer by Roland. It was an early stand-alone, microprocessor-based, digital CV/Gate sequencer, and an early polyphonic sequencer. It was equipped with a keypad to enter notes as numeric codes, 16 KB of RAM for a maximum of 5200 notes (large for the time), and a polyphony function which allocated multiple pitch CVs to a single Gate. It was capable of eight-channel polyphony, allowing the creation of polyrhythmic sequences. The MC-8 had a significant impact on popular electronic music, with the MC-8 and its descendants (such as the Roland MC-4 Microcomposer) influencing popular electronic music production in the 1970s and 1980s more than any other family of sequencers. The MC-8's earliest known users were Yellow Magic Orchestra in 1978.
In June 1981, Roland Corporation founder Ikutaro Kakehashi proposed the concept of standardization between different manufacturers' instruments as well as computers, to Oberheim Electronics founder Tom Oberheim and Sequential Circuits president Dave Smith. In October 1981, Kakehashi, Oberheim and Smith discussed the concept with representatives from Yamaha, Korg and Kawai. In 1983, the MIDI standard was unveiled by Kakehashi and Smith. The first MIDI sequencer was the Roland MSQ-700, released in 1983.
It was not until the advent of MIDI that general-purpose computers started to play a role as sequencers. Following the widespread adoption of MIDI, computer-based MIDI sequencers were developed. MIDI-to-CV/Gate converters were then used to enable analogue synthesizers to be controlled by a MIDI sequencer. Since its introduction, MIDI has remained the musical instrument industry standard interface through to the present day.
In 1978, Japanese personal computers such as the Hitachi Basic Master were equipped with low-bit D/A converters to generate sound, which could be sequenced using Music Macro Language (MML). This was used to produce chiptune video game music.
It was not until the advent of MIDI, introduced to the public in 1983, that general-purpose computers really started to play a role as software sequencers. NEC's personal computers, the PC-88 and PC-98, added support for MIDI sequencing with MML programming in 1982. In 1983, Yamaha modules for the MSX featured music production capabilities, real-time FM synthesis with sequencing, MIDI sequencing, and a graphical user interface for the software sequencer. Also in 1983, Roland Corporation's CMU-800 sound module introduced music synthesis and sequencing to the PC, Apple II, and Commodore 64.
The spread of MIDI on personal computers was facilitated by Roland's MPU-401, released in 1984. It was the first MIDI-equipped PC sound card, capable of MIDI sound processing and sequencing. After Roland sold MPU sound chips to other sound card manufacturers, it established a universal standard MIDI-to-PC interface. Following the widespread adoption of MIDI, computer-based MIDI software sequencers were developed.
In 1987, software sequencers called trackers were developed to realize the low-cost integration of sampling sound and interactive digital sequencer as seen on Fairlight CMI II "Page R". They became popular in the 1980s and 1990s as simple sequencers for creating computer game music, and remain popular in the demoscene and chiptune music.
Visual timeline of rhythm sequencers
See also
List of music sequencers – related article split from this article
List of music software
Tracker (music software)
Music workstation
Groovebox
Combination action#Sequencers (for organs)
Notes
References
Further reading
List of papers sharing a similar perspective with this Wikipedia article:
Note: although this conference paper emphasized the "Ace Tone FR-1 Rhythm Ace", it is not a music sequencer nor first drum machine product.
External links
(1974 newspaper article about digital sequencer)
Electronic musical instruments
MIDI
Music software
Sound production technology
Synthesiser modules
Articles containing video clips
Iranian inventions |
12352696 | https://en.wikipedia.org/wiki/Channel%20router | Channel router | A channel router is a specific variety of router for integrated circuits. Normally using two layers of interconnect, it must connect the specified pins on the top and bottom of the channel. Specified nets must also be brought out to the left and right of the channel, but may be brought out in any order. The height of the channel is not specified - the router computes what height is needed.
The density of a channel, defined for every x within the channel, is the number of nets that appear on both the left and right of a vertical line at that x. The maximum density is a lower bound on the height of the channel. A "cyclic constraint" occurs when two nets have pins in the same columns, but in opposite top/bottom order, in at least two columns. In the example shown, nets 1 and 3 suffer from cyclic constraints. This can only be solved by "doglegs", as shown on net 1 of the example.
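The density definition above can be illustrated with a short sketch. The following Python fragment computes the maximum density from a set of pin positions; the pin columns are invented for the example, and the code illustrates only the definition, not the algorithm of YACR or any other production router.

def channel_density(nets):
    """nets: dict mapping net id -> list of pin column positions.
    Returns the maximum channel density, a lower bound on the track count."""
    spans = [(min(cols), max(cols)) for cols in nets.values() if len(cols) > 1]
    columns = sorted({c for cols in nets.values() for c in cols})
    best = 0
    for x in columns:
        # Count nets whose horizontal span crosses the vertical line at x.
        d = sum(1 for lo, hi in spans if lo <= x <= hi)
        best = max(best, d)
    return best

# Example: three nets with pins in columns along the channel.
print(channel_density({1: [1, 4], 2: [2, 6], 3: [3, 5]}))  # prints 3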
Channel routers were one of the first forms of routers for integrated circuits, and were heavily used for many years, with YACR perhaps the best known program. However, modern chips have many more than 2 interconnect layers. Although the effort was made to extend channel routers to more layers, this approach was never very popular, since it did not work well with over-the-cell routing where pins are not movable. In recent years, area routers have in general taken over.
References
Digital electronics
Autorouters |
27576340 | https://en.wikipedia.org/wiki/Michael%20Luck%20%28computer%20scientist%29 | Michael Luck (computer scientist) | Prof. Michael Luck is a computer scientist based at the Department of Computer Science, King's College London, in central London, England. He leads the Agents and Intelligent Systems (AIS) section.
From 1993 to 2000, Michael Luck was based in the Department of Computer Science at the University of Warwick. From 2000 to 2006, Luck was a professor in the School of Electronics and Computer Science at the University of Southampton. He has led the AgentLink European Co-ordination Action for Agent-Based Computing.
Luck has undertaken research in the area of intelligent agents. He is the co-author of the books Understanding Agent Systems and Agent-Based Software Development.
See also
AgentSpeak, an agent-oriented programming language
Distributed Multi-Agent Reasoning System (dMARS), a platform for intelligent agents
References
External links
Michael Luck home page
Year of birth missing (living people)
Living people
British computer scientists
Computer science writers
Academics of the University of Warwick
Academics of the University of Southampton
Academics of King's College London |
23225253 | https://en.wikipedia.org/wiki/Release%20%28Sister%20Hazel%20album%29 | Release (Sister Hazel album) | Release is Sister Hazel's seventh studio album. It was released on August 18, 2009 through Croakin' Poets/Rock Ridge.
Unlike previous Sister Hazel albums, all of the band members contributed to the songwriting. According to Ryan Newell, the album got its name because they "Took a different approach on this record and 'released' the past method."
Track listing
"Release" (Ryan Newell, Emerson Hart, Pat McGee) - 3:51
"Take a Bow" (Newell, Mike Daly, McGee) - 3:00
"I Believe in You" (Andrew Copeland, Stan Lynch) - 2:51
"Run for the Hills" (Copeland, Britton Cameron) - 3:41
"Better Way" (Ken Kelly, Lindsey Kelly, Mark Trojanowski) - 3:56
"Walls and Cannonballs" (Ken Block) - 3:14
"Vacation Rain" (Jett Beres) - 3:51
"See Me Beautiful" (Block) - 4:06
"One Life" (Cameron, Copeland, Lynch) - 5:13
"Take It Back" (L. Kelly, K. Kelly, Trojanowski) - 3:49
"Fade" (Newell, Chuck Carrier) - 3:29
"Ghost in the Crowd" (Beres) - 5:07
Personnel
Ken Block - lead vocals, acoustic guitar
Jett Beres - bass, harmony vocals
Andrew Copeland - rhythm guitar, vocals
Ryan Newell - lead and slide guitar, harmony vocals
Mark Trojanowski - drums
References
2009 albums
Sister Hazel albums |
349634 | https://en.wikipedia.org/wiki/Hinckley%20and%20Bosworth | Hinckley and Bosworth | Hinckley and Bosworth is a local government district with borough status in south-western Leicestershire, England, administered by Hinckley and Bosworth Borough Council. Its only towns are Hinckley, Earl Shilton and Market Bosworth. Villages include Barwell, Burbage, Stoke Golding, Groby, Shackerstone and Twycross. The population of the Borough at the 2011 census was 105,078.
As of the 2019 local election, the council is controlled by the Liberal Democrats.
The district is broadly coterminous to the Bosworth parliamentary constituency, which is represented in Parliament by Luke Evans (Conservative).
The Borough was formed in 1974 by the merger of the Hinckley Urban District and the Market Bosworth Rural District less Ibstock. It was originally to be known as Bosworth, but the council changed its name on 20 November 1973, before it came into its powers. It was granted borough status in 1974.
Geography
There are a number of geographical features which shape the landscape of Hinckley & Bosworth.
Two large neighbouring urban areas lie to the south of the borough: Hinckley and Burbage and Barwell and Earl Shilton. A narrow green wedge separates the two conurbations, which is increasingly being occupied by leisure facilities such as the Marston's Stadium and a new leisure centre. To the east of the wedge lies Burbage Common and Woods, a large popular green recreational area.
The west of the borough is largely flat in nature, dominated by the River Sence flood plain. This area of the borough is largely rural, consisting of a number of very small villages and hamlets.
At the northern and eastern edges of the borough lie several settlements (including Bagworth, Desford, Groby, Markfield, Ratby and Thornton) which largely relate to Leicester; in particular the most northern villages have little to do with the main administrative centre of Hinckley. The northern area of the borough also forms part of Charnwood Forest, an area which it is hoped can be enhanced to provide an attractive natural resource.
Places of interest
The Geographical Centre of England is in the northwest of the borough at Lindley Hall Farm, near Fenny Drayton
Burbage Common and Woods is one of the largest recreation areas in the borough consisting of 80 hectares of fields, meadows and woodland
Hinckley Museum is in a range of 17th century timber-framed framework knitters' cottages.
The Ashby Canal, the longest contour canal in England, passes through the borough from Hinckley in the south of the borough through Stoke Golding, Dadlington, Market Bosworth and Shackerstone before heading north to its current terminus at Snarestone.
There is a large mill in Sheepy Magna to the west of the borough located on the River Sence
Stoke Golding has one of the most beautiful medieval churches in Leicestershire, with an exquisitely carved arcade and very fine 13th century window tracery.
The site of the Battle of Bosworth, administered by Leicestershire County Council, includes an interpretation centre at Ambion Hill, where Richard III encamped the night before the battle. St. James's Church at Dadlington is the place where many of the dead were buried and where a chantry was founded on their behalf.
The Battlefield Line is a preserved railway which runs over part of the alignment of the former railway from Nuneaton to Ashby-de-la-Zouch. It is home to the Shackerstone Diesel Group.
Twycross Zoo is notable for having the largest collection of primates in the world.
Thornton Reservoir is a former drinking water reservoir that is no longer in use.
A large collection of tropical birds is on display at Tropical Birdland near to Desford.
Railways
The only railway station in the borough on the National Rail network is Hinckley railway station on the South Leicestershire Line opened by the LNWR between 1862 and 1864. Currently there are direct services to Birmingham New Street and Leicester only with additional services to/from Cambridge and Stansted Airport in the peak.
There was also a branch line serving the market town of Market Bosworth, which connected Nuneaton and Hinckley to Coalville and Ashby. The line closed to regular traffic in 1970 and is now part of the Battlefield Line. A short stub towards Hinckley was built but never opened or used, and there was also a stub to Nuneaton via Stoke Golding.
The last line that runs through part of the borough is the Leicester to Burton line, which had a station in Desford; the station closed in 1964, but the line remains open for traffic. The station also served as a junction for the branch line to Leicester West Bridge on the now-defunct Leicester and Swannington Railway, and the section from Desford to Swannington remains open for freight traffic.
Demographics
Hinckley and Bosworth is the second largest borough by population in Leicestershire and has seen significant population growth over recent decades, a trend forecast to continue at least into the short to medium term.
Political control
Like many other shire districts, authority over Hinckley and Bosworth is shared between the district council and the county council. Areas of responsibility of the district council include local planning, building control, council housing, refuse collection, recycling, and some leisure services and parks.
The district council is made up of 34 councillors who are elected every four years; the last election took place in May 2019. The council is currently under control of the Liberal Democrats who took control from the Conservatives at that election.
The current composition of the council is as follows:
Parishes
Bagworth and Thornton, Barlestone, Barwell (Re-created in 2007), Burbage
Cadeby, Carlton
Desford
Earl Shilton (a town council)
Groby
Higham on the Hill
Market Bosworth, Markfield
Nailstone, Newbold Verdon
Osbaston
Peckleton (including the villages of Kirkby Mallory and Stapleton)
Ratby
Shackerstone, Sheepy, Stanton-under-Bardon, Stoke Golding, Sutton Cheney
Twycross
Witherley
Arms
References
External links
Hinckley and Bosworth Borough Council's Website
Hinckley Past and Present
The Burbage carnival supports local charities every year
Non-metropolitan districts of Leicestershire |
3420710 | https://en.wikipedia.org/wiki/Windows%20Media%20Encoder | Windows Media Encoder | Windows Media Encoder (WME) is a discontinued, freeware media encoder developed by Microsoft which enables content developers to convert or capture both live and prerecorded audio, video, and computer screen images to Windows Media formats for live and on-demand delivery. It is the successor of NetShow Encoder. The download page reports that it is not supported on Windows 7. WME has been replaced by a free version of Microsoft Expression Encoder. The Media 8 Encoding Utility is still listed. WME was available in both 32-bit and 64-bit versions.
Windows Media Encoder 9 can encode video using Windows Media Video version 7, 8 or 9. Audio encoding uses a number of Windows Media Audio version 9.2 or version 10 (if the version 10 codecs are installed) profiles and a Windows Media Audio 9 Voice speech codec. Content can also be created as uncompressed audio or video.
Windows Media Encoder 9 enables two-pass encoding to optimize quality for on-demand (streamed or download-and-play) content. It also supports variable bitrate (VBR) encoding for download-and-play scenarios. True VBR can be applied over the entire duration of a high-motion sequence, ensuring the highest quality. This version also enables scripted encoding with the wmcmd.vbs VBScript file allowing content developers to encode large numbers of prerecorded media files. Bundled with the program are the applications Windows Media File Editor, Windows Media Profile Editor, and Windows Media Stream Editor.
The GUI encoder application is actually a "wrapper" of the encoder itself. Developers can write their own applications using Visual Studio to perform the same functions found in the application. These applications can be used to automate audio and video production. An SDK is also available.
With the removal of Windows Media DRM in the Windows 10 Anniversary Update, Windows Media Encoder 9 is no longer compatible with the current version of Windows as of May 2017.
Versions
NetShow Encoder 3.0
NetShow Encoder 3.01 (comes with Powerpoint 2000)
Windows Media Encoder 4.0 (also as part of the Windows Media Tools) Windows Media Tools 4.1 was the last release for Windows 95 and Windows NT 4.0.
Windows Media Encoder 7.1 (for Windows 98, Windows Me and Windows 2000)
Windows Media 8 Encoding Utility (command-line) for Windows 98, Windows Me and Windows 2000
Windows Media Encoder 9
Windows Media Encoder x64 Edition (based on Windows Media 10 SDK)
Windows Media Encoder Studio Edition was a separate planned version of Windows Media Encoder 9 with support for segment encoding and multiple audio channels. After beta 1, it was eventually cancelled. Microsoft later released the commercial application, Expression Encoder as part of its Expression Studio suite.
See also
WMV HD
Windows Media Player
Windows Media Services
Windows Movie Maker
Microsoft Expression Encoder
Comparison of screencasting software
References
External links
Fix for issues with Windows Vista
Microsoft Expression Encoder
Softpedia - Windows Media Encoder
Streaming software
Screencasting software
Windows multimedia software
Windows-only freeware
Video editing software
Video conversion software |
40392400 | https://en.wikipedia.org/wiki/Automated%20Tropical%20Cyclone%20Forecasting%20System | Automated Tropical Cyclone Forecasting System | The Automated Tropical Cyclone Forecasting System (ATCF) is a piece of software originally developed to run on a personal computer for the Joint Typhoon Warning Center (JTWC) in 1988, and the National Hurricane Center (NHC) in 1990. ATCF remains the main piece of forecasting software used for the United States Government, including the JTWC, NHC, and Central Pacific Hurricane Center. Other tropical cyclone centers in Australia and Canada developed similar software in the 1990s. The data files with ATCF lie within three decks, known as the a-, b-, and f-decks. The a-decks include forecast information, the b-decks contain a history of center fixes at synoptic hours, and the f-decks include the various fixes made by various analysis center at various times. In the years since its introduction, it has been adapted to Unix and Linux platforms.
Reason for development
The need for a more modernized method for forecasting tropical cyclones had become apparent by the mid-1980s. At that time Department of Defense was using acetate, grease pencils, and disparate computer programs to forecast tropical cyclones. The ATCF software was developed by the Naval Research Laboratory for the Joint Typhoon Warning Center (JTWC) beginning in 1986, and used since 1988. During 1990 the system was adapted by the National Hurricane Center (NHC) for use at the NHC, National Centers for Environmental Prediction and the Central Pacific Hurricane Center. This provided the NHC with a multitasking software environment which allowed them to improve efficiency and cut the time required to make a forecast by 25% or 1 hour. ATCF was originally developed for use within DOS, before later being adapted to Unix and Linux.
System identification
Systems within ATCF are identified by a basin prefix (AL – North Atlantic Ocean, CP – Central North Pacific Ocean, EP – North-East Pacific Ocean, IO – North Indian Ocean, SH – Southern Hemisphere, SL – South Atlantic Ocean, WP – North-West Pacific Ocean), followed by a two-digit number between 01 and 49 for active tropical cyclones, which is incremented with each new system, and then the year associated with the system (e.g. EP202015 for Hurricane Patricia). Numbers from 50 through 79 after the basin acronym are used internally by the basin's respective Tropical Cyclone Warning Centers and Regional Specialized Meteorological Center. Numbers in the 80s are used for training purposes and can be reused. Numbers in the 90s are used for areas of interest, sometimes referred to as invests or areas of disturbed weather, and are also reused within any particular year. A system's status is listed in the following ways within the associated data files:
DB – disturbance,
TD – tropical depression,
TS – tropical storm,
TY – typhoon,
ST – super typhoon,
TC – tropical cyclone,
HU – hurricane,
SD – subtropical depression,
SS – subtropical storm,
EX – extratropical systems,
IN – inland,
DS – dissipating,
LO – low,
WV – tropical wave,
ET – extrapolated, and
XX – unknown.
Times are given in a four-digit year, month, day, and hour format.
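The identifier layout described above is simple to parse mechanically. The following Python sketch illustrates that layout; the helper name and the spelled-out basin labels are choices made for the example and are not part of the ATCF software itself.

import re

# Two-letter basin code, two-digit cyclone number, four-digit year.
BASINS = {"AL": "North Atlantic", "CP": "Central North Pacific",
          "EP": "Northeast Pacific", "IO": "North Indian Ocean",
          "SH": "Southern Hemisphere", "SL": "South Atlantic",
          "WP": "Northwest Pacific"}

def parse_atcf_id(atcf_id):
    m = re.fullmatch(r"([A-Z]{2})(\d{2})(\d{4})", atcf_id)
    if not m:
        raise ValueError("not an ATCF identifier: %r" % atcf_id)
    basin, number, year = m.group(1), int(m.group(2)), int(m.group(3))
    return {"basin": BASINS.get(basin, basin), "number": number, "year": year}

print(parse_atcf_id("EP202015"))
# {'basin': 'Northeast Pacific', 'number': 20, 'year': 2015}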
Data formats and locations in ATCF
The "A deck" contains the official track and intensity forecast, as well as the model guidance, also known as the objective aids. The "B deck" contains the storm's track information at synoptic hours (0000, 0600, 1200, and 1800 UTC). The "F deck" contains what are known as position fixes and intensity estimates for the associated tropical cyclone, based on satellite data on the cyclone derived by the Dvorak technique. The "E deck" contains position-error information and probabilistic information about the forecast at that time.
Similar software used elsewhere
In the 1990s, other countries developed similar tropical cyclone forecasting software. The Bureau of Meteorology in Australia developed the Australian Tropical Cyclone Workstation. The Canadian Hurricane Centre developed Canadian Hurricane Centre Forecaster's Workstation.
References
Tropical cyclone meteorology |
510114 | https://en.wikipedia.org/wiki/Base%20station | Base station | Base station (or base radio station) is – according to the International Telecommunication Union's (ITU) Radio Regulations (RR) – a "land station in the land mobile service."
The term is used in the context of mobile telephony, wireless computer networking and other wireless communications and in land surveying. In surveying, it is a GPS receiver at a known position, while in wireless communications it is a transceiver connecting a number of other devices to one another and/or to a wider area.
In mobile telephony, it provides the connection between mobile phones and the wider telephone network. In a computer network, it is a transceiver acting as a switch for computers in the network, possibly connecting them to a/another local area network and/or the Internet. In traditional wireless communications, it can refer to the hub of a dispatch fleet such as a taxi or delivery fleet, the base of a TETRA network as used by government and emergency services or a CB shack.
Land surveying
In the context of external land surveying, a base station is a GPS receiver at an accurately-known fixed location which is used to derive correction information for nearby portable GPS receivers. This correction data allows propagation and other effects to be corrected out of the position data obtained by the mobile stations, which gives greatly increased location precision and accuracy over the results obtained by uncorrected GPS receivers.
Computer networking
In the area of wireless computer networking, a base station is a radio receiver/transmitter that serves as the hub of the local wireless network, and may also be the gateway between a wired network and the wireless network. It typically consists of a low-power transmitter and wireless router.
Wireless communications
In radio communications, a base station is a wireless communications station installed at a fixed location and used to communicate as part of one of the following:
a push-to-talk two-way radio system, or;
a wireless telephone system such as cellular CDMA or GSM cell site.
Terrestrial Trunked Radio
Base stations use RF power amplifiers (radio-frequency power amplifiers) to transmit and receive signals. The most common RF power amplifiers are metal–oxide–semiconductor field-effect transistors (MOSFETs), particularly LDMOS (power MOSFET) RF amplifiers. RF LDMOS amplifiers replaced RF bipolar transistor amplifiers in most base stations during the 1990s, leading to the wireless revolution.
Two-way radio
Professional
In professional two-way radio systems, a base station is used to maintain contact with a dispatch fleet of hand-held or mobile radios, and/or to activate one-way paging receivers. The base station is one end of a communications link. The other end is a movable vehicle-mounted radio or walkie-talkie. Examples of base station uses in two-way radio include the dispatch of tow trucks and taxicabs.
Professional base station radios are often one channel. In lightly used base stations, a multi-channel unit may be employed. In heavily used systems, the capability for additional channels, where needed, is accomplished by installing an additional base station for each channel. Each base station appears as a single channel on the dispatch center control console. In a properly designed dispatch center with several staff members, this allows each dispatcher to communicate simultaneously, independently of one another, on a different channel as necessary. For example, a taxi company dispatch center may have one base station on a high-rise building in Boston and another on a different channel in Providence. Each taxi dispatcher could communicate with taxis in either Boston or Providence by selecting the respective base station on his or her console.
In dispatching centers it is common for eight or more radio base stations to be connected to a single dispatching console. Dispatching personnel can tell which channel a message is being received on by a combination of local protocol, unit identifiers, volume settings, and busy indicator lights. A typical console has two speakers identified as select and unselect. Audio from a primary selected channel is routed to the select speaker and to a headset. Each channel has a busy light which flashes when someone talks on the associated channel.
Base stations can be local controlled or remote controlled. Local controlled base stations are operated by front panel controls on the base station cabinet. Remote control base stations can be operated over tone- or DC-remote circuits. The dispatch point console and remote base station are connected by leased private line telephone circuits, (sometimes called RTO circuits), a DS-1, or radio links. The consoles multiplex transmit commands onto remote control circuits. Some system configurations require duplex, or four wire, audio paths from the base station to the console. Others require only a two-wire or half duplex link.
Interference could be defined as receiving any signal other than from a radio in your own system. To avoid interference from users on the same channel, or interference from nearby strong signals on another channel, professional base stations use a combination of:
minimum receiver specifications and filtering.
analysis of other frequencies in use nearby.
in the US, coordination of shared frequencies by coordinating agencies.
locating equipment so that terrain blocks interfering signals.
use of directional antennas to reduce unwanted signals.
Base stations are sometimes called control or fixed stations in US Federal Communications Commission licensing. These terms are defined in regulations inside Part 90 of the commissions regulations. In US licensing jargon, types of base stations include:
A fixed station is a base station used in a system intended only to communicate with other base stations. A fixed station can also be radio link used to operate a distant base station by remote control. (No mobile or hand-held radios are involved in the system.)
A control station is a base station used in a system with a repeater where the base station is used to communicate through the repeater.
A temporary base is a base station used in one location for less than a year.
A repeater is a type of base station that extends the range of hand-held and mobile radios.
Amateur and hobbyist use
In amateur radio, a base station also communicates with mobile rigs but for hobby or family communications. Amateur systems sometimes serve as dispatch radio systems during disasters, search and rescue mobilizations, or other emergencies.
An Australian UHF CB base station is another example of part of a system used for hobby or family communications.
Wireless telephone
Wireless telephones differ from two-way radios in that:
wireless telephones are circuit switched: the communications paths are set up by dialing at the start of a call and the path remains in place until one of the callers hangs up.
wireless telephones communicate with other telephones usually over the public switched telephone network.
A wireless telephone base station communicates with a mobile or hand-held phone. For example, in a wireless telephone system, the signals from one or more mobile telephones in an area are received at a nearby base station, which then connects the call to the land-line network. Other equipment is involved depending on the system architecture. Mobile telephone provider networks, such as European GSM networks, may involve carrier, microwave radio, and switching facilities to connect the call. In the case of a portable phone such as a US cordless phone, the connection is directly connected to a wired land line.
Emissions issues
While low levels of radio-frequency power are usually considered to have negligible effects on health, national and local regulations restrict the design of base stations to limit exposure to electromagnetic fields. Technical measures to limit exposure include restricting the radio frequency power emitted by the station, elevating the antenna above ground level, changes to the antenna pattern, and barriers to foot or road traffic. For typical base stations, significant electromagnetic energy is only emitted at the antenna, not along the length of the antenna tower.
Because mobile phones and their base stations are two-way radios, they produce radio-frequency (RF) radiation in order to communicate, exposing people near them to RF radiation giving concerns about mobile phone radiation and health. Hand-held mobile telephones are relatively low power so the RF radiation exposures from them are generally low.
The World Health Organization has concluded that "there is no convincing scientific evidence that the weak RF signals from base stations and wireless networks cause adverse health effects."
The consensus of the scientific community is that the power from these mobile phone base station antennas is too low to produce health hazards as long as people are kept away from direct access to the antennas. However, current international exposure guidelines (ICNIRP) are based largely on the thermal effects of base station emissions, without addressing possible non-thermal effects.
Emergency power
Fuel cell backup power systems are added to critical base stations or cell sites to provide emergency power.
Media
See also
Base transceiver station
Mobile switching center
Macrocell
Microcell
Picocell
Femtocell
Access point base station
Cell site
Cellular repeater
Mobile phone
Mobile phone radiation and health
Portable phone
Signal strength
Audio level compression
OpenBTS
Bandstacked
References
External links
Occupational Safety and Health Admin. Non-Ionizing Radiation Exposure Guidelines.
Surveying
Wireless networking
Telecommunications infrastructure
Radio stations and systems ITU |
22331403 | https://en.wikipedia.org/wiki/Quaid-e-Awam%20University%20of%20Engineering%2C%20Science%20%26%20Technology | Quaid-e-Awam University of Engineering, Science & Technology | The Quaid e Awam University of Engineering, Sciences & Technology () often referred as 'QUEST' is a public research university located in the urban neighborhood of Nawabshah, Sindh, Pakistan.It is one of the best universities in Pakistan, ranks 7th best university among engineering universities in Pakistan. The university is honored after the former Prime Minister of Pakistan, Zulfikar Ali Bhutto.
Academic Departments
Department of Artificial Intelligence
Department of Automation and Control Engineering
Department of Agro-Food Processing Engineering Technology
Department of Basic Sciences And Related Studies
Department of Chemical Engineering
Department of Civil Engineering
Department of Computer Science
Department of Computer Systems Engineering
Department of Electrical Engineering
Department of Electronic Engineering
Department of Energy & Environmental Engineering
Department of Energy systems Engineering
Department of Environmental Engineering
Department of English (Language and Literature)
Department of Information Technology
Department of Mathematics And Statistics
Department of Mechanical Engineering
Department of Physics
Department of Software Engineering
Department of Telecommunication Engineering
Academics
Undergraduate Programs
Education follows a semester system; the academic year is divided into two semesters. The university offers four-year (eight-semester) bachelor's degrees in Engineering, Information Technology, Computer Science, English and Mathematics.
Bachelor's degree program is offered by the following departments of university:
Department of Artificial Intelligence
Department of Automation and Control Engineering
Department of Agro-Food Processing Engineering Technology
Department of Chemical Engineering
Department of Civil Engineering
Department of Computer Science
Department of Computer Systems Engineering
Department of Electrical Engineering
Department of Electronic Engineering
Department of Energy & Environment Engineering
Department of English (Language and Literature)
Department of Information Technology
Department of Mathematics And Statistics
Department of Mechanical Engineering
Department of Physics
Department of Software Engineering
Department of Telecommunication Engineering
Postgraduate program
The Postgraduate Department, a new department situated in front of the Administration Block, is where various departments of the university offer MS, M.Phil and Ph.D degree programs in the following areas of research:
Master of Engineering (ME)
Construction Engineering
Civil Engineering
Structural Engineering
Power Engineering
Computer System Engineering
Computer Communication and Networks
Manufacturing Engineering
Industrial Engineering & Management
Energy Systems Engineering
Environmental Engineering
Communication Engineering
Industrial Automation and control
Master of Science (MS)
Information Technology
Software Engineering
Mathematics
Computer Science
Facilities
Q.U.E.S.T Software House, A-Sector
The Chairman of the Department of Information Technology (Prof. Dr. Zahid Hussain Abro) recently opened a software house to develop software for the university and for the commercial market in Pakistan.
Labs
All departments have their own laboratories with equipment and facilities for each subject. The Departments of Computer Systems Engineering, Information Technology and Computer Science in particular have modern infrastructure and technical staff.
Student's Societies
The university supports various international and national student societies. These societies organize seminars on advances in technology, as well as competitions and debates. The university also provides financial support to these societies to organize such events.
Sports
The university offers a comprehensive range of facilities for sport and leisure, supporting almost every student sport and participation at all levels. Student common rooms are annexed to every hostel, in which facilities are provided for indoor games such as table tennis, badminton and carrom. Facilities exist for outdoor games such as volleyball, cricket, tennis, hockey, basketball, football, athletics and bodybuilding. A sports complex for these games is located near the Sector C mosque.
Scholarships
The university provides scholarship opportunities to its high-achieving and needy students, including the following.
HEC–Japanese Need-Based Merit Scholarship
IEP-SAC Saudi Arabia Scholarship
MORA Scholarship
Merit Scholarship
Quaid-e-Azam Scholarship (mr)
Merit Scholarships
Foreign-Funded Scholarship Programs
OGDCL Scholarship
See also
List of Universities in Pakistan
Mehran University of Engineering and Technology
NED University of Engineering & Technology
Sindh Agriculture University
University of Sindh
Higher Education Commission
References
Q.U.E.S.T Official Website
Nawabshah
American Society of Mechanical Engineers ASME
Pakistan Engineering Council Accredited Engineering Qualifications
Uni-Index
Cultural Days of Q.U.E.S.T
Engineering universities and colleges in Pakistan
Public universities and colleges in Sindh
Memorials to Zulfikar Ali Bhutto |
6112660 | https://en.wikipedia.org/wiki/Gellish | Gellish | Gellish is an ontology language for data storage and communication, designed and developed by Andries van Renssen since mid-1990s. It started out as an engineering modeling language ("Generic Engineering Language", giving it the name, "Gellish") but evolved into a universal and extendable conceptual data modeling language with general applications. Because it includes domain-specific terminology and definitions, it is also a semantic data modelling language and the Gellish modeling methodology is a member of the family of semantic modeling methodologies.
Although its concepts have 'names' and definitions in various natural languages, Gellish is a natural-language-independent formal language. Any natural language variant, such as Gellish Formal English is a controlled natural language. Information and knowledge can be expressed in such a way that it is computer-interpretable, as well as system-independent and natural language independent. Each natural language variant is a structured subset of that natural language and is suitable for information modeling and knowledge representation in that particular language. All expressions, concepts and individual things are represented in Gellish by (numeric) unique identifiers (Gellish UID's). This enables software to translate expressions from one formal natural language to any other formal natural languages.
Overview
Gellish is intended for the expression of facts (statements), queries, answers, etc.: for example, for the complete and unambiguous specification of business processes, products, facilities and physical processes; for information about their purchasing, fabrication, installation, operation and maintenance; and for the exchange of such information between systems in a system-independent, computer-interpretable and language-independent way. It is also intended for the expression of knowledge and requirements about such things.
The definition of Gellish can be derived from the definition of Gellish Formal English by considering 'expressions' as relations between the Unique Identifiers only. The definition of Gellish Formal English is provided in the Gellish English Dictionary-Taxonomy, which is a large 'smart dictionary' of concepts with relations between those concepts (earlier it was called STEPlib). The Dictionary-Taxonomy is called a 'smart dictionary' because the concepts are arranged in a subtype-supertype hierarchy, making it a taxonomy that supports inheritance of properties from supertype concepts to subtype concepts, and because, together with other relations between the concepts, the smart dictionary is extended into an ontology. Gellish basically has an extended object-relation-object structure to express facts by relations, whereas each fact may be accompanied by a number of auxiliary facts about the main fact. Examples of auxiliary facts are author, date, status, etc. To enable an unambiguous interpretation, Gellish includes the definition of a large number (more than 650) of standard relation types that determine the rich semantic expression capability of the language.
In principle, for every natural language there is a Gellish variant that is specific to that language, for example Gellish Dutch (Gellish Nederlands), Gellish Italian, Gellish English, Gellish Russian, etc. Unlike Esperanto, Gellish does not invent its own terminology, but uses the terms from natural languages. Thus, the Gellish English dictionary-taxonomy is like an (electronic) ordinary dictionary that is extended with additional concepts and with relations between the concepts.
For example, the Gellish dictionary-taxonomy contains definitions of many concepts that also appear in ordinary dictionaries, such as kinds of physical objects like building, airplane, car, pump and pipe, properties such as mass and color, scales such as kg and bar, as well as activities and processes, such as repairing and heating. In addition to that, the dictionary contains concepts with composed names, such as 'hairpin heat exchanger', which will not appear in ordinary dictionaries. The main difference with ordinary dictionaries is that the Gellish dictionary also includes definitions of standard kinds of relations (relation types), which are denoted by standard Gellish English phrases, such as <is classified as a>, <is a part of>, <has as part>, <is located in>, <can have as part a> and <shall have as part a>. Such standard relation types and concept definitions enable Gellish-powered software to correctly and unambiguously interpret Gellish expressions.
Gellish expressions may be expressed in any suitable format, such as SQL or RDF or OWL or even in the form of spreadsheet tables, provided that their content is equivalent to the tabular form of Gellish Naming Tables (which define the vocabulary) and Fact Tables (together defining a Gellish Database content) or equivalent to Gellish Message Tables (for data exchange). An example of the core of a Message Table is the following:
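Purely as an illustrative sketch (the facts shown reuse objects mentioned elsewhere in this article; the classifications as 'tower' and 'city' are assumed for illustration), the three core columns of such a table, holding the name of the left-hand object, the relation type phrase and the name of the right-hand object, could contain rows such as:
- the Eiffel Tower <is classified as a> tower
- the Eiffel Tower <is located in> Paris
- Paris <is classified as a> city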
A full Gellish Message Table requires additional columns for unique identifiers, the intention of the expression, the language of the expression, cardinalities, unit of measure, the validity context, status, creation date, author, references, and various other columns. Gellish Light only requires the three above columns, but then it does not support, for example, capabilities to distinguish homonyms; automated translation; and version management, etc. Those capabilities and several others are supported by Full Gellish. The following example illustrates the use of some additional columns in a Gellish Message Table, where UoM stands for 'unit of measure'.
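A hedged illustrative sketch of such rows (the tag names P-101 and M-1 are hypothetical, the relation phrases are plausible wordings of the specialization, classification, possession-of-aspect and qualification relations discussed further below, and only the UoM column of the additional columns is shown) could be:
- centrifugal pump <is a specialization of> pump
- P-101 <is classified as a> centrifugal pump
- P-101 <has as aspect> M-1
- M-1 <is qualified as> 100 (UoM: kg)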
The collection of standard relation types defines the kinds of facts that can be expressed in Gellish, although anybody can create their own proprietary extension of the dictionary and thus add concepts and relation types as and when required.
As Gellish is a formal language, any Gellish expression may only use concepts that are defined in a Gellish dictionary, or concepts whose definition is provided ad hoc within the collection of Gellish expressions. Knowledge bases can be created by using the Gellish language and its concept definitions in a Gellish Dictionary. Example applications of a Gellish dictionary are its use as a source of classes for the classification of equipment, documents, etc., as standard terminology (metadata), to harmonize data in various computer systems, or as a thesaurus or taxonomy in a search engine.
Gellish enables automatic translation, and enables the use of synonyms, abbreviations and codes as well as homonyms, due to the use of a unique natural-language-independent identifier (UID) for every concept, for example 130206 (pump) and 1225 (is classified as a). This ensures that concepts are identified in a natural-language-independent way. Therefore, the various Gellish Dictionaries use the same UID's for the same concept. This means that those dictionaries provide translations of the names of the objects, as well as a translation of the standard relation types. The UID's enable information and knowledge that is expressed in one language variant of Gellish to be automatically translated and presented by Gellish-powered software in any other language variant for which a Gellish dictionary is available. For example, the English phrase <is classified as a> and its equivalents in other language variants are denotations of the same UID 1225.
For example, a computer can automatically express the second line in the above example in German as follows:
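If the second line of the example is taken to be the classification of P-101 sketched above, a purely illustrative German rendering (the official Gellish German phrase may differ) would be:
- P-101 <ist klassifiziert als eine> Kreiselpumpe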
Questions (queries) can be expressed as well. Queries are facilitated through standardized terms such as what, which, where and when. They can be used in combination with reserved UID's for unknowns in the range 1-100. This enables Gellish expressions for queries, such as:
- query: what <is located in> Paris
Gellish-powered software should be able to provide the correct answer to this query by comparing the expression with the facts in the database, and should respond with:
- answer: The Eiffel Tower <is located in> Paris
Note that the automatic translation capability implies that a query/question that is expressed in a particular language, say English, can be used to search in a Gellish database in another language (say Chinese), whereas the answer can be presented in English.
Information models in Gellish
Information models can be distinguished in two main categories:
Models about individual things. These models may be about individual physical objects as well as about activities, processes and events, or a combination of them. An information model about an individual physical object, possibly including its operation and maintenance, such as a process plant, a ship, an airplane, an infrastructural facility or a typical design (e.g. of a car or of a component), is called a Facility Information Model or a Product Model, whereas for a building it is called a Building Information Model (BIM). These models about individual things are characterized by their composition hierarchy, which specifies (all) their parts, and by the fact that the assemblies as well as the parts are classified by kinds or types of things.
Models about kinds of things. These models are expressed as collections of relations of particular kinds between kinds of things. They can be further subdivided in the following sub-categories:
- Knowledge models, which are collections of expressions of facts about what can be the case (modeled knowledge).
- Requirements models, which are collections of expressions of facts about what shall be the case in a particular validity context (modeled requirements). This may include modeled versions of the content of requirements documents, such as standard specifications and standard types of components (e.g. as in component and equipment catalogs)
- Definition models, each of which consists of a semantic frame. A definition model is a collection of expressions about what is by definition the case for all things of a kind. The Gellish electronic smart dictionary-taxonomy or ontology is an example of a collection of definition models.
- Models that are collections that include a combination of expressions of the above kinds.
All these categories of models can include drawings and other documents as well as 3D shape information (the core of 3D models). They all can be expressed and integrated in Gellish.
The classification relation between individual things and kinds of things makes the definitions, knowledge and requirements about kinds of things available for the individual things. Furthermore, the subtype-supertype hierarchy in a Gellish Dictionary-Taxonomy implies that the knowledge and requirements that are specified for a kind of thing are inherited by all their subtypes. As a consequence, when somebody designs an individual item and classifies it by a particular kind, then all the knowledge and requirements that are known for the supertypes of that kind will also be recognized and can be made available automatically.
Each category of information model requires its own semantics, because the expression of the individual fact that something real is the case requires other kinds of relations than the expression of the general fact that something can be the case, which again differs from a fact that expresses that something shall be the case in a particular context or that something is by definition always the case. These semantic differences mean that the various categories of information models require their own subsets of standard relation types.
Therefore Gellish makes a distinction between the following categories of relation types:
Relation types for relations between kinds of things (classes). They are intended for the expression of knowledge, requirements and definitions. The various sub-categories of knowledge, requirements and definitions are modeled by using different kinds of relations: relation types for things that can be the case, things that shall be the case and things that are by definition the case, all three within applicable cardinality constraints. For example, the specialization relation on the first line in the example above is used for defining a concept (centrifugal pump). The relation types <can have as part a> and <shall have as part a> are examples of kinds of relations that are used to specify knowledge and requirements respectively.
Relation types for relations between individual things. They are intended for the expression of information about individual things. For example the possession of an aspect relation on the third of the above lines.
Relation types for relations between individual things and kinds of things. They are intended for links between individual things and general concepts in the dictionary (or to private extensions of that dictionary). For example the classification and qualification relations above.
Relation types for relations between collections and for relations between a collection and an element in the collection or a common aspect of all elements.
Gellish databases and data exchange messages
Gellish is typically expressed in the form of Gellish Data Tables. There are three categories of Data Tables:
Naming Tables, which contain the vocabulary of the dictionary and the proprietary terms that are used in the expressions.
Fact Tables, which contain the expressions of facts in the form of relations between UID´s, together with a number of auxiliary facts.
A Gellish Database typically consists of one or more Naming Tables together with one or more Fact Tables. Naming Tables and Fact Tables together are one-to-one equivalent to Message Tables.
Message Tables, which combine the content of Naming Tables and Fact Tables into merged tables. Message Tables are intended for the exchange of data between systems and parties. A Message Table is a single standard table for the expression of any facts, including the unique identifiers (UID's' for the facts), the relation types and the related objects, but also including their names (terms) and a number of auxiliary facts, all combined in one table. Multiple Message Tables on different locations can be combined to one distributed database.
All table columns are standardised, so that each Gellish data table of a category contains the same standard columns, or a subset of the standard ones. This provides standard interfaces for the exchange of data between application systems. The content of data tables may also include constraints and requirements (data models) that specify the kind of data that should and may be provided for particular applications. Such requirements models make dedicated database designs superfluous. The Gellish Data Tables can be used as part of a central database or can form distributed databases, but tables can also be exchanged in data exchange files or as the body of Gellish Messages.
A Naming Table relates terms in a language and language community ('speech community') to a unique identifier. This enables the unambiguous use of synonyms, abbreviations and codes as well as homonyms in multiple languages. The following table is an example of a Naming Table:
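A hedged illustrative sketch of such a table follows; the columns assumed here are language, term, UID of the denoted concept and inverse indicator, the UID 130206 is the dictionary UID for pump cited earlier, and the UID of the composition relation is left as a placeholder:
- English, pump, UID 130206
- English, is a part of, UID of the composition relation, inverse indicator 1
- English, has as part, UID of the composition relation, inverse indicator 2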
The inverse indicator is only relevant when phrases are used to denote relation types, because each standard relation type is denoted by at least one standard phrase as well as at least one standard inverse phrase. For example, the phrase <is a part of> has as inverse phrase <has as part>. Both phrases denote the same kind of relation (a composition relation). However, when the inverse phrase is used to express a fact, then the left hand and right hand objects in the expression should have an inverse position. Thus, the following expressions will be recognized as two equally valid expressions of the same fact (with the same Fact UID):
- A <is a part of> B
- B <has as part> A
So, the inverse indicator indicates for relation types whether a phrase is a base phrase (1) or an inverse phrase (2).
A Fact Table contains expressions of any facts, each of which is accompanied by a number of auxiliary facts that provide additional information relevant for the main facts. Examples of auxiliary facts are: the intention, status, author, creation date, etc.
A Gellish Fact Table consists of columns for the main fact and a number of columns for auxiliary facts. The auxiliary facts make it possible to specify things such as roles, cardinalities, validity contexts, units of measure, date of latest change, author, references, etcetera.
The columns for the main fact in a Fact Table are:
- a UID of the fact that is expressed on this row in the table
- a UID of the intention with which the fact is communicated or stored (e.g. as a statement, a query, etc.)
- a UID of a left-hand object
- a UID of a relation type
- a UID of a right-hand object
- a UID of a unit of measure (optional)
- a string that forms a description (textual definition) of the left hand object.
These columns also appear in a Message Table as shown below.
A full Gellish Message table is in fact a combination of a Naming Table and a Fact Table. It contains not only columns for the expression of facts, but also columns for the names of the related objects and the additional columns to express auxiliary facts. This enables the use of a single table, also for the specification and use of synonyms and homonyms, multiple languages, etcetera.
The core of a Message Table is illustrated in the following table:
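A hedged illustrative sketch of such a core follows; it assumes paired UID and name columns for the left-hand object, the relation type and the right-hand object, where the tag P-101 and its UID 201000001 are hypothetical, while 1225 and 130206 are the dictionary UIDs cited earlier:
- 201000001, P-101, 1225, is classified as a, 130206, pump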
In the above example, the concepts with the names, as well as the (standard) relation types are selected with their UID's from the Gellish English Dictionary.
A Gellish Database table can be implemented in any tabular format. For example, it can be implemented as a SQL-based database or otherwise, as a STEPfile (according to ISO 10303-21), or as a simple spreadsheet table, as in Excel, such as the Gellish Dictionary itself.
Gellish database tables can also be described in an equivalent form using RDF/Notation3 or XML. A representation of “Gellish in XML” is defined in a standard XML Schema. An XML file with data according to that XML Schema is recommended to have the file extension GML, whereas GMZ stands for “Gellish in XML zipped”.
One of the differences between Gellish and RDF, XML or OWL is that Gellish English includes an extensive English Dictionary of concepts, including also a large (and extendable) set of standard relation types to make computer-interpretable expressions (in a form that is also readable for non-IT professionals). On the other hand, 'languages' such as RDF, XML and OWL only define a few basic concepts, which leaves much freedom for their users to define their own 'domain language' concepts.
This attractive freedom has the disadvantage that users of 'languages' such as RDF, XML or OWL still don't use a common language and still cannot integrate data that stem from different sources.
Gellish is designed to provide a real common language, at least to a much larger extent and therefore provides much more standardization and commonality in terminology and expressions.
Gellish compared with OWL
OWL (Web Ontology Language/Ontological Web Language) and Gellish are both meant for use on the semantic web. Gellish can be used in combination with OWL, or on its own. There are many similarities between the two languages, such as the use of unique identifiers (Gellish UIDs, OWL URIs (Uniform Resource Identifiers)) but also important differences. The main differences are as follows:
1. Target audience and meta level
OWL is a metalanguage, including a basic grammar, but without a dictionary. OWL is meant to be used by computer system developers and ontology developers to create ontologies. Gellish is a language that includes a grammar as well as a dictionary-taxonomy and ontology. Gellish is meant to be used by computer system developers as well as by end-users and can also be used by ontology developers when they want to extend the Gellish ontology or build their own domain ontology. Gellish does not make a distinction between a meta-language and a user language; the concepts from both 'worlds' are integrated in one language. So, the Gellish English dictionary contains concepts that are equivalent to the OWL concepts, but also contains the concepts from an ordinary English dictionary.
2. Vocabularies and ontologies
OWL can be used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. In other words, it can be used for the definition of taxonomies or ontologies. The terms in such a vocabulary do not become part of the OWL language. So OWL does not include definitions of the terms in a natural language, such as road, car, bolt or length. However, it can be used to define them and to build an ontology.
The upper ontology part of Gellish can also be used to define terms and the relations between them. However, many of such natural language terms are already defined in the lower part of the Gellish dictionary-taxonomy itself. So in Gellish, terms such as road, car, bolt or length are part of the Gellish language. Therefore, Gellish English is a subset of natural English.
3. Synonyms and multi-language capabilities
Gellish makes a distinction between concepts and the various terms that are used as names (synonyms, abbreviations and translations) to refer to those concepts in different contexts and languages. Every concept is identified by a unique identifier that is natural-language-independent and can have many different terms in different languages to denote the concept. This enables automatic translation between different natural language versions of Gellish.
In OWL, the various terms in different languages and the synonyms are in principle different concepts that need to be declared to be the same by explicit equivalence relations (unless the alternatives are expressed in terms of the alternative label annotation properties). The OWL approach is simpler, but it makes expressions ambiguous and makes data integration and automated translation significantly more complicated.
4. Upper ontology
OWL can be regarded as an upper ontology that consists of 54 'language constructs' (constructors or concepts).
The upper ontology part of Gellish currently consists of more than 1500 concepts of which about 650 are standard relation types. In addition to that the Gellish Dictionary-Taxonomy contains more than 40,000 concepts. This indicates the large semantic richness and expression capabilities of Gellish. Furthermore, Gellish contains definitions of many facts about the defined concepts that are expressed as relationships between those concepts.
5. Extensibility
OWL has a fixed set of concepts (terms) that are only extended when the OWL standard is extended. Gellish is extensible by any user, under Open Source conditions.
History
Gellish is a further development of ISO 10303-221 (AP221) and ISO 15926. Gellish is an integration and extension of the concepts that are defined in both standards. The main difference with both ISO standards is that Gellish is easier to implement, has more (precise) semantic expression capabilities and is suitable to express queries and answers as well. The specific philosophy of spatio-temporal parts that is used in ISO 15926 to represent time as discrete time periods can also be used in Gellish; however, the recommended representation of time in Gellish is the more intuitive method of specifying that facts have a validity duration. For example, each property can have multiple numeric values on a scale, which is expressed as multiple facts, whereas for each of those facts an (optional) specification can be added of the moment or time period during which that fact is valid.
A subset of the Gellish Dictionary (Taxonomy) is used to create ISO 15926-4. Gellish in RDF is being standardized as ISO 15926-11.
References
Bibliography
External links
Controlled natural languages
Ontology languages
Semantic Web |
50541921 | https://en.wikipedia.org/wiki/Muhammad%20Yahuza%20Bello | Muhammad Yahuza Bello | Muhammad Yahuza Bello is a Nigerian mathematician who served as the 10th vice chancellor of Bayero University Kano.
Early life and education
Yahuza was born on 22 January 1959 in Nassarawa Local Government Area of Kano State. He attended Giginyu Primary School between 1966 and 1973 and Government Secondary School, Gaya, from which he graduated in 1977. He obtained his first and second degrees in Mathematics Education from Bayero University Kano.
Yahuza attended the University of Arkansas under the supervision of Naoki Kimura, where he obtained his third degree in Mathematics between 1985 and 1988.
Career
Yahuza started his career in 1982 as a lecturer at Bayero University Kano; after serving for 19 years, he became a professor of Mathematics in 2001.
Yahuza held several administrative positions at Bayero University Kano, including Head of the Mathematical Sciences Department; Sub-Dean, Deputy Dean and Dean of the Faculty of Science; Dean of the School of Postgraduate Studies; Director of the Centre for Information Technology; and Deputy Vice Chancellor (Academics). Yahuza was elected the 10th Vice Chancellor by the university congress and confirmed by the governing board of the university, where he served between 2015 and 2020.
Yahuza has supervised and graduated six PhD Mathematics candidates, 37 MSc Mathematics and MSc Computer Science candidates. He also supervised over 50 BSc Mathematics and BSc Computer Science final year projects.
Yahuza is a mathematician with a passion for computers, which led to the establishment of Computer Science studies at Bayero University Kano under the Department of Mathematics in 1990; this has since become the Faculty of Computer Science, with more than 1,500 graduates.
Yahuza was appointed Pro-Chancellor of Yusuf Maitama Sule University, Kano by the Executive Governor of Kano State, Abdullahi Umar Ganduje, immediately after the resignation of Alhaji Sule Yahya Hamma in 2020.
References
1959 births
Living people
Nigerian mathematicians
University of Arkansas alumni
Bayero University Kano faculty
Vice-Chancellors of Nigerian universities |
16909658 | https://en.wikipedia.org/wiki/EyeOS | eyeOS | eyeOS is a web desktop following the cloud computing concept that seeks to enable collaboration and communication among users. It is mainly written in PHP, XML, and JavaScript. It is a private-cloud application platform with a web-based desktop interface. Commonly called a cloud desktop because of its unique user interface, eyeOS delivers a whole desktop from the cloud with file management, personal information management tools, collaborative tools and integration of the client’s applications.
History
The first publicly available eyeOS version was released on August 1, 2005, as eyeOS 0.6.0 in Olesa de Montserrat, Barcelona (Spain). Quickly, a worldwide community of developers took part in the project and helped improve it by translating, testing and developing it.
After two years of development, the eyeOS Team published eyeOS 1.0 (on June 4, 2007). Compared with previous versions, eyeOS 1.0 introduced a complete reorganization of the code and some new web technologies, like eyeSoft, a portage-based web software installation system. Moreover, eyeOS also included the eyeOS Toolkit, a set of libraries allowing easy and fast development of new web Applications.
With the release of eyeOS 1.1 on July 2, 2007, eyeOS changed its license and migrated from GNU GPL Version 2 to Version 3.
Version 1.2 was released just a few months after the 1.1 version and integrated full compatibility with Microsoft Word files.
eyeOS 1.5 Gala was released on January 15, 2008. This version is the first to support both Microsoft Office and OpenOffice.org file formats for documents, presentations and spreadsheets. It also has the ability to import and export documents in both formats using server side scripting.
eyeOS 1.6 was released on April 25, 2008, and included many improvements such as synchronization with local computers, drag and drop, a mobile version and more.
eyeOS 1.8 Lars was released on January 7, 2009 and featured a completely rewritten file manager and a new sound API to develop media rich applications. Later, on April 1, 2009, 1.8.5 was released with a new default theme and some rewritten apps such as the Word Processor or the Address Book. On July 13, 2009, 1.8.6 was released with an interface for the iPhone and a new version of eyeMail with support for POP3 and IMAP.
eyeOS 1.9 was released December 29, 2009. It was followed by the 1.9.0.1 release with minor fixes on February 18, 2010. These last two releases were the last of the "CLASSIC DESKTOP" interface. A major rework was completed and released in March 2010. This new product was dubbed eyeOS 2.x.
However, a small group of eyeOS developers still maintain the code within the eyeOS forum, where support is provided but the eyeOS group itself has stopped active 1.x development. It is now available as the On-eye project on GitHub.
Active development halted on 1.x as of February 3, 2010. The eyeOS 2.0 release took place on March 3, 2010. This was a total restructuring of the operating system. The 2.x stable is the new series of eyeOS, which is in active development and will replace 1.x as stable in a few months. It includes live collaboration and many more social capabilities than eyeOS 1.x.
EyeOS released 2.2.0.0 on July 28, 2010.
On December 14, 2010, a working group inside the eyeOS open source development community began structured development and further upgrades of eyeOS 1.9.x. The group's main goal is to continue the work that eyeOS has stopped on 1.9.x.
EyeOS released 2.5 on May 17, 2011. This was the last release under an open source license. It is available in SourceForge for download under another project called 2.5 OpenSource Version.
On April 1, 2014, Telefónica announced the acquisition of eyeOS, an acquisition that reinforces its future mobile cloud services plans and the development of free software solutions. eyeOS will maintain its headquarters in Barcelona, where its staff will continue to work but will now form part of Telefónica. After its integration into Telefónica, eyeOS will continue to function as an independent subsidiary, led by its current CEO Michel Kisfaludi.
Structure and API
For developers, eyeOS provides the eyeOS Toolkit, a set of libraries and functions to develop applications for eyeOS. Using the integrated Portage-based eyeSoft system, one can create one's own repository for eyeOS and distribute applications through it.
Each core part of the desktop is its own application, using JavaScript to send server commands as the user interacts. As actions are performed using AJAX (such as launching an application), it sends event information to the server. The server then sends back tasks for the client to do in XML format, such as drawing a widget.
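As a rough illustration of this request-response pattern only (the element and attribute names below are hypothetical and do not reflect eyeOS's actual message format), a server reply instructing the client to draw a widget could look something like:
<tasks>
  <!-- hypothetical task list returned after an AJAX event such as launching an application -->
  <task action="drawWidget" widget="window" id="calc-1" title="Calculator"/>
  <task action="drawWidget" widget="button" parent="calc-1" label="="/>
</tasks>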
On the server, eyeOS uses XML files to store information. This makes it simple for a user to set up and deploy on the server, as it requires zero configuration other than the account information for the first user. To avoid the bottlenecks that flat files present, each user's information and settings are stored in different files, preventing resource starvation; though this in turn may create issues in high-volume user environments due to host operating system open file pointer limits.
Professional edition
A Professional Edition of eyeOS was launched on September 15, 2011, as a commercial solution for businesses. It uses a new version number and was released under version 1.0 instead of continuing with the next version number in the open source project. The Professional Edition retains the web desktop interface used by the open source version, while targeting enterprise users. A host of new features designed for enterprises like file sharing and synchronization (called eyeSync), Active Directory/LDAP connectivity, system wide administration controls and a local file execution tool called eyeRun were introduced. A new suite of Web Apps (A mail client, calendar, instant messaging and collaboration tools) was also introduced, specific to the enterprise edition for the web desktop. With the eyeOS Professional Edition 1.1, a to-do task manager tool, Citrix XenApp integration and a Facebook like 'wall' for collaboration were introduced.
Awards
2007 – Received the Softpedia's Pick award.
2007 – Finalist at the SourceForge's 2007 Community Choice Awards at the "Best Project" category. The winner for that category was 7-Zip.
2007 – Won the Yahoo! Spain Web Revelation award in the Technology category.
2008 – Finalist for the Webware 100 awards by CNET, under the "Browsing" category.
2008 – Finalist at the SourceForge's 2008 Community Choice Awards at the "Most Likely to Change the World" category. The winner for that category was Linux.
2009 – Selected Project of the Month (August 2009) by SourceForge.
2009 – BMW Innovation Award.
2010 – Winner of Accelera (Ernst & Young).
2010 – Asturias & Girona Spanish Prince award “IMPULSA”.
2011 – Winner of MIT’s TR35 award as Innovator of the year in Spain.
Community
The eyeOS community is formed around the eyeOS forums, which reached 10,000 members on April 4, 2008, the eyeOS wiki and the eyeOS Application Communities, available at the eyeOS-Apps website hosted and provided by openDesktop.org as well as Softpedia.
See also
Web portal
Web 2.0
References
Web desktops
Web 2.0
Cloud computing
Free content management systems
Free software operating systems
Software using the GNU AGPL license |
932516 | https://en.wikipedia.org/wiki/Software%20Freedom%20Day | Software Freedom Day | Software Freedom Day (SFD) is an annual worldwide celebration of Free Software organized by the Digital Freedom Foundation (DFF). SFD is a public education effort with the aim of increasing awareness of Free Software and its virtues, and encouraging its use.
SFD was established in 2004 and was first observed on 28 August of that year. About 12 teams participated in the first Software Freedom Day. Since that time it has grown in popularity, but while organisers anticipated more than 1,000 teams in 2010, the event has stalled at around 400+ locations over the past two years, representing a 30% decrease from 2009.
Since 2006 Software Freedom Day has been held on the third Saturday of September. In 2022, this event will be held on 17 September.
Organization
Each event is left to local teams around the world to organize. Pre-registered teams (2 months before the date or earlier) receive free schwag sent by DFF to help with the events themselves. The SFD wiki contains individual team pages describing their plans as well as helpful information to get them up to speed. Events themselves vary from conferences explaining the virtues of Free and Open Source Software to workshops, demonstrations, games, tree-planting ceremonies, discussions and InstallFests.
Past events
Note on the figures above: it is difficult to find figures for the early years. The maps on the SFD website are only reliable after 2007; however, some years, such as 2009, saw extra teams from two different sources which did not "officially" register with DFF. There were about 80 teams from China and a hundred from the Sun community (OSUM), which heavily subsidized goodies for their teams. In the early years of SFD the map was an optional component not connected with the registration script, and therefore some teams did not go through the trouble of adding themselves.
Sponsors
The primary sponsor from the start was Canonical Ltd., the company behind Ubuntu, a Linux distribution. Then IBM, Sun Microsystems, DKUUG, Google, Red Hat, Linode, Nokia and now MakerBot Industries have joined the supporting organisations, as well as the FSF and the FSFE. IBM and Sun Microsystems are currently not sponsoring the event. In terms of media coverage, DFF is partnering with Linux Magazine, Linux Journal and Ubuntu User. Each local team can seek sponsors independently, especially local FOSS-supporting organisations, and often appears in local media such as newspapers and TV.
See also
Outline of free software
Document Freedom Day
Hardware Freedom Day
Culture Freedom Day
Public Domain Day
International Day Against DRM, promoted by the Free Software Foundation in its Defective by Design campaign
References
External links
Software Freedom Day
Intellectual property activism
International observances
Unofficial observances
September observances
Saturday observances
Holidays and observances by scheduling (nth weekday of the month)
Recurring events established in 2004 |
309945 | https://en.wikipedia.org/wiki/Application%20service%20provider | Application service provider | An application service provider (ASP) is a business providing computer-based services to customers over a network; such as access to a particular software application (such as customer relationship management) using a standard protocol (such as HTTP).
The need for ASPs has evolved from the increasing costs of specialized software that have far exceeded the price range of small to medium-sized businesses. As well, the growing complexities of software have led to huge costs in distributing the software to end-users. Through ASPs, the complexities and costs of such software can be cut down. In addition, the issues of upgrading have been eliminated from the end-firm by placing the onus on the ASP to maintain up-to-date services, 24 x 7 technical support, physical and electronic security and in-built support for business continuity and flexible working.
The importance of this marketplace is reflected by its size. Estimates of the United States market ranged from 1.5 to 4 billion dollars. Clients for ASP services include businesses, government organizations, non-profits, and membership organizations.
Provider types
There are several forms of ASP business. These are:
A specialist or functional ASP delivers a single application, such as credit card payment processing or timesheet services;
A vertical market ASP delivers a solution package for a specific customer type, such as a dental practice;
An enterprise ASP delivers broad spectrum solutions;
A local ASP delivers small business services within a limited area.
Some analysts identify a volume ASP as a fifth type. This is basically a specialist ASP that offers a low cost packaged solution via their own website. PayPal was an instance of this type, and their volume was one way to lower the unit cost of each transaction.
In addition to these types, some large multi-line companies (such as HP and IBM), use ASP concepts as a particular business model that supports some specific customers.
The ASP model
The application software resides on the vendor's system and is accessed by users through a web browser using HTML or by special purpose client software provided by the vendor. Custom client software can also interface to these systems through XML APIs. These APIs can also be used where integration with in-house systems is required. ASPs may or may not use multitenancy in the deployment of software to clients; some ASPs offer an instance or license to each customer (for example using Virtualization), some deploy in a single instance multi-tenant access mode, now more frequently referred to as "SaaS".
Common features associated with ASPs include:
ASP fully owns and operates the software application(s)
ASP owns, operates and maintains the servers that support the software
ASP makes information available to customers via the Internet or a "thin client"
ASP bills on a "per-use" basis or on a monthly/annual fee
The advantages to this approach include:
Software integration issues are eliminated from the client site
Software costs for the application are spread over a number of clients
Vendors can build more application experience than the in-house staff
Low-code development platforms permit limited customization of pre-built applications
Key software systems are kept up to date, available, and managed for performance by experts
Improved reliability, availability, scalability and security of internal IT systems
A provider's service level agreement guarantees a certain level of service
Access to product and technology experts dedicated to available products
Reduction of internal IT costs to a predictable monthly fee
Redeploying IT staff and tools to focus on strategic technology projects that impact the enterprise's bottom line
Some inherent disadvantages include:
The client must generally accept the application as provided since ASPs can only afford a customized solution for the largest clients.
The client may rely on the provider to provide a critical business function, thus limiting their control of that function and instead relying on the provider
Changes in the ASP market may result in changes in the type or level of service available to clients
Integration with the client's non-ASP systems may be problematic
Evaluating an application service provider's security when moving to an ASP infrastructure can come at a high cost, as the firm must assess the level of risk associated with the ASP itself. Failure to properly account for such risk can lead to:
Loss of control of corporate data
Loss of control of corporate image
Insufficient ASP security to counter risks
Exposure of corporate data to other ASP customers
Compromise of corporate data
Some other risks include failure to account for the financial future of the ASP in general, i.e. how stable a company is and if it has the resources to continue business into the foreseeable future. For these reasons Cisco Systems has developed a comprehensive evaluation guideline. This guideline includes evaluating the scope of the ASP's service, the security of the program and the ASP's maturity with regard to security awareness. Finally the guidelines indicate the importance of performing audits on the ASP with respect to:
Port/Network service
Application vulnerability
ASP Personnel
Physical visits to the ASP to assess the formality of the organization will provide invaluable insight into the awareness of the firm.
History
In terms of their common goal of enabling customers to outsource specific computer applications so they can focus on their core competencies, ASPs may be regarded as the indirect descendant of the service bureaus of the 1960s and 1970s. In turn, those bureaus were trying to fulfill the vision of computing as a utility, which was first proposed by John McCarthy in a speech at MIT in 1961. Jostein Eikeland, the founder of TeleComputing, is credited with coining the acronym ASP in 1996, according to Inc. Magazine. Traver H. Kennedy, founder and ex-Chairman of the ASP Industry Consortium, has been known as the "father of the ASP industry".
Comparisons
The ASP model is often compared with Software as a Service (SaaS), but while the latter typically delivers a generic service at scale to many users, the former typically involved delivering a service to a small number of users (often using separate single-tenant instances). This meant that the many benefits of multi-tenancy (cost sharing, economies of scale, etc.) were not accessible to ASP providers, and their services were more comparable to in-house hosting than to true multi-tenant SaaS solutions.
See also
Business service provider
Communication as a service
Hosted service provider
Multitenancy
Outsourcing
Service level agreement
Software as a service
Utility computing
Web application
References
External links
IT service management
Customer relationship management |
2613536 | https://en.wikipedia.org/wiki/VEST | VEST | VEST (Very Efficient Substitution Transposition) ciphers are a set of families of general-purpose hardware-dedicated ciphers that support single pass authenticated encryption and can operate as collision-resistant hash functions designed by Sean O'Neil, Benjamin Gittins and Howard Landman. VEST cannot be implemented efficiently in software.
VEST is based on a balanced T-function that can also be described as a bijective nonlinear feedback shift register with parallel feedback (NLPFSR) or as a substitution–permutation network, which is assisted by a non-linear RNS-based counter. The four VEST family trees described in the cipher specification are VEST-4, VEST-8, VEST-16, and VEST-32. VEST ciphers support keys and IVs of variable sizes and instant re-keying. All VEST ciphers release output on every clock cycle.
All the VEST variants are covered by European Patent Number EP 1820295(B1), owned by Synaptic Laboratories.
VEST was a Phase 2 Candidate in the eSTREAM competition in the hardware portfolio, but was not a Phase 3 or Focus candidate and so is not part of the final portfolio.
Overview
Design
Overall structure
VEST ciphers consist of four components: a non-linear counter, a linear counter diffusor, a bijective non-linear accumulator with a large state and a linear output combiner (as illustrated by the image on the top-right corner of this page). The RNS counter consists of sixteen NLFSRs with prime periods, the counter diffusor is a set of 5-to-1 linear combiners with feedback compressing outputs of the 16 counters into 10 bits while at the same time expanding the 8 data inputs into 9 bits, the core accumulator is an NLPFSR accepting 10 bits of the counter diffusor as its input, and the output combiner is a set of 6-to-1 linear combiners.
Accumulator
The core accumulator in VEST ciphers can be seen as a SPN constructed using non-linear 6-to-1 feedback functions, one for each bit, all of which are updated simultaneously. The VEST-4 core accumulator is illustrated below:
It accepts 10 bits (d0 − d9) as its input. The least significant five bits (p0 − p4) in the accumulator state are updated by a 5×5 substitution box and linearly combined with the first five input bits on each round. The next five accumulator bits are linearly combined with the next five input bits and with a non-linear function of four of the less significant accumulator bits. In authenticated encryption mode, the ciphertext feedback bits are also linearly fed back into the accumulator (e0 − e3) with a non-linear function of four of the less significant accumulator bits. All the other bits in the VEST accumulator state are linearly combined with non-linear functions of five less significant bits of the accumulator state on each round. The use of only the less significant bits as inputs into the feedback functions for each bit is typical of T-functions and is responsible for the feedback bijectivity. This substitution operation is followed by a pseudorandom transposition of all the bits in the state (see picture below).
Data authentication
VEST ciphers can be executed in their native authenticated encryption mode similar to that of Phelix but authenticating ciphertext rather than plaintext at the same speed and occupying the same area as keystream generation. However, unkeyed authentication (hashing) is performed only 8 bits at a time by loading the plaintext into the counters rather than directly into the core accumulator.
Family keying
The four root VEST cipher families are referred to as VEST-4, VEST-8, VEST-16, and VEST-32. Each of the four family trees of VEST ciphers supports family keying to generate other independent cipher families of the same size. The family-keying process is a standard method to generate cipher families with unique substitutions and unique counters with different periods. Family keying enables the end-user to generate a unique secure cipher for every chip.
Periods
VEST ciphers are assisted by a non-linear RNS counter with a very long period. According to the authors, determining average periods of VEST ciphers or probabilities of the shortest periods of VEST-16 and VEST-32 falling below their advertised security ratings for some keys remains an open problem and is computationally infeasible. They believe that these probabilities are below 2−160 for VEST-16 and below 2−256 for VEST-32. The shortest theoretically possible periods of VEST-4 and VEST-8 are above their security ratings as can be seen from the following table.
Performance
Computational efficiency in software
The core accumulator in VEST ciphers has a complex, highly irregular structure that resists its efficient implementation in software.
The highly irregular input structure coupled with a unique set of inputs for each feedback function hinders efficient software execution. As a result, all the feedback functions need to be calculated sequentially in software, thus resulting in the hardware-software speed difference being approximately equal to the number of gates occupied by the feedback logic in hardware (see the column "Difference" in the table below).
The large differential between VEST's optimised hardware execution and equivalently clocked software optimised execution offers a natural resistance against low cost general-purpose software processor clones masquerading as genuine hardware authentication tokens.
In bulk challenge-response scenarios such as RFID authentication applications, bitsliced implementations of VEST ciphers on 32-bit processors which process many independent messages simultaneously are 2–4 times slower per message byte than AES.
Hardware performance
VEST is submitted to the eStream competition under the Profile II as designed for "hardware applications with restricted resources such as limited storage, gate count, or power consumption", and shows high speeds in FPGA and ASIC hardware according to the evaluation by ETH Zurich.
The authors claim that according to their own implementations using a "conservative standard RapidChip design front-end sign-off process", "VEST-32 can effortlessly satisfy a demand for 256-bit secure 10 Gbit/s authenticated encryption @ 167 MHz on 180 nm LSI Logic RapidChip platform ASIC technologies in less than 45K Gates and zero SRAM". On the 110 nm RapidChip technologies, "VEST-32 offers 20 Gbit/s authenticated encryption @ 320 MHz in less than 45K gates". They also state that unrolling the round function of VEST can halve the clock-speed and reduce power consumption while doubling the output per clock-cycle, at the cost of increased area.
Key agility
VEST ciphers offer 3 keying strategies:
Instantly loading the entire cipher state with a cryptographically strong key (100% entropy) supplied by a strong key generation or key exchange process;
Instant reloading of the entire cipher state with a previously securely initialised cipher state;
Incremental key loading (of an imperfect key) beginning with the least significant bit of the key loaded into the counter 15, sliding the 16-bit window down by one bit on each round until the single bit 1 that follows the most significant bit of the key is loaded into the counter 0. The process ends with 32 additional sealing rounds. The entire cipher state can now be stored for instant reloading.
VEST ciphers offer only 1 resynchronisation strategy:
Hashing the (IV) by loading it incrementally 8-bits at a time into the first 8 RNS counters, followed by additional 32 sealing rounds.
History
VEST was designed by Sean O'Neil and submitted to the eStream competition in June 2005. This was the first publication of the cipher.
Security
The authors say that VEST security margins are inline with the guidelines proposed by Lars Knudsen in the paper "Some thoughts on the AES process" and the more conservative guidelines recently proposed by Nicolas Courtois in the paper “Cryptanalysis of Sfinks”. Although the authors are not publishing their own cryptanalysis, VEST ciphers have survived more than a year of public scrutiny as a part of the eStream competition organised by the ECRYPT. They were advanced to the second phase, albeit not as part of the focus group.
Attacks
At SASC 2007, Joux and Reinhard published an attack that recovered 53 bits of the counter state. By comparing the complexity of the attack to a parallelized brute-force attack, Bernstein evaluated the resultant strength of the cipher as 100 bits, somewhat below the design strength of most of the VEST family members. The designers of VEST claimed the attack is due to a typographical error in the original cipher specification and published a correction on the Cryptology ePrint archive on 21 January 2007, a few days prior to publication of the attack.
References
"Some thoughts on the AES process" paper by Lars R. Knudsen
"Cryptanalysis of Sfinks" paper by Nicolas Courtois
"Rediscovery of Time Memory Tradeoffs" paper by J. Hong and P. Sarkar
"Understanding Brute Force" paper by Daniel J. Bernstein
"Comments on the Rediscovery of Time Memory Data Tradeoffs" paper by C. De Cannière, J. Lano and B. Preneel
Ideal-to-Realized Security Assurance In Cryptographic Keys by Justin Troutman
Notes
External links
eSTREAM page on VEST
VEST eStream Phase II specification
VEST C reference source code and test vectors
ETH Zurich Hardware Performance Review
Stream ciphers
Message authentication codes
Cryptographic hash functions |
2769349 | https://en.wikipedia.org/wiki/Reference%20Manager | Reference Manager | Reference Manager was a commercial reference management software package sold by Thomson Reuters. It was the first commercial software of its kind, originally developed by Ernest Beutler and his son, Earl Beutler, in 1982 through their company Research Information Systems. Offered for the CP/M operating system, it was ported to DOS and then Microsoft Windows and later the Apple Macintosh. Sales were discontinued on December 31, 2015, support ended on December 31, 2016.
Operation
Reference Manager is most commonly used by people who want to share a central database of references and need to have multiple users adding and editing records at the same time. It is possible to specify for each user read-only or edit rights to the database. The competing package EndNote does not offer this functionality, but Citavi does.
Reference Manager offers different in-text citation templates for each reference type. It also allows the use of synonyms within a database. Reference Manager Web Publisher allows the publication of reference databases to an intranet or internet site. This allows anyone with a web browser to search and download references into their own bibliographic software. It includes the functionality to interact with the SOAP and WSDL standard services.
Updates
After abandoning the development of Reference Manager in 2008, Thomson Reuters discontinued its sale on December 31, 2015 to focus exclusively on EndNote. In 2018, the Science division, which owned EndNote, separated from Thomson Reuters to become Clarivate. EndNote X7 can import Reference Manager databases and convert Word documents formatted with Reference Manager into EndNote formatting. Reference Manager databases can also be imported into Citavi; Reference Manager formatted Word documents are converted into the Citavi format. Citavi permits the installation of a database for teamwork locally, as is possible with Reference Manager, while EndNote's team function is cloud-based.
See also
Comparison of reference management software
Citavi
EndNote
RIS format
References
External links
Reference Manager Email List Archive via the last scrape available in the Internet Archive
Issues with Word 2007
Reference management software
Proprietary software
Windows software |
47096313 | https://en.wikipedia.org/wiki/Yorba%20Foundation | Yorba Foundation | Yorba Foundation was a non-profit software group based in San Francisco, founded by Adam Dingle, who wanted to bring first-class software to the open source community. The organization was created to counter the perception that open source produces hard-to-use, clunky and low-quality software usable only by hackers.
The organization had five employees: Jim Nelson (coder and executive director), Adam Dingle (founder), Charles Lindsay, Eric Gregory and Nate Lillich (software engineers).
History
In December 2009, Yorba Foundation applied for 501(c)(3) status. After some requests for clarification from the IRS in 2010 and 2011, Yorba received a denial of that tax exemption status on May 22, 2014. The reason for the rejection, as explained by the IRS, is that the organization makes software that can be used by any person for any purpose, not for a specific community; therefore, Yorba cannot be considered a charity, while the Apache Foundation, with exactly the same purpose, is considered as such. According to Jim Nelson, although the status would have helped, Yorba's existence did not hinge on the denial.
On October 31, 2013, Yorba moved from San Francisco's Mission District to the Financial District down the hill from Chinatown, still in San Francisco.
In April 2015, at the end of the month, the last Yorba employee, Jim Nelson, left the foundation. The reason for the end of activity was a lack of financial sustainability: funding ran out, donations did not match expenses, and even if Yorba had received the 501(c)(3) status allowing it to be tax exempt, it would probably still have had to shut the project down.
On October 25, 2015, the following message was posted on their webpage without any further explanations:
May 2016 marked the completion of the process to begin the formal dissolution of the organization. The complete dissolution should happen later in 2016.
In May 2016, Yorba assigned the copyright of all these pieces of software to the Software Freedom Conservancy, except California, which, due to an oversight on Yorba's part, was to be assigned in the near future.
Main developments
Shotwell, an image organizer for the Linux operating system
Geary, a free email client written in Vala
Valencia, a plugin for gedit to help with coding in the Vala programming language
gexiv2 GObject wrapper around Exiv2
California, a GNOME 3 calendar application
References
Organizations established in 2009
Free software project foundations in the United States
Non-profit organizations based in San Francisco
2009 establishments in California
Defunct non-profit organizations based in the United States
2015 disestablishments in California
Organizations disestablished in 2015 |
6388697 | https://en.wikipedia.org/wiki/Michael%20Sweet%20%28programmer%29 | Michael Sweet (programmer) | Michael R. Sweet is a computer scientist known for being the original developer of CUPS. He also developed flPhoto, was the original developer of the Gimp-Print software (now known as Gutenprint), and continues to develop codedoc, HTMLDOC, Mini-XML, PAPPL, and many other projects. Sweet has contributed to other free software projects such as FLTK, Newsd, and Samba. He co-owned and ran Easy Software Products (ESP), a small company that specialized in Internet and printing technologies and is now the Chief Technology Officer of Lakeside Robotics Corporation.
Career
Sweet graduated in Computer Science at the SUNY Institute of Technology in Utica-Rome. He then spent several years working for TASC and Dyncorp on real-time computer graphics. After releasing a freeware tool "topcl", in 1993 Sweet set up Easy Software Products (ESP) and developed the ESP Print software. He started work on the CUPS software in 1997 and in 1999 released it under the GNU GPL license along with the commercially licensed ESP Print Pro.
Apple included CUPS in its macOS operating system, and in February 2007 it purchased the copyright to the CUPS source code, which, unusually for an open-source project, was wholly owned by ESP. Apple also hired Sweet to continue the development of CUPS.
While working for Apple, Sweet spent six years as the chair of the Printer Working Group (PWG).
Sweet left Apple in December 2019 to start Lakeside Robotics Corporation. Sweet continues to be secretary of the Internet Printing Protocol (IPP) working group, a designated expert for IPP and the Printer management information base (MIB) for the Internet Engineering Task Force (IETF), and is active in printing standards development within the PWG. He has written several books including Serial Programming Guide for POSIX Operating Systems, OpenGL Superbible, and CUPS (Common Unix Printing System).
References
External links
Michael Sweet's homepage
Free software programmers
Living people
Year of birth missing (living people) |
43154038 | https://en.wikipedia.org/wiki/1996%20UCLA%20Bruins%20football%20team | 1996 UCLA Bruins football team | The 1996 UCLA Bruins football team represented the University of California, Los Angeles in the 1996 NCAA Division I-A football season. The season was highlighted by the 25-yard Skip Hicks touchdown run in the second overtime that won the game for the Bruins over the crosstown-rival USC Trojans.
Schedule
Game summaries
USC Trojans
The victory was the Bruins' sixth straight win over the Trojans.
Roster
References
UCLA
UCLA Bruins football seasons
UCLA Bruins football |
30862852 | https://en.wikipedia.org/wiki/Malaysian%20Expressway%20System | Malaysian Expressway System | The Malaysian Expressway System () is a network of national controlled-access expressways in Malaysia that forms the primary backbone network of Malaysian national highways. The network begins with the North–South Expressway (NSE), and is being substantially developed. Malaysian expressways are built by private companies under the supervision of the government highway authority, Malaysian Highway Authority (abbreviated as MHA; also referred to as Lembaga Lebuhraya Malaysia (LLM) in Malay).
Overview
The expressway network of Malaysia is considered one of the best controlled-access expressway networks in Asia, after those of Japan and South Korea. There are 30 expressways in the country, with a total length of , and another is under construction. The closed toll expressway system is similar to the Japanese Expressway System and the Chinese Expressway System. All Malaysian toll expressways are controlled-access highways and are managed under the Build-Operate-Transfer (BOT) system.
There are expressways in both West Malaysia and East Malaysia; however, those in West Malaysia are better connected. The North–South Expressway passes through all the major cities and conurbations in West Malaysia, such as Penang, Ipoh, the Klang Valley and Johor Bahru. The Pan Borneo Highway connects the Malaysian states of Sabah and Sarawak with Brunei.
Asian Highway Network
A few major expressways in Malaysia are part of the larger Asian Highway Network. The Asian Highway Network is an international project between Asian nations to develop their highway systems, which will form main routes in the network. Seven Asian Highway routes pass through Malaysia:
AH2 Asian Highway Route 2 – along the North–South Expressway E1 and E2
AH18 Asian Highway Route 18 – along the Federal Route 3
AH140 Asian Highway Route 140 – along the Federal Route 4 and Butterworth–Kulim Expressway E15
AH141 Asian Highway Route 141 – consists of New Klang Valley Expressway E1 (Bukit Lanjan–Jalan Duta), Duta–Ulu Klang Expressway E33 (Jalan Duta–Sentul Pasar and Sentul Pasar–Greenwood), Kuala Lumpur Middle Ring Road 2 28 (Greenwood–Gombak North Interchange), Kuala Lumpur–Karak Expressway E8, East Coast Expressway E8 and Gebeng Bypass 101
AH142 Asian Highway Route 142 – along the MEC Highway FT222, Tun Razak Highway FT12 and Federal Route 1 (Segamat–Yong Peng South Interchange)
AH143 Asian Highway Route 143 – along the Second Link Expressway E3
AH150 Asian Highway Route 150 – along the Pan Borneo Highway.
AH2 border crossing dispute
The status of the route alignment of the Asian Highway 2 crossing the Malaysia–Singapore border is in dispute. Malaysia had commissioned the Second Link Expressway E3 as part of AH2 to maintain the primary access-controlled highway status of the route. Meanwhile, Singapore had commissioned the Johor–Singapore Causeway and the Bukit Timah Expressway as part of AH2, as the Johor–Singapore Causeway is the main gateway to Singapore from Malaysia, which could mean that the Skudai Highway would be included in the route alignment instead of the Second Link Expressway.
With the completion of the Johor Bahru Eastern Dispersal Link Expressway (EDL) in 2012, the Asian Highway AH2 route was re-routed from the Second Link Expressway to the EDL, and the Second Link Expressway was gazetted as part of Asian Highway AH143.
The Second Link Expressway, the Ayer Rajah Expressway, Marina Coastal Expressway and Kallang–Paya Lebar Expressway were later gazetted as Asian Highway AH143.
History
Interstate
Before tolled expressways were introduced in the mid-1970s, most Malaysians travelled around Peninsular Malaysia on federal roads.
The major reasons for building new expressways in Malaysia are the increasing number of vehicles along federal routes, the opening of major ports and airports in Malaysia, and the increasing population in major cities and towns of Malaysia.
In 1966, the Highway Planning Unit was established under the Ministry of Works and Communications.
The first tolled highway in Malaysia was the 20 km (10 miles) Tanjung Malim–Slim River tolled road (Federal Route 1) which was opened to traffic on 16 March 1966. It saved journey time by half an hour, and cars were charged 50 sen, buses and lorries RM1 and motorcycles 20 sen. However, in 1994, with the completion of the North–South Expressway, the toll plaza was removed and it became a toll-free highway.
In 1970, the first comprehensive five-year road development programme was formulated by the Highway Planning Unit, which included expanding rural roads and plans to construct three new highways linking the east and west coasts.
Construction of the Kuala Lumpur–Seremban Expressway began on 27 March 1974. Funded by a loan from the World Bank, the 63.4 km (39.3 mile) expressway was constructed in three phases: the first phase was from Kuala Lumpur to Nilai, while the second phase was from Nilai to Seremban. The third phase was the rehabilitation of the old Federal Route 1 from Kuala Lumpur to Seremban as a toll-free alternative for motorists. The completion of the Kuala Lumpur–Seremban Expressway in June 1977 marked the first step towards the construction of the new interstate expressway known as the North–South Expressway (NSE).
The Kuala Lumpur–Karak Highway (Federal Route 2) was built between 1976 and 1979. Meanwhile, the 900 m Genting Sempah Tunnel was the first highway tunnel in Malaysia, constructed between 1977 and 1979. The tunnel was opened in 1979 by the then Minister of Works and Communications, Dato' Abdul Ghani Gilong.
The first sections of the North–South Expressway were the tolled sections of the Kuala Lumpur–Seremban Expressway between the Sungai Besi and Labu toll plazas, which were opened on 16 June 1982. The next sections, the Bukit Kayu Hitam–Jitra and Senai–Johor Bahru stretches, opened in 1985, followed by the Ipoh–Changkat Jering and Seremban–Ayer Keroh stretches, which were opened to traffic in 1986. On 1 October 1987 the closed-toll system came into force along the Kuala Lumpur–Ayer Keroh and Ipoh–Changkat Jering stretches. The Ayer Keroh–Pagoh stretch of the North–South Expressway was opened to traffic in 1988. All sections of the North–South Expressway were completed and officially opened on 8 September 1994 by the then Prime Minister of Malaysia, Tun Dr Mahathir Mohamad.
Other interstate expressway projects in Malaysia are North–South Expressway Central Link (opened 1996), East Coast Expressway (opened 2004) and Kajang–Seremban Highway (opened 2008).
Because of persistent, severe traffic congestion on the North–South Expressway Southern Route between Seremban and Nilai in Negeri Sembilan, a new bypass expressway, the Paroi–Senawang–KLIA Expressway, was proposed in 2013 to help reduce traffic jams in the area.
Phase 2 of the East Coast Expressway (Terengganu), linking Jabur and Kuala Terengganu, was completed on 31 January 2015, thus completing the alignment of the East Coast Expressway from Kuala Lumpur to Kuala Terengganu.
A new project on the west coast of Peninsular Malaysia, the West Coast Expressway (WCE), has been unveiled by the government. Construction of the 233.0 km (144.8 mile) expressway linking Banting, Selangor and Taiping, Perak was to start in 2015.
Greater Kuala Lumpur and Klang Valley
The history of the highways in the Klang Valley started after the expulsion of Singapore from Malaysia on 9 August 1965, when the Malaysian government decided to make Port Swettenham (now Port Klang) Malaysia's new national port as a replacement for Singapore. As a result, the government planned to build a first highway in Klang Valley known as Federal Highway connecting Port Swettenham (now Port Klang) to Kuala Lumpur in the 1960s.
In 1967, the 45 km (28 mile) Federal Highway (Federal Route 2), the first dual-carriageway highway in Malaysia was opened to traffic.
In the early 1990s the federal government decided to build more expressways and highways in Klang Valley because of the increasing size and population of the Klang Valley conurbation, development of new townships and industrial estates, and the massive traffic jams along Federal Highway.
The New Klang Valley Expressway (NKVE), which was opened in 1990, is the second link to Kuala Lumpur from Klang after Federal Highway. In 1997, North–South Expressway Central Link (NSECL), which is the main link to Kuala Lumpur International Airport (KLIA) was opened to traffic.
Other expressway projects in Klang Valley are Shah Alam Expressway (SAE/KESAS) (opened 1997), Damansara–Puchong Expressway (LDP) (opened 1999), Sprint Expressway (opened 2001), New Pantai Expressway (NPE) (opened 2004), SMART Tunnel (opened 2007), KL–KLIA Dedicated Expressway or Kuala Lumpur–Putrajaya Expressway (KLPE) (now Maju Expressway (MEX)) (opened 2007) and Duta–Ulu Klang Expressway (DUKE) (opened 2009).
In addition to the Kuala Lumpur Inner Ring Road (KLIRR) as the inner ring road of Kuala Lumpur, the Kuala Lumpur Middle Ring Road 1 (KLMRR1), the Kuala Lumpur Western/Northern Dispersal Link Scheme (Sprint Expressway and DUKE) and the Kuala Lumpur Middle Ring Road 2 (KLMRR2) also act as middle ring roads of the city. The Kuala Lumpur–Kuala Selangor Expressway (KLS) (formerly the Assam Jawa–Templer Park Highway (LATAR)), the Kajang Dispersal Link Expressway (SILK), the South Klang Valley Expressway (SKVE) and the planned Kuala Lumpur Outer Ring Road (KLORR) may form the outer ring roads of Kuala Lumpur.
Following the formation of Greater Kuala Lumpur in the early 2010s, many expressways and highways are to be built in Greater Kuala Lumpur under the Economic Transformation Programme (ETP). These are the Besraya Extension Expressway (now part of the Besraya Expressway) (opened 2012), the Damansara–Shah Alam Elevated Expressway (DASH), the Sungai Besi–Ulu Klang Elevated Expressway (SUKE), the East Klang Valley Expressway (EKVE), which will be part of the KLORR system, the Sri Damansara Link and Tun Razak Link of the DUKE, the Kinrara–Damansara Expressway (KIDEX Skyway) and the Serdang–Kinrara–Putrajaya Expressway (SKIP). However, the proposed Kinrara–Damansara Expressway (KIDEX Skyway) project was officially scrapped by the Selangor State Government following several protests by local Petaling Jaya residents.
Penang and Greater Penang
The history of highways in Penang began in the 1970s when the Malaysian federal government decided to build the Penang Bridge, connecting Seberang Perai and Penang Island. The construction of the Penang Bridge, between Perai on the mainland and Gelugor on Penang Island, began in 1982 and was completed in 1985. This bridge was officially opened on 14 September 1985 by then Malaysian Prime Minister, Mahathir Mohamad.
The main reasons for constructing new expressways in Penang are the increasing populations in George Town and Butterworth, and the need for more cross-strait linkages. Prior to the construction of the Penang Bridge, the only way to cross the Penang Strait between Penang Island and the mainland was via ferries. In addition, major industrial estates, such as in Bayan Lepas on the island and Perai on the mainland, were opened, leading to the growth of residential townships like Bayan Baru and Seberang Jaya. These necessitated the construction of more bridges and expressways in the state.
Since then, a number of other expressway projects within Penang, such as the Butterworth Outer Ring Road (BORR), the Butterworth–Seberang Jaya Toll Road and the Tun Dr Lim Chong Eu Expressway were completed. On densely populated Penang Island, the Gelugor Highway and the Penang Middle Ring Road were also created to alleviate traffic congestion.
The Butterworth–Kulim Expressway (BKE) is a tolled expressway that connects Butterworth with the town of Kulim (and the Kulim Hi-Tech Park) in neighbouring Kedah. This interstate highway was built as industrialisation began to spread out from Seberang Perai towards southern Kedah in the 1980s, forming what is now Greater Penang.
The Second Penang Bridge, officially named the Sultan Abdul Halim Muadzam Shah Bridge, was opened on 1 March 2014 by the Malaysian Prime Minister, Najib Tun Razak. This bridge, linking Batu Maung on Penang Island and Batu Kawan in Seberang Perai, is currently the longest bridge in Southeast Asia.
Iskandar Malaysia and Johor Bahru
The history of highways in Johor Bahru started in the 1980s when the city of Johor Bahru became a main southern international gateway to Malaysia from Singapore after the separation of Singapore from Malaysia on 9 August 1965.
The main reasons for building expressways in Johor Bahru are the increasing size of the Johor Bahru metropolitan area since it achieved city status on 1 January 1994, and the formation of the South Johor Economic Region (SJER) or Iskandar Development Region (IDR) (now Iskandar Malaysia) on 30 July 2006. Many townships have been constructed around Johor Bahru and industrial estates have been developed in areas such as Senai, Skudai, Tebrau, Pasir Gudang and Tampoi.
The first highway in Johor Bahru was the Skudai Highway linking Senai to the Johor Causeway, which was completed in 1985 and was also the first toll highway in Johor Bahru; however, the toll plaza near Senai was abolished in 2004. The Kempas Highway, the only state road in Malaysia constructed as a two-lane highway, was completed in 1994. The Malaysia–Singapore Second Crossing, the second link to Singapore after the Johor Causeway, was opened to traffic on 18 April 1998.
Other expressway projects in Johor Bahru are the Senai–Desaru Expressway (SDE), linking Senai in the west to Desaru on the east coast of Johor; the Johor Bahru Eastern Dispersal Link Expressway (EDL), which links the Pandan interchange of the North–South Expressway to the new Sultan Iskandar CIQ Building in the city centre; the Iskandar Coastal Highway, linking Nusajaya in the west to the city centre in the east; and the Johor Bahru East Coast Highway, linking Kampung Bakar Batu to Pasir Gudang via Permas Jaya and Taman Rinting.
In addition to Johor Bahru Inner Ring Road (JBIRR) as the inner ring road in Johor Bahru, Pasir Gudang Highway, Second Link Expressway and Johor Bahru Parkway also act as middle ring roads of the city. Second Link Expressway and the Senai–Desaru Expressway may form the outer ring roads of Johor Bahru.
East Malaysia
The history of highways in East Malaysia started in the 1960s when the federal government decided to build the Pan Borneo Highway, linking the states of Sarawak and Sabah.
The Pan Borneo Highway project is a joint project between the governments of Brunei and Malaysia. The project started as soon as Sarawak and Sabah joined the federation of Malaysia in 1963. The lack of a road network system in Sarawak was the main factor of the construction.
There is one tolled expressway, one tolled federal highway and one tolled state highway in Sarawak: the Tun Salahuddin Bridge in Kuching city, the Miri–Baram Highway in Miri Division, and the Lanang Bridge in Sibu. The Tun Salahuddin Bridge was the first and only toll expressway in East Malaysia. However, toll collection on the Lanang Bridge and the Tun Salahuddin Bridge was abolished by the Sarawak state government in 2015 and 2016 respectively.
On 31 March 2015, the dual-carriageway, toll-free Pan Borneo Expressway in Sarawak was unveiled by the Federal Government. The highway project will be implemented with Lebuhraya Borneo Utara Sdn Bhd (LBU) as the Project Delivery Partner (PDP) managing and supervising its construction.
Features
Expressway standards
The construction, standards, management and usage of expressways in Malaysia are subject to Federal Roads Act (Private Management) 1984. In Malaysia, expressways are defined as high-speed routes with at least four lanes (two in each direction) and should have full access control or partial access control. Most expressways in Malaysia are controlled-access expressways.
Expressways are defined as high-speed highways built under the JKR R6 rural highway standard: dual carriageways of at least 4 lanes (2 lanes per carriageway) with full access control, grade-separated interchanges and a high design speed of 120 km/h, allowing a maximum speed limit of 110 km/h. However, the section between Cahaya Baru and Penawar of the Senai–Desaru Expressway E22 is built as a two-lane single carriageway with features similar to the Swiss autostrasse, making it the first true two-lane controlled-access expressway in Malaysia, followed by the Teluk Panglima Garang–Pulau Indah section of the South Klang Valley Expressway (SKVE) E26. All expressways are considered federal highways, but they are administered by the Malaysian Highway Authority (MHA) and the respective concessionaire companies.
Highways, on the other hand, complement the national network of expressways and federal roads and are built under the JKR R5 rural highway standard, with a relatively high design speed (although not as high as that of expressways) of 100 km/h, allowing a maximum speed limit of 90 km/h. Highways are built with partial access control, and both grade-separated interchanges and at-grade crossings are permitted. However, a federal or state highway may be built to a standard almost equivalent to an expressway, apart from a lower speed limit; the Federal Highway is one example. Highways can be built either as dual carriageways or as two-lane single carriageways.
Before the mid-1990s, there was no specific coding system for the expressways. As more and more expressways were built, an expressway numbering system was applied to all of them. Expressways are labelled with the letter "E" followed by an assigned number; for example, the code for the North–South Expressway southern route is E2. Expressway signs have a green background with white text.
However, there are some exceptions in some highways. Some highways like Federal Highway (Federal Route 2) and Skudai Highway (Federal Route 1) retain their federal route codes. In addition, there are some highways in Malaysia which are classified as municipal roads such as Kuala Lumpur Middle Ring Road 1.
Highway exit numbers in Malaysia combine the expressway code number ("xx", which can be one or two digits) with a two-digit assigned exit number ("nn"). For example, the Johor Bahru exit at the end of the North–South Expressway is labelled Exit 257, where the last two digits (57) are the assigned exit number and the first digit (2) is the expressway route number (E2). Similarly, the Jalan Templer exit on the Federal Highway is labelled Exit 224, where the last two digits (24) are the assigned exit number and the first digit (2) is the federal route number (2). Expressways have distance markers in green (blue for federal expressways and highways) placed every 100 m.
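The composition of an exit label can be illustrated with a short Python sketch; the function names and the zero-padding of the exit number are illustrative assumptions rather than part of any official specification.

def exit_label(route_number: int, exit_number: int) -> str:
    # Compose an exit label as described above: the route code (one or two
    # digits) is followed by the two-digit assigned exit number.
    if not 0 <= exit_number <= 99:
        raise ValueError("assigned exit numbers have two digits (00-99)")
    return f"Exit {route_number}{exit_number:02d}"

def split_exit_label(label: str) -> tuple:
    # Recover (route_number, exit_number) from a label such as "Exit 257".
    digits = label.replace("Exit", "").strip()
    return int(digits[:-2]), int(digits[-2:])

# Examples from the text: exit 57 on the E2 and exit 24 on Federal Route 2.
assert exit_label(2, 57) == "Exit 257"
assert split_exit_label("Exit 224") == (2, 24)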
Route number categories
Expressway route numbers
Federal route numbers
Pavements
Most expressways are paved with typical tarmac, which is a mixture of fine stone chips and tar; however, some expressways are paved with concrete such as North–South Expressway Northern Route (from Bukit Lanjan Interchange, Selangor to Tapah interchange, Perak), New Klang Valley Expressway, North–South Expressway Southern Route (from Ayer Keroh interchange, Melaka to Tangkak interchange, Johor), SMART Tunnel and Skudai–Pontian Highway (from Universiti Teknologi Malaysia interchange to Taman Sri Pulai junction). Meanwhile, at Federal Highway linking Klang to Kuala Lumpur, the section of the expressway from Subang Jaya to Kota Darul Ehsan near Petaling Jaya is paved with asphalt.
Expressway monitoring and maintenance
Monitoring
Since 1986, Malaysian expressways have been built by private companies under the supervision of the government highway authority, Lembaga Lebuhraya Malaysia (Malaysian Highway Authority). Each private concession company, such as PLUS Expressways and ANIH Berhad (formerly MTD Prime), monitors and maintains its own expressways.
Maintenance
Projek Penyelenggaraan Lebuhraya Berhad (PROPEL) undertakes repair and maintenance works on highway facilities, such as road works and repairs, road line painting, cleaning of laybys and rest and service areas, grass trimming and landscaping along expressway corridors, and installation of road furniture. The PROPEL Response Team is a special rapid-response unit.
The Karak Expressway and East Coast Expressway are maintained by Alloy Consolidated Sdn Bhd.
Traffic management
Since late 2006, every expressway in Malaysia has been monitored by the Malaysian Highway Authority (LLM) Traffic Management Centre (LLM TMC). However, in some parts of Klang Valley, the expressways are monitored by the Integrated Transport Information System (ITIS); expressways in George Town, Penang are monitored by the Penang Island City Council.
Toll system
Types of toll systems
Tolled expressways and highways in Malaysia use either a closed toll system or an open toll system. All transactions are in Malaysian Ringgit (RM).
Open system
Users only have to pay at certain toll plazas within the open system range for a fixed amount.
Closed system
Users collect a toll ticket or touch in with their Touch 'n Go card at the entry toll plaza before entering the expressway (the North–South Expressway issues PLUSTransit cards, while other closed toll expressways such as the East Coast Expressway and the South Klang Valley Expressway issue their own transit cards), and then pay the toll, or touch out with the same Touch 'n Go card, at the exit toll plaza. The toll charged under this system is based on the distance travelled, plus the distance from the exit plaza to the Limit of Maintenance Responsibility (LMR).
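The distance-based charge can be sketched in a few lines of Python; the vehicle classes and per-kilometre rates below are hypothetical placeholders rather than actual Malaysian toll tariffs.

# Hypothetical per-kilometre rates in RM; real tariffs vary by concession and vehicle class.
RATE_PER_KM = {
    "class 1 (cars)": 0.15,
    "class 4 (taxis)": 0.08,
}

def closed_system_toll(entry_km: float, exit_km: float, vehicle_class: str) -> float:
    # The toll collected at the exit toll plaza is proportional to the distance travelled.
    distance = abs(exit_km - entry_km)
    return round(distance * RATE_PER_KM[vehicle_class], 2)

# A car entering at km 0 and exiting at km 120 would owe RM18.00 at these assumed rates.
print(closed_system_toll(0.0, 120.0, "class 1 (cars)"))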
Starting 18 June 2013, reusable PLUSTransit cards were introduced across the PLUS expressway network to replace paper transit tickets. Beginning 26 April 2017, PLUSTransit cards were no longer issued, ahead of the introduction of full electronic toll collection on all PLUS closed-system expressways; customers must now touch in and out with the same card.
Electronic toll collection
There are three types of electronic toll collection (ETC) systems: the Touch 'n Go card, the Smart TAG on-board unit, and the RFID tag. Touch 'n Go and Smart TAG have been compulsory on all expressways since 1 July 2004, following the instruction of the Works Minister, Datuk Seri S. Samy Vellu. Electronic payment systems previously used by individual highway operators, such as PLUS TAG on the PLUS expressway network, Express TAG on the Shah Alam Expressway, FasTrak on the Damansara–Puchong Expressway and Sprint Expressway, and SagaTag on the Cheras–Kajang Expressway, were abolished in a move to standardise the electronic payment method.
{| class="wikitable sortable" style="margin-left:1em; margin-bottom:1em; color:black; font-size:95%;"
|
! colspan="3" ! style="padding:1px .5em; color:white; background-color:green;" |Private expressway concession company
! colspan="4" ! style="padding:1px .5em; color:white; background-color:green;" |PLUS Expressways
|-
! rowspan="2" |Toll collection systems
! rowspan="2" |
! rowspan="2" |
! style="padding:1px .5em; color:purple;" |
! rowspan="2" |
! rowspan="2" |
! style="padding:1px .5em; color:purple;" |
! rowspan="2" ! style="padding:1px .5em; color:purple;" |PLUS RFID
|-
! style="padding:1px .5em; color:purple;" |MyRFID
! style="padding:1px .5em; color:purple;" |VEP RFID
|-
|Touch 'n Go Generic Card
|
|
|
|
|
|
|
|-
|Touch 'n Go Corporate Card
|
|
|
|
|
|
|
|-
|Touch 'n Go Zing Card
|
|
|
|
|
|
|
|-
|Touch 'n Go eWallet
|via PayDirect link to Generic Card
|via PayDirect link to Generic Card
|
|
|
|VEP enforcement deferred until further notice
|after fully launch nationwide
|-
|Bank Card (Debit/Credit)
|
|
|
|
|
|
|under pilot test
|-
|Notes
| colspan="2" |Nationwide toll collection
|<small>RFID tag issuance by Touch 'n Go for Malaysian registered vehicle.</small>
| colspan="2" |Nationwide toll collection
|RFID tag issuance by Touch 'n Go for foreign registered vehicle.
|RFID tag issuance by PLUS Expressways.
|}
Multi Lane Free Flow
Multi Lane Free Flow (MLFF) is an electronic toll collection system that allows tolls to be collected at highway speed without vehicles having to stop. With MLFF, the current toll lanes at toll plazas will be replaced with readers on gantries across the highway that detect vehicles and deduct the toll using the existing ETC systems once fully implemented. The Malaysian Highway Authority (MHA) is planning to implement the MLFF system on all highways in stages starting from 2010.
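The intended flow of an MLFF gantry transaction can be illustrated with a short, hypothetical Python sketch; the tag identifiers, balances and fallback actions are assumptions for illustration only and do not describe the actual Malaysian system.

# Prepaid balances in RM, keyed by a hypothetical tag identifier.
accounts = {"TAG-0001": 50.00, "TAG-0002": 1.20}

def gantry_pass(tag_id: str, toll_due: float) -> str:
    # A gantry reader identifies the passing vehicle and deducts the toll without
    # requiring it to stop; unreadable or underfunded accounts are flagged instead.
    balance = accounts.get(tag_id)
    if balance is None:
        return "no valid tag: capture number plate for enforcement"
    if balance < toll_due:
        return "insufficient balance: flag account for follow-up"
    accounts[tag_id] = round(balance - toll_due, 2)
    return f"deducted RM{toll_due:.2f}; remaining balance RM{accounts[tag_id]:.2f}"

print(gantry_pass("TAG-0001", 2.50))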
Toll rebate
Beginning 1 September 2009, the 20 per cent rebate given to motorists who pay toll charges more than 80 times a month can be accumulated for up to six months. The rebate can be redeemed at 126 locations, which were to be announced in due course.
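The rebate rule amounts to a simple conditional, as the hypothetical Python sketch below shows; it assumes the rebate applies to the month's total toll spend, which is an illustrative simplification rather than the official calculation method.

def monthly_rebate(toll_payment_count: int, total_toll_rm: float) -> float:
    # A 20 per cent rebate applies only to motorists who pay tolls more than 80 times a month.
    return round(0.20 * total_toll_rm, 2) if toll_payment_count > 80 else 0.0

print(monthly_rebate(85, 400.00))  # RM80.00 rebate
print(monthly_rebate(60, 400.00))  # no rebate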
Toll rate classes
There are fixed toll rate classes for every Malaysian expressway except the Penang Bridge and the SMART Tunnel, which use different toll rates.
Expressways
These classes apply to every expressway in Malaysia (including Johor–Singapore Causeway, Malaysia–Singapore Second Link and Sultan Abdul Halim Muadzam Shah Bridge):
Penang Bridge
SMART Tunnel
Facilities
There are several facilities provided along Malaysian expressway as follows:
Rest and Service Area – Rest and service areas (RSA) are located roughly every 60 km along interstate expressways such as the North–South Expressway and the East Coast Expressway; some urban expressways, such as the Shah Alam Expressway, the Damansara–Puchong Expressway and the Guthrie Corridor Expressway, also provide RSAs. A typical RSA may have a food court, fruit stalls, craft shops, public toilets and baths, public telephones, huts (wakaf), petrol stations and prayer rooms (surau) for Muslims. Some RSAs may also have ATMs, motels such as the "Highway Inn", convenience shops such as Highway Mart and 7-Eleven, and fast food restaurants. Wireless broadband internet is now available in RSAs; the Tapah RSA in Perak was the first RSA on a Malaysian expressway to provide wireless broadband internet facilities.
Layby – Laybys are basic parking lots beside the expressways that may also have public toilets and baths, fruit stalls, huts (wakaf) and public telephones. Some laybys may have a few food stalls and petrol stations. Usually, there are about two laybys between every two RSAs.
Overhead restaurants – Overhead restaurants are special RSAs with restaurants built above the expressway. Unlike typical laybys and RSAs, which are accessible from one direction only, an overhead restaurant is accessible from both directions of the expressway. There are three overhead bridge restaurants in Malaysia – Sungai Buloh (North–South Expressway Northern Route), Ayer Keroh (North–South Expressway Southern Route) and USJ (North–South Expressway Central Link). The PLUS Art Gallery is located in the Ayer Keroh Overhead Bridge Restaurant.
Customer Service Centre (CSC) – Every toll plaza in Malaysia has a customer service centre. This centre includes highway maps, toll fare lists, information counters, Touch 'n Go card reload counters, Touch 'n Go and Smart TAG sales counters and more.
Touch 'n Go Hub – The hub for the Touch 'n Go and Smart TAG sales.
Touch 'n Go Spot – The spot for the Touch 'n Go and Smart TAG sales counter. Usually can be found at all petrol stations.
Touch 'n Go Drive-Through Purchase and Refill card lane (POS) – Touch 'n Go card users can refill existing cards or purchase new Touch 'n Go cards directly from the lane without going to a customer service centre. These Touch 'n Go POS lanes are available on all expressways in the PLUS expressway network.
Vista point – Vista points are special parking areas that allow motorists to see scenic views of the expressway; available only at Senawang (both directions) and Ipoh (northbound only).
Motorcycle shelter – Motorcycle shelters provide protection and shelter for motorcyclists from heavy rain. Usually, most motorcycle shelters are located below overhead bridges, but some may be special booths.
Motorcycle lane – On some stretches of expressway, there is an additional lane designated for motorcycles. These lanes are usually about half the width of a normal lane and are positioned on the extreme left side of the main carriageway in each direction of travel. Such lanes are found on the Shah Alam Expressway, the Butterworth–Kulim Expressway, the Federal Highway and the Guthrie Corridor Expressway.
Emergency phones – Emergency phones are located every 2 km along interstate expressways; useful if there are breakdowns on the expressway. Attendants from the nearest toll plaza will tow the broken cars to the nearest workshops.
Tunnel emergency exits (Ventilation and escape shafts) – Tunnel emergency exits are located every 1 km along expressway tunnels, such as SMART Tunnel, Penchala Tunnel on Sprint Expressway, Menora Tunnel on North–South Expressway and Genting Sempah Tunnel on Kuala Lumpur–Karak Expressway.
Highway hotline service – Every expressway has a hotline service.
Highway patrol unit – Every expressway has a highway patrol unit.
Highway helicopter patrol unit – This unit is available at all PLUS Expressway networks only.
Traffic Control and Surveillance System (TCSS) – The Traffic Control and Surveillance System (TCSS) comprises a number of traffic monitoring systems such as Traffic Closed-Circuit Television (CCTV), Traffic Monitoring Centre (TMC), Variable Message Systems (VMS) and Vehicles Breakdown Sensors.
Integrated Transport Information System (ITIS) – This system is normally found in the Klang Valley, Penang and Iskandar Malaysia/Johor Bahru.
Road Transport Department (JPJ) Enforcement Stations – These stations can be found at Karak Expressway and East Coast Expressway. These JPJ enforcement stations have weighing bridges to detect heavy vehicles.
Police Watch Tower – These towers can be found at all interstate expressways in Malaysia to monitor traffic situations during festive seasons.
Warning lights – These yellow lights can be found in hazardous and accident areas.
Automated Enforcement System (AES) – These systems can be found at accident-prone areas and the red-light camera at traffic light junctions.
Speed Indicator Display (SID) – These systems can be found at the Kerinchi Link of the Sprint Expressway to remind drivers not to exceed the permitted speed limit.
Runaway truck ramp – A traffic device that enables vehicles with braking problems to stop safely. These ramps are found in mountainous areas, such as near the Ipoh North Toll Plaza interchange along the North–South Expressway Northern Route.
Interchanges
These are the different types of expressway interchanges in Malaysia:
Trumpet interchange – It is usually found in every closed toll system expressway like the North–South Expressway and East Coast Expressway. The trumpet design is popular as a highway exit with toll booths for the closed toll system because of the minimum construction cost of its toll booths.
Cloverleaf interchange – It is commonly used in Malaysia to link two crossing expressways because of its relatively low cost. The biggest cloverleaf highway interchange in Malaysia is Bulatan Darul Ehsan, a.k.a. the Shah Alam Cloverleaf Interchange, on the Federal Highway in Shah Alam, Selangor.
Diamond interchange – It is commonly used in Malaysia where an expressway crosses over municipal roads.
Multi-Level Stacked Diamond Interchange – It is a diamond interchange upgraded into a multi-level interchange. Examples include the Bandar Sunway Interchange between Damansara–Puchong Expressway and New Pantai Expressway in Petaling Jaya.
Diverging diamond interchange – This is a rare type of diamond interchange which involves temporary lane changes, i.e. from left-hand traffic to right and then back to the left. Like SPUI, it allows traffic from two opposite directions to turn right at the same time but does not allow traffic to go straight ahead. Examples include the Freescale Interchange at Damansara–Puchong Expressway.
Single-point urban interchange (SPUI) – A SPUI is similar to a typical diamond interchange but allows traffic from two opposite directions to turn right at the same time; however, it does not allow traffic to go straight ahead. Examples include the Danga City Mall interchange at the Skudai Highway / Johor Bahru Inner Ring Road.
Roundabout interchange – Very popular in Malaysia.
Parclo interchange – An example of this is the Port Dickson Interchange on the North–South Expressway and Kapar Interchange on the New North Klang Straits Bypass.
Directional T interchange – These interchanges are found at Nilai North and Nilai Interchanges of North–South Expressway and also Setia Alam Interchange and Bukit Lanjan Interchange on New Klang Valley Expressway.
Stacked Interchange – Examples of these are the Penchala Interchange on the Damansara–Puchong Expressway and Penchala Link of the Sprint Expressway.
Multi-Level Stacked Interchange – Examples of these are the Ampang Interchange on the Jalan Ampang and the Ampang flyover of the Kuala Lumpur Middle Ring Road 2.
Multi-Level Stacked Roundabout – There are three-level and four-level roundabouts found in Malaysia. Examples of four-level roundabouts include the Segambut Interchange of Kuala Lumpur–Rawang Highway and Kewajipan Interchange of New Pantai Expressway.
Double U-turn interchange – These interchanges are found at the Tampoi North interchange on the Skudai Highway and on the Pasir Gudang Highway.
Left in/left out (LILO) junction – These junctions restrict the ingress and egress of the minor roads; they only permit left-turn entries. To turn to opposite direction, motorists may need to make a U-turn on the expressway. These junctions are very common in urban expressways such as in Sungai Besi Expressway and Damansara–Puchong Expressway.
Safety
Speed limits
The default national speed limit on Malaysian expressways is 110 km/h (68 mph), but lower limits (such as 90 km/h (56 mph) or 80 km/h (50 mph)) apply in certain areas, especially on single-carriageway expressways, in large urban areas, in locations prone to crosswinds or heavy traffic, and on dangerous mountainous routes; a limit of 60 km/h (37 mph) applies 1 km before toll plazas. Speed traps are also deployed by the Malaysian police at many places along the expressways.
Excluded vehicles
These vehicles may not use the expressways:
PLUS Expressways networks and East Coast Expressway (ECE)
Bicycles
Steam roller
Tractors
Excavators and backhoes
North–South Expressway Northern Route
(Sungai Buloh–Bukit Lanjan), New Klang Valley Expressway (Shah Alam–Jalan Duta), Federal Highway (Sungai Rasau–Batu Tiga): Monday to Friday (Except Public Holidays) from 6.30am to 9.30am
Heavy vehicles weighing 10,000 kg or more
Ampang–Kuala Lumpur Elevated Highway (AKLEH)
Bicycles
Maju Expressway (MEX) (Kuala Lumpur–Putrajaya Expressway (KLPE))
Bicycles
SMART Tunnel
Motorcycles and bicycles
Bus
Steam roller
Heavy vehicles like lorries, trailers, etc.
Tractors
Excavators and backhoes
Accidents
Malaysian expressways are potential sites of fatal highway accidents, especially during festive seasons, although accidents in Malaysia also occur on federal, state and municipal roads. Most road accidents are caused by road users who drive dangerously above the speed limit.
Emergency lanes are reserved for emergency vehicles and for stopping in the event of a breakdown. However, many Malaysian drivers still disregard this and unsafely use the emergency lanes as an alternative route during traffic jams.
Accident-prone areas
km 25 of Gunung Pulai near Kulai, Johor on North–South Expressway Southern Route
km 171 to 141 of Tangkak–Pagoh stretch on North–South Expressway Southern Route
km 25.1 of Jalan Duta toll plaza, Kuala Lumpur on North–South Expressway Northern Route
km 293 to 310 of North–South Expressway Northern Route from Gopeng Interchange to Tapah Interchange (Gua Tempurung stretch)
km 256 of old Jelapang toll plaza, Perak on North–South Expressway Northern Route
km --- to --- of North–South Expressway Northern Route from Menora Tunnel to Sungai Perak Rest and Service Area, Perak
km 30 to 35 of Gombak, Selangor on Kuala Lumpur–Karak Expressway (not far from Genting Sempah Tunnel).
Sungai Besi sharp corner flyover bridge from Jalan Dewan Bahasa (formerly Jalan Lapangan Terbang) on Kuala Lumpur Middle Ring Road 1 towards Kuala Lumpur–Seremban Expressway.
km of Kelana Jaya on Damansara–Puchong Expressway near Kelana Jaya LRT stations.
km of Puchong on Damansara–Puchong Expressway near Tractors Malaysia.
km of Damansara Utama–Section 14 on Sprint Expressway.
Subang Jaya aka Persiaran Tujuan Interchange on the railway bridge corner (from Kuala Lumpur to Subang Jaya) at the exit of Federal Highway.
km --- to --- of the East Coast Expressway Phase 2 from Jabor to Kuala Terengganu.
During workdays/peak hours
During workdays and peak hours, several routes on the expressways are restricted, especially in the Klang Valley and Penang, to ease morning congestion; these include the Federal Highway (Sungai Rasau–Subang), the New Klang Valley Expressway (Shah Alam–Jalan Duta), the North–South Expressway Northern Route (Rawang–Bukit Lanjan) and the Penang Bridge. Heavy vehicles (except buses and tankers), whether laden or unladen, weighing 10,000 kg or more are not allowed to enter these expressways between 6:30 am and 9:30 am, Monday to Friday (except public holidays). Compound fines are issued to heavy vehicles that flout the rule.
During festive seasons
During festive seasons such as Chinese New Year, Deepavali, Christmas and Hari Raya, activities such as construction, road repairs and maintenance works are temporarily stopped to ensure smoother traffic flow on the expressways. Meanwhile, heavy goods vehicles such as logging trucks, cement trucks, container trucks, construction materials trucks and other heavy goods vehicles (except tanker lorries, provision goods trucks, cranes, tow trucks, fire engines, ambulances, etc.) are banned from using roads, highways and expressways during festive seasons. A massive nationwide operation known as Ops Selamat (previously called Ops Sikap) is held annually by the Malaysian police to ensure safety on all roads in Malaysia during festive seasons. To smooth traffic flow during peak periods in the festive seasons, a Travel Time Advisory (TTA) has been set up on all interstate expressways such as the PLUS expressway network and the East Coast Expressway. Some Touch 'n Go reload lanes on the PLUS network are temporarily closed during festive seasons on dates announced by PLUS, so customers are required to reload their cards before entering the highway.
Automated Enforcement System
The Automated Enforcement System (AES) is a road safety enforcement system that monitors all federal roads, highways and expressways in Malaysia. The system came into effect on 22 September 2012.
Types of AES
Speed camera
Red light camera
Natural hazards
Other hazardous conditions on expressways include landslides, crosswinds, fog, storms, road damage, paddy straw (jerami) burning and flash floods.
List of landslide-prone areas
km of Bukit Lanjan–Jalan Duta on North–South Expressway Northern Route
km of Bukit Lanjan between Kota Damansara–Bukit Lanjan on New Klang Valley Expressway
km of Gua Tempurung between Gopeng–Tapah on North–South Expressway Northern Route
km of Bukit Merah between Bukit Merah–Taiping (North) on North–South Expressway Northern Route
km of Sungai Perak–Jelapang Toll Plaza on North–South Expressway Northern Route
km of Bukit Berapit between Changkat Jering–Kuala Kangsar on North–South Expressway Northern Route
km of Gombak–Genting Sempah on Karak Expressway
km of Bukit Tinggi–Bentong on Karak Expressway
km of Gunung Ma'okil between Pagoh–Yong Peng on North–South Expressway Southern Route
km of Puchong–Seri Kembangan on Damansara–Puchong Expressway near Puchong Selatan toll plaza.
km 15 of Skudai–Senai (North) on North–South Expressway Southern Route near Skudai toll plaza.
List of crosswind-prone areas
km of Senawang–Pedas-Linggi on North–South Expressway Southern Route
km of Alor Gajah–Ayer Keroh on North–South Expressway Southern Route
km along East Coast Expressway
List of flash flood-prone areas
km 15 of Batu Tiga Interchange on Federal Highway
km of Shah Alam Interchange on New Klang Valley Expressway
km of Sungai Besi on Sungai Besi Expressway near Razak Mansion
km of Seberang Jaya Interchange underpass on Butterworth–Kulim Expressway near Carrefour Seberang Jaya
km of Alor Star–Jitra of the North–South Expressway Northern Route
km 173.9–171.9 of Jasin, Melaka and Tangkak, Johor on North–South Expressway Southern Route
Controversial issues
There are several controversial issues regarding the construction of expressways. The main issue is the increase of toll rates, which can be a huge burden especially for residents of Kuala Lumpur and the surrounding Klang Valley conurbation.
There are also various parties who question the capability of the numerous expressways in Klang Valley to overcome traffic congestion, which does not show signs of improvement with the construction of new expressways. Three chief factors were blamed for the urban expressway congestion, namely the short-sighted policies by the authorities, greedy property developers, and the failure of local municipal councils to control the development in the Klang Valley.
There are also several protests being held by residents of some housing areas being affected by several planned expressways, such as the Sungai Besi–Ulu Klang Elevated Expressway (SUKE), Damansara–Shah Alam Elevated Expressway (DASH) and Kinrara–Damansara Expressway (KIDEX Skyway) (project scrapped in 2014). Environmental issues such as road noise and worsening congestion became the chief reasons of those protests.
Other controversial issues include the following:
The cracks found on beams on the Kepong Flyover in Kuala Lumpur Middle Ring Road 2 (MRR2) on 10 August 2004.
The collapse of a flyover at the Setia Alam Interchange on the New Klang Valley Expressway during construction on 10 July 2005, in which about four lives were lost.
The fall of eight I-beams at the Pajam Interchange in Negeri Sembilan during the construction of the Kajang–Seremban Highway (LEKAS Highway) on the night of 27 September 2007.
The ramp collapse at the Batu Maung Interchange, on the Batu Maung side of the Penang Second Bridge, during construction on 6 June 2013, which killed one person.
Facts
The Tanjung Malim–Slim River tolled road (Federal Route 1) is the first tolled highway in Malaysia.
The Sultan Yahya Petra Bridge (Federal Route 3 ) is the first tolled bridge in Malaysia.
PLUS Expressways is the largest highway concessionaire company in Malaysia, the largest listed toll expressway operator in Southeast Asia, and the eighth largest in the world. The second largest in Malaysia is Prolintas.
The longest bridge in Malaysia is Sultan Abdul Halim Muadzam Shah Bridge (Penang Second Bridge) E28 with a total length . The second longest bridge is Penang Bridge E36 with a total length .
The longest expressway in Malaysia is North–South Expressway E1, E2, with a total length .
The longest expressway river bridge in Malaysia is the Sungai Johor Bridge on the Senai–Desaru Expressway E22, with a total length of 1.7 km (1,708 m). The bridge is also the longest single-plane cable-stayed bridge in Malaysia.
The section between Cahaya Baru and Penawar of the Senai–Desaru Expressway E22 is built as a two-lane single carriageway, making it the first two-lane controlled-access expressway in Malaysia.
The first cable-stayed land bridge in Malaysia is the LDP cable stayed bridge at the Freescale interchange on Damansara–Puchong Expressway E11 with a total length
The most expensive expressway section is the Gopeng–Tapah section of the North–South Expressway Northern Route E1. At RM200 million, this translates to about RM20 million per kilometre, implying a section length of roughly 10 km. Embankment strengthening was the major contributor to this cost.
The largest toll plaza in Malaysia is Bandar Cassia-PLUS Toll Plaza (Gateway Arch Toll Plaza) in Penang Second Bridge E28 with over 28 lanes (excluding additional motorcycle toll plaza). The second largest toll plaza is Sungai Besi Toll Plaza in North–South Expressway Southern Route E2 with over 18 lanes (excluding additional toll booths).
The highest toll plaza in Malaysia is Setul Toll Plaza in Kajang–Seremban Highway E21 located at the hilly top of Gunung Mantin–Seremban at 258 metres above sea level.
Bandar Saujana Putra Interchange on North–South Expressway Central Link E6 and South Klang Valley Expressway E26 is the only expressway interchange in Malaysia to have two toll plazas.
The North–South Expressway E1, E2, is the first expressway in Malaysia that provided an Overhead Bridge Restaurant (OBR).
The busiest expressway in Malaysia is the Federal Highway route 2 from Klang to Kuala Lumpur.
Federal Highway route 2 is the first highway in Malaysia to have a motorcycle lane.
The first highway tunnel in Malaysia is Genting Sempah Tunnel on Kuala Lumpur–Karak Expressway E8.
The first elevated highway in Malaysia is Ampang–Kuala Lumpur Elevated Highway (AKLEH) E12.
The longest flyover bridge in Malaysia is Batu Tiga Flyover on North–South Expressway Central Link E6.
SMART Tunnel E38 (4 km (2.5 miles)) is the longest motorway tunnel, as well as the first double-decked tunnel and the first tunnel that has a stormwater tunnel and a motorway tunnel in Malaysia.
The widest tunnel in Malaysia is Penchala Tunnel on Penchala Link of Sprint Expressway E23.
The first expressway with double-decked carriageway in Malaysia is Kerinchi Link on Sprint Expressway E23.
The Kuala Lumpur–Karak Expressway E8 is the only expressway in Malaysia with separate carriageways at Genting Sempah on the Selangor–Pahang border (one on the Selangor side and one on the Pahang side), due to the geography of the area.
The biggest cloverleaf highway interchange in Malaysia is Bulatan Darul Ehsan Interchange of Federal Highway route 2 and Kemuning–Shah Alam Highway E13 in Shah Alam, Selangor.
The largest highway interchange in Malaysia is Gelugor Complex Interchange at Penang Bridge E36.
The PLUS Speedway (formerly ELITE Speedway) is the first go-kart circuit on a Malaysian expressway. The circuit is located near the USJ Rest and Service Area on the North–South Expressway Central Link E6.
The Tapah Rest and Service Area (both directions) on the North–South Expressway Northern Route E1 in Perak was the first rest and service area on a Malaysian expressway to have wireless broadband internet facilities.
The PLUS Art Gallery in Ayer Keroh Overhead Bridge Restaurant (OBR) North–South Expressway Southern Route E2 is the first highway art gallery in Malaysia.
The Perasing Rest and Service Area on the East Coast Expressway E8 is the biggest rest and service area on the ECE network. It is accessible only via a directional T interchange, probably the only one of its type in Malaysia.
The largest advertising board on the Malaysian expressways is the Giant Wau Kite Spectacular Advertising Board on the North–South Expressway Southern Route E2 near Sungai Besi and the North–South Expressway Central Link E6 near Putrajaya.
While most toll expressways in Malaysia use green signboards with white letters, the MetaCorp expressway network (East–West Link Expressway and Kuala Lumpur–Seremban Expressway) E37 uses blue signboards instead, like municipal expressways.
Before 2007, federally funded expressways had no exit numbering system. An exit numbering system similar to that of the toll expressways was introduced in 2007 for the Federal Highway route 2 (followed later by the Putrajaya–Cyberjaya Expressway route 29), making the Federal Highway the first federal expressway with an exit numbering system.
The Tun Salahuddin Bridge E24 is the first and currently the only toll expressway in East Malaysia, while almost all other toll expressways are in Peninsular Malaysia. The Tun Salahuddin Bridge is also the only toll expressway without any grade-separated interchanges.
The Machap Rest and Service Area (north bound) at the North–South Expressway Southern Route E2 is the first fully air-conditioned rest area in Malaysia.
The Sungai Perak Rest and Service Area (south bound) at the North–South Expressway Northern Route E1 in Perak is the first rest and service area in the Malaysian expressway to have an eco-management theme known as "The Green Trail" or "Jejak Hijau".
Putrajaya–Cyberjaya Expressway route 29 is the first future federal highway on Multimedia Super Corridor (MSC).
PLUSMiles is the first and only toll rebate loyalty programme in the Malaysian expressways.
Shah Alam is the first and currently the only city in Malaysia to have its own municipal route numbering system, while other municipal routes in Malaysia do not bear any route numbering scheme.
The longest closed toll collection system coverage in Malaysia is from Juru toll plaza to Skudai toll plaza (previously Ipoh South toll plaza to Skudai toll plaza), which runs through the North–South Expressway Northern Route E1, New Klang Valley Expressway E1, North–South Expressway Central Link E6 and North–South Expressway Southern Route E2.
The North–South Expressway Northern Route E1 is the first and the only expressway in Malaysia to have a runaway truck ramp near Jelapang, Ipoh.
There is one semi tunnel on the East–West Highway (Route 4) from Gerik, Perak to Jeli, Kelantan. It is probably the only one of its type in Malaysia.
The first true two-lane expressway with full access control in Malaysia is the Cahaya Baru–Penawar section of the Senai–Desaru Expressway E22.
The Johor Bahru Eastern Dispersal Link Expressway E14 is the only toll-free controlled-access expressway in Malaysia.
The Kota Bharu–Kuala Krai Expressway, which is under construction, will be the first state-owned controlled-access expressway once completed.
The East Coast Expressway E8 is the first expressway to have different concessionaires: ANIH Berhad (Phase 1) and Lebuhraya Pantai Timur 2 (LPT2) Sdn Bhd (Phase 2).
The Jabur–Bukit Besi and Telemung–Kuala Terengganu sections of the East Coast Expressway E8 Phase 2 are the first expressway sections built by the Malaysian Public Works Department (JKR).
List of expressways and highways
In popular culture
Films
These films were filmed mainly on Malaysian expressways:
Televisions
Dramas
Documentary
Video games
Burnout Dominator – The Spiritual City track is based on the real-life Kuala Lumpur city, with signboards leading to some expressways such as E2 North–South Expressway Southern Route and E23 Sprint Expressway.
Gallery
See also
Road signs in Malaysia
National Speed Limits
Malaysian Federal Roads system
Malaysian Highway Authority
Malaysian State Roads system
Multi Lane Free Flow
Electronic Toll Collection
Teras Teknologi (TERAS)
Highway
Driveway
Freeway
Motorway
Autobahn
Autoroutes of France
Controlled-access highway
Interstate Highway System
Expressways of Japan
Expressways of China
Highways in Colombia
Asian Highway Network
References
External links
Malaysian Highway Authority official page
Malaysian Highway Authority traffic information page
"Travel Smarter" – Malaysian Highway Concessionaires Company Association (PSKLM) highway information page
PLUS Malaysia Berhad – concession holder for:
North–South Expressway
Malaysia–Singapore Second Link
North–South Expressway Central Link
Seremban–Port Dickson Highway
Butterworth–Kulim Expressway
Penang Bridge
ANIH Berhad – concession holder for:
Karak Expressway
East Coast Expressway
Kuala Lumpur–Seremban Expressway
East–West Link Expressway
Gamuda Berhad – concession holder for:
SPRINT – Sprint Expressway
LITRAK – Damansara–Puchong Expressway
KESAS – Shah Alam Expressway
SMART – SMART Tunnel
Road Builder – concession holder for:
Besraya – Sungai Besi Expressway
NPE – New Pantai Expressway
LEKAS – Kajang–Seremban Highway
Grand Saga website
Propel Berhad – concession holder for highway maintenance in Malaysia
Motorways Exitlist – Exilist of expressway in Malaysia
Komuniti PTT Zello Channel Lebuhraya Utara–Selatan
Malaysian Public Works Department |
28396947 | https://en.wikipedia.org/wiki/Geac%20Computer%20Corporation | Geac Computer Corporation | Geac Computer Corporation, Ltd was a producer of enterprise resource planning, performance management, and industry-specific software based in Markham, Ontario. It was acquired by Golden Gate Capital's Infor unit in March 2006 for US$1 billion.
History
Geac was incorporated in March 1971 by Robert Kurt Isserstedt and Robert Angus ("Gus") German.
Geac started with a contract with the Simcoe County Board of Education to supply onsite accounting and student scheduling. They programmed inexpensive minicomputers to perform tasks that were traditionally done by expensive mainframe computers.
Hardware/software
Geac designed additional hardware to support multiple simultaneous terminal connections, and with Dr Michael R Sweet developed its own operating system (named Geac) and own programming
language (OPL) resulting in a multi-user real-time solution called the Geac 500.
The initial implementation of this system at Donlands Dairy in Toronto led to a contract at Vancouver City Savings Credit Union ("Vancity") in Vancouver, British Columbia, to create a real-time multi-branch online banking system. Geac developed hardware and operating system software to link minicomputers together, and integrated multiple-access disk drives, thereby creating a multi-processor minicomputer with a level of protection from data loss. Subsequently, Geac replaced the minicomputers with a proprietary microcoded processor of its own design, resulting in vastly improved software flexibility, reliability, performance, and fault tolerance. This system, called the Geac 8000 was introduced in 1978.
Geac introduced its library management software in 1977, and a number of well-known libraries adopted it, including the US Library of Congress and the Bibliothèque Nationale de France. In the mid-1980s, it released a suite of office automation applications (calendar, word processor, e-mail, spreadsheet, etc.) running on the 8000. This application suite was piloted by the federal Office for Regional Development (ORD, later absorbed by Industry Canada) and later still was used by the NAFTA Trade Negotiations Office. Compared to similar LAN-based office initiatives of the same period, Geac's multi-user minicomputer-based offering provided significantly higher availability, and its software developers were exemplary in fixing bugs promptly and responding to requests for enhancements.
Financials
During the 1990s the company successfully embarked on an aggressive acquisition strategy led by Steve Sadler, CEO, and expanded into a wide range of vertical markets, including newspaper publishing, health care, hospitality, property management, and others.
Its 1999 acquisition of JBA Holdings PLC, led by the new CEO, Doug Bergeron, doubled the size of the company but became a financial disaster.
Geac's acquisitions were not aligned to any customer focused strategy: they covered a wide range of products and geographies, and many analysts accused Geac of "financial engineering".
In the early 2000s, the company faced significant financial issues: in April 2001, the company's US$225 million credit line was in default, and during FY2001, Geac posted a loss of US$169 million on revenues of US$552 million. Geac updated some of its legacy software and replaced its management team, ultimately tapping its chairman, Charles S. Jones, to be the CEO and Donna DeWinter to be the CFO (DeWinter later became CEO of Nexient Learning), and making Craig Thorburn the senior vice president of acquisitions (while he was a partner at Blake, Cassels & Graydon). Geac then paid off its bank loans, significantly improved its profit margins, and its stock began to rise; it listed on the NASDAQ. It also embarked on a strategy of establishing a single focus for its software products: selling software to the chief financial officer of client organizations. It divested its real estate software operations after making them profitable and growing, and acquired two business performance management companies: Comshare and Extensity. Geac also obtained a US$150 million credit line and fended off a proxy fight brought by Crescendo Partners.

In March 2006, the company was acquired by Infor Global Solutions for US$1 billion, or $11.10 per share, compared to US$1.12 five years earlier, providing investors roughly a tenfold return. In fiscal 2001, the company had posted a US$169.1 million loss; in fiscal 2005, Geac posted net income of US$77 million.
After it was acquired, several executives of Geac, including CEO Charles S. Jones, left the company to form Bedford Funding, a private equity fund that invests in software companies. While Geac was headquartered in Canada, Mr. Jones lived in Westchester County, NY, and also served on the board of the Westchester Land Trust, to which he donated over $100,000 in 2006. Mr. Jones would also later donate $100,000 to Iona Preparatory School.
Bedford Funding would later make investments in several IT companies, including MDLIVE.
Geac OPL (Own Programming Language)
Geac invented ZOPL.
Geac's main low-level programming language was called OPL (Our Programming Language), which later became ZOPL (or Z-OPL). It was in many respects similar to, and possibly derived from, BCPL.
Psion used its own, unrelated language also called OPL, which was nothing like Geac's OPL. Geac did, however, later use Psion Organiser hand-held devices in conjunction with its library management systems, which is where the long-standing confusion may have originated. The Organiser was chosen because it used the same microprocessor as the Epson device Geac had used previously, and it could be fitted with a barcode reader for scanning the barcodes on books and on borrowers' library cards. At the time the Organiser was being evaluated for use in mobile libraries, however, its reader could not read Codabar, the barcode format used by Geac. Geac had already developed the machine code for the Epson computer, which was compatible with the Organiser's barcode reader, and supplied that code to Psion; as a result the Psion barcode reader could read Codabar as well as Plessey barcodes.
ZOPL was a fairly low-level programming language, with some interesting and unusual features. Variables could be declared as DCL or BDCL types. DCL variables started at the top of memory, location 0; BDCL variables started further down the memory stack. There was no concept of other types, such as integer or character; in effect you were declaring an area of memory with a name. For example, if you declared:
DCL Fred (10)
DCL Alice (20)
then you had declared two lumps of memory, Fred and Alice. Fred started at memory location 0 and had 10 bytes of memory (each byte of 8 bits). Alice started at position 10, immediately after Fred, and had 20 bytes of memory. You could put data into Fred by assigning Fred a value, but you could also do so using the address of Fred, with an offset if required; so $Fred+3 would be at address 2 in memory (i.e. the 3rd word in memory). You could put information into Alice in the same way, but also by using Fred with an offset greater than 9, because Alice started immediately after Fred. There was nothing to stop you from putting data into Alice by referencing Fred with a suitable offset.
Similarly, variables passed as parameters to functions or subroutines were actually being passed as addresses. You could retrieve data from Fred or Alice by using the contents of an address (|$Fred+3, for example).
It was a very versatile language, though you had to be careful in its use, for obvious reasons.
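The overlapping-memory behaviour described above can be mimicked in a few lines of Python, with a bytearray standing in for the machine's flat memory. The names, sizes and offsets follow the Fred/Alice example; the helper functions are invented for illustration and are not ZOPL syntax.

# Sketch of the flat-memory model described above: "declaring" a variable
# simply names an offset into one shared block of memory, so writing past
# the end of Fred lands in Alice. An illustration only, not ZOPL.
memory = bytearray(64)          # stand-in for the machine's DCL region
declarations = {}               # name -> (start offset, length)
next_free = 0

def dcl(name, length):
    # Reserve `length` bytes starting at the next free offset.
    global next_free
    declarations[name] = (next_free, length)
    next_free += length

dcl("Fred", 10)                 # Fred occupies offsets 0..9
dcl("Alice", 20)                # Alice starts immediately after Fred

def poke(name, offset, value):
    # Store a byte at `name` + `offset`, with no bounds checking, as in ZOPL.
    start, _ = declarations[name]
    memory[start + offset] = value

poke("Fred", 3, 0xAA)           # a write inside Fred
poke("Fred", 12, 0xBB)          # an offset past Fred's 10 bytes lands in Alice
alice_start, _ = declarations["Alice"]
print(memory[alice_start + 2])  # prints 187 (0xBB), written via Fred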
Geac Corporation's Own Programming Language (OPL) found uses in:
a 1970s high-level minicomputer language on Hewlett-Packard systems
conventional business applications on minicomputers
Geac had a number of higher-level languages, such as Hugo (used by the Library division). These languages were quite different from ZOPL, and had only a limited number of variables (initially 24). They were designed very specifically for one main use, in this case library management software, and there was a similar language for the Financial Systems business.
Geac operating system
Geac Corporation's Operating System was named Geac.
Geac minicomputers
Between 1971 and 1977, four Geac minicomputers were introduced:
Geac 150 (1971)
Geac 500 (1972)
Geac 800 (1973)
Geac 8000 (1977)
The 8000 had 300 MB disks, and initially supported 8–12 terminals (subsequently increased to permit 20–40). These terminals were custom-designed Informer units.
The second version of the 8000, a dual-CPU system released in 1978, supported up to about 1 GB of hard disk storage.
The Geac 9000 (or Concept 9000) was introduced in the 1980s. This was a very different machine from the previous ones: a multi-processor 16-bit machine in which each processor could operate fairly autonomously. The operating system maintained a list of processes that needed to be performed and allocated them to any processor that was available at the time. It was, in effect, an early parallel multi-processor system. Disk drives were connected to one processor, so any disk reads and writes had to go through that processor; otherwise any processor could be used for any task.
All of the Geac computers had an unusual disc architecture: disc files were fixed in size and could not grow. For this reason the Geac computers were not suitable for general-purpose multiuser systems, but it made them superb designs for the applications for which they were intended. In a library management system, for instance, it was known at the outset how many titles, items (e.g. copies of books), borrowers and so forth were to be catered for, which meant that the borrower file size was known at the time of sale. The great advantage of this was that each borrower's data started at a known physical position on the disc, and all the information for that borrower was contiguous. In this way, given the ID number on the borrower's barcode, the physical location on the disc of that borrower's data was known and the data could be located immediately, with only one disc access and one disc read. Performance was therefore fast and consistent, no matter how many books and borrowers were on the system.
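Because every record in such a file has a fixed length, a record's disc position can be computed directly from its ID. The short Python sketch below illustrates the idea; the 256-byte record size and the file layout are illustrative assumptions, not details of the actual Geac format.

# Sketch of fixed-size-record lookup: the borrower ID maps directly to one
# disc offset, so one seek and one read fetch the whole record.
RECORD_SIZE = 256               # assumed fixed record length in bytes

def borrower_offset(borrower_id):
    # Physical position of the borrower's record within the borrower file.
    return borrower_id * RECORD_SIZE

def read_borrower(path, borrower_id):
    with open(path, "rb") as f:
        f.seek(borrower_offset(borrower_id))   # one seek ...
        return f.read(RECORD_SIZE)             # ... one read, whole record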
Millennium
Geac purchased Dun & Bradstreet Software Services in 1996, including a dozen software packages collectively known as Millennium.
Acquisitions
Geac made numerous acquisitions during its existence, including JBA Holdings, Dun & Bradstreet Software Services, Comshare, and Extensity.
Products
Products that Geac produced included Anael, Expert & Millennium Server, MPC, RunTime, SmartStream, System21, and VUBIS.
See also
List of companies of Canada
References
External links
A Brief History of Geac
Defunct software companies of Canada
ERP software companies
Companies based in Markham, Ontario
Computer companies established in 1971
2006 mergers and acquisitions |
19725658 | https://en.wikipedia.org/wiki/Happy%20drives | Happy drives | Happy drives are series of disk drive enhancements for the Atari 8-bit and Atari ST computer families produced by a small company Happy Computers. Happy Computers is most noted for the add-in boards for the Atari 810 and Atari 1050 disk drives, which achieved a tremendous speed improvement for reading and writing, and for the ability to "back up" floppies. Happy's products were among the most popular Atari computer add-ons. They were still in use and active in the aftermarket as of 2009.
Happy Computers
Happy Computers was formed in 1982 by Richard Adams under the name Happy Computing. At that time, the 810 Happy was hand wired on the internal side board. The name was changed to Happy Computers in 1983 when the company went from a sole proprietorship to a corporation.
It stopped shipping these products in 1990, and since then many other Atari enthusiasts have reverse engineered and replicated the products.
As early as 1983 Happy Computing was mentioned in the context of software piracy. By 1986 software companies began producing fewer titles for the Atari than for the Apple II series or Commodore 64. They attributed this to their belief that an unusually high amount of software piracy existed on the Atari, and cited the Happy Drive as a major cause of that piracy.
Atari 8-bit products
810 Upgrade
This was the company's first product, released in 1982. The customer sent in either their 810 drive or the internal sideboard, and the upgrade was wired in. This consisted of a few extra logic chips, a different EPROM, and point-to-point wiring.
In addition to buffered reading and writing with zero latency and faster serial I/O, it could make backups of floppies.
810 Enhancement
This version of the 810 Happy board was a plug-in board with a better data separator and used sockets already in place on the 810 internal board without the need for any soldering or permanent modification.
In addition to buffered reading and writing with zero latency and faster serial I/O, it could make backups of floppies.
Brian Moriarty of ANALOG Computing wrote in 1983 that the magazine was reluctant to publish reviews or advertisements of the 810 Enhancement "because of its unique potential for misuse", but after testing the board "decided that the legitimate performance benefits it offers are too significant to ignore". He found that booting time decreased to 11 seconds from 14-18, formatting time decreased to 25 seconds from 38, and drives would last longer because of more efficient disk access. Moriarty's tests confirmed the company's claim that the board and accompanying Happy Backup software could duplicate any disk readable by the Atari 810 drive. He wrote that the 810 Enhancement's $250 cost would probably be more useful as part of the purchase price of a second disk drive, but those with two drives "would find the high speed and special capabilities of a Happy drive to be a worthwhile investment" and "a pleasure to use". Moriarty concluded, "I hope the ATARI community will not abuse this power by using the Happy drive (and other similar products) to infringe on the rights of others".
1050 Enhancement
Atari released the more reliable, enhanced density (130 KB) 1050 drive with the introduction of the Atari 1200XL. The 1050 Enhancement was a plug-in board and could be installed without soldering or permanent modification. In addition to the buffered reading and writing with zero latency and faster serial i/o, it supported true double density (180 KB). The serial I/O of the 1050 Happy was faster than the 810 Happy due to the faster speed of the 6502 processor that replaced the on-board 6507.
1050 Controller
The 1050 Controller was a small board installed inside the 1050 Happy drive with two switches and an LED, allowing the disk write-protect to be enabled or disabled, overriding the notch in the disk. It also allowed switch selection of a slower mode for compatibility, as some commercial software only ran in the original slow-speed mode. The controller required a mechanical modification to the drive's enclosure, and hence its installation was more permanent.
Warp Speed Software
The software that came with the Happy boards had many options.
Warp Speed DOS
Diagnostics for the Enhancement and the drive such as high speed xfer, RPM and read/write testing
Fast, slow and unHappy mode drive options for compatibility
Tracer mode for evaluating wasted space on floppies
Happy Compactor which allows organizing and combining multiple floppies into one.
Happy Backup for backing up floppies
Multi Drive, which allows high speed simultaneous writing with up to 4 Happy enhanced drives.
Sector copier
IBMXFR Program
This program was included with the Warp Speed Software. It allowed transferring files back and forth between an Atari disk and an IBM disk using a Happy-enhanced 1050 drive. Because the 1050 was a single-sided drive with only one head, the disk had to be formatted as single-sided (180 KB). The IBM disk could even be formatted on the 1050 drive.
Atari 16-bit products
Discovery Cartridge
The Discovery Cartridge was a device that plugged into the cartridge slot of the Atari ST Computer.
It backed up floppies and had connectors that allowed a third and fourth drive to be hooked up. The original ST computer only allowed for two floppy drives, so the extra drives were handy.
Four different options were available, including a pass-through for another cartridge, a switch to bank-select larger cartridges, and a switch to select or deselect the extra drives. There was also an optional battery-backed time-of-day clock in the Discovery Cartridge, a feature the stock Atari ST notably lacked.
The "HART" chip (Happy Atari Rotating Thing), designed by Richard Adams, allowed standard Atari drives to read the unusual Macintosh variable-speed disks without needing a variable-speed drive. The disks were then re-written in a standard constant-speed 3.5-inch compatible format called Magic format. This allowed the use of the various Mac emulator products, which ran most efficiently with Magic format disks. At least one of the Macintosh emulators also had a circuit to read Mac disks directly, but this could make the emulation slower and less reliable; it was faster and more efficient to convert Mac disks to Magic format first.
The HART chip (IC number HARTD1©87HCI) also allowed copying conventional ST disks much faster. The computer floppy controller required two passes per track, three with verification. The HART chip could format and write in the same pass, saving one pass per track.
Q-Verter Cartridge
This was a smaller version of The Discovery Cartridge that plugged into the Atari ST cartridge slot and had a cable for 1 drive that allowed converting Mac disks.
References
External links
Antic Magazine, Vol. 4, NO. 3 / JULY 1985 / PAGE 40, by Eric Clausen, "EVERYTHING YOU WANTED TO KNOW ABOUT EVERY D.O.S., Including the brand-new DOS 2.5"
Analog Magazine #12 (Product Review)
Antic Vol. 6, No. 10 - Feb 1988 (new product announcement)
Antic Vol. 2, No. 5 (Q&A)
Antic Vol. 2, No. 9 (product review)
Antic Vol. 2, No. 4 (product review)
AtariMania, "The World's Finest Atari Database"
VintageComputerManuals.com, by Tim Patrick, PDF Document: Documentation for HappyXLVersion"
classiccmp.org, "Atari 8-Bit Computers Frequently Asked Questions List"
"Drive tests" by Mark D. Elliott, August 16 1989
Atari 8-bit family
Atari ST
Floppy disk drives |
11462382 | https://en.wikipedia.org/wiki/Self-tuning | Self-tuning | In control theory a self-tuning system is capable of optimizing its own internal running parameters in order to maximize or minimize the fulfilment of an objective function; typically the maximization of efficiency or error minimization.
Self-tuning and auto-tuning often refer to the same concept. Many software research groups consider auto-tuning the proper nomenclature.
Self-tuning systems typically exhibit non-linear adaptive control. Self-tuning systems have been a hallmark of the aerospace industry for decades, as this sort of feedback is necessary to generate optimal multi-variable control for non-linear processes. In the telecommunications industry, adaptive communications are often used to dynamically modify operational system parameters to maximize efficiency and robustness.
Examples
Examples of self-tuning systems in computing include:
TCP (Transmission Control Protocol)
Microsoft SQL Server (Newer implementations only)
FFTW (Fastest Fourier Transform in the West)
ATLAS (Automatically Tuned Linear Algebra Software)
libtune (Tunables library for Linux)
PhiPAC (Self Tuning Linear Algebra Software for RISC)
MILEPOST GCC (Machine learning based self-tuning compiler)
Performance benefits can be substantial. Professor Jack Dongarra, an American computer scientist, claims self-tuning boosts performance, often on the order of 300%.
Digital self-tuning controllers are an example of self-tuning systems at the hardware level.
Architecture
Self-tuning systems are typically composed of four components: expectations, measurement, analysis, and actions. The expectations describe how the system should behave given exogenous conditions.
Measurements gather data about the conditions and behaviour. Analysis helps determine whether the expectations are being met and which subsequent actions should be performed. Common actions are gathering more data and performing dynamic reconfiguration of the system.
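A minimal sketch of this loop in Python appears below; the expectation value, the 5% tolerance band, and the measure/reconfigure callbacks are placeholders standing in for real sensors, performance counters, and tunable parameters.

# Minimal sketch of the expectation/measurement/analysis/action loop.
def self_tuning_loop(system, expectation, measure, reconfigure, steps=100):
    for _ in range(steps):
        observed = measure(system)            # measurement: gather data
        error = expectation - observed        # analysis: are expectations met?
        if abs(error) > 0.05 * expectation:   # outside a 5% tolerance band
            reconfigure(system, error)        # action: dynamic reconfiguration

# Toy usage: grow a buffer size until throughput reaches the expected level.
system = {"buffer": 1, "throughput": 0.0}

def measure(s):
    s["throughput"] = min(100.0, 10.0 * s["buffer"])   # pretend benchmark
    return s["throughput"]

def reconfigure(s, error):
    s["buffer"] += 1 if error > 0 else -1              # nudge the parameter

self_tuning_loop(system, expectation=80.0, measure=measure,
                 reconfigure=reconfigure, steps=20)
print(system)   # the buffer has grown until throughput met the expectation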
Self-tuning (self-adapting) systems of automatic control are systems whereby adaptation to randomly changing conditions is performed by means of automatically changing parameters or via automatically determining their optimum configuration. In any non-self-tuning automatic control system there are parameters which have an influence on system stability and control quality and which can be tuned. If these parameters remain constant whilst operating conditions (such as input signals or different characteristics of controlled objects) are substantially varying, control can degrade or even become unstable. Manual tuning is often cumbersome and sometimes impossible. In such cases, not only is using self-tuning systems technically and economically worthwhile, but it could be the only means of robust control. Self-tuning systems can be with or without parameter determination.
In systems with parameter determination the required level of control quality is achieved by automatically searching for an optimum (in some sense) set of parameter values. Control quality is described by a generalised characteristic which is usually a complex and not completely known or stable function of the primary parameters. This characteristic is either measured directly or computed based on the primary parameter values. The parameters are then tentatively varied. An analysis of the control quality characteristic oscillations caused by the varying of the parameters makes it possible to figure out whether the parameters have optimum values, i.e. whether those values deliver extreme (minimum or maximum) values of the control quality characteristic. If the characteristic values deviate from an extremum, the parameters need to be varied until optimum values are found. Self-tuning systems with parameter determination can reliably operate in environments characterised by wide variations of exogenous conditions.
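The tentative-variation search can be sketched in a few lines of Python; the quality function below is an invented stand-in with its optimum at p = 3.0, and halving the step size is one simple way to refine the search once no further improvement is found.

# Sketch of parameter determination by tentative variation: perturb the
# parameter in both directions, observe the quality characteristic, and
# keep whichever value scores best.
def quality(p):
    return -(p - 3.0) ** 2              # higher is better, peak at p = 3.0

def tune(p, step=0.5, iterations=50):
    for _ in range(iterations):
        candidates = (p - step, p, p + step)      # tentative variations
        best = max(candidates, key=quality)       # analyse their quality
        if best == p:                             # no improvement found:
            step /= 2                             # refine the search
        p = best
    return p

print(round(tune(0.0), 3))              # converges to 3.0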
In practice systems with parameter determination require considerable time to find an optimum tuning, i.e. the time necessary for self-tuning in such systems is bounded from below. Self-tuning systems without parameter determination do not have this disadvantage. In such systems, some characteristic of control quality is used (e.g., the first time derivative of a controlled parameter), and automatic tuning makes sure that this characteristic is kept within given bounds. Different self-tuning systems without parameter determination exist that are based on controlling transitional processes, frequency characteristics, etc. All of those are examples of closed-circuit self-tuning systems, whereby parameters are automatically corrected every time the quality characteristic value falls outside the allowable bounds. In contrast, open-circuit self-tuning systems are systems with parametric compensation, whereby the input signal itself is controlled and system parameters are changed according to a specified procedure. This type of self-tuning can be close to instantaneous. However, in order to realise such self-tuning one needs to control the environment in which the system operates, and a good enough understanding of how the environment influences the controlled system is required.
In practice self-tuning is done through the use of specialised hardware or adaptive software algorithms. Giving software the ability to self-tune (adapt):
Facilitates controlling critical processes of systems;
Approaches optimum operation regimes;
Facilitates design unification of control systems;
Shortens the lead times of system testing and tuning;
Lowers the criticality of technological requirements on control systems by making the systems more robust;
Saves personnel time for system tuning.
External links
Using Probabilistic Reasoning to Automate Software Tuning
Frigo, M. and Johnson, S. G., "The design and implementation of FFTW3", Proceedings of the IEEE, 93(2), February 2005, 216 - 231. .
Optimizing Matrix Multiply using PHiPAC: a Portable, High-Performance, ANSI C Coding Methodology
Faster than a Speeding Algorithm
Rethinking Database System Architecture: Towards a Self-tuning RISC-style Database System
Self-Tuning Systems Software
Microsoft Research Adds Data Mining and Self-tuning Technology to SQL Server 2000
A Comparison of TCP Automatic Tuning Techniques for Distributed Computing
Tunables library for Linux
A Review of Relay Auto-tuning Methods for the Tuning of PID-type Controllers
Control engineering
Control theory
Electronic feedback |
4636520 | https://en.wikipedia.org/wiki/Access%20Linux%20Platform | Access Linux Platform | The Access Linux Platform (ALP) is a discontinued open-source software based operating system, once referred to as a "next-generation version of the Palm OS," for mobile devices developed and marketed by Access Co., of Tokyo, Japan. The platform included execution environments for Java, classic Palm OS, and GTK+-based native Linux applications. ALP was demonstrated in devices at a variety of conferences, including 3GSM, LinuxWorld, GUADEC, and Open Source in Mobile.
The ALP was first announced in February 2006. The initial versions of the platform and software development kits were officially released in February 2007. There was a coordinated effort by Access, Esteemo, NEC, NTT DoCoMo, and Panasonic to use the platform as a basis for a shared platform implementing a revised version of the i.mode Mobile Oriented Applications Platform (MOAP) (L) application programming interfaces (APIs), conforming to the specifications of the LiMo Foundation. The first smartphone to use the ALP was to be the Edelweiss by Emblaze Mobile, scheduled for mid-2009; however, it was shelved before release. The First Else (renamed from Monolith) smartphone, which was being developed by Sharp Corporation in cooperation with Emblaze Mobile and seven other partners, was scheduled for 2009, but was never released and was officially cancelled in June 2010. The platform is no longer referenced on Access's website, but Panasonic and NEC released a number of ALP phones for the Japanese market between 2010 and 2013.
Look and feel
The user interface was designed with similar general goals to earlier Palm OS releases, with an aim of preserving the Zen of Palm, a design philosophy centered on making the applications as simple as possible. Other aspects of the interface included a task-based orientation rather than a file/document orientation as is commonly found on desktop systems.
The appearance of the platform was intended to be highly customizable to provide differentiation for specific devices and contexts.
The last releases adopted a much more modern look with gesture support and no longer closely resembled Palm OS.
Base frameworks
Similarly to Maemo, Nokia's internet tablet framework, ALP was based on components drawn from the GNOME project, including the GTK+ and GStreamer frameworks. A variety of other core components were drawn from mainstream open source projects, including BlueZ, matchbox, cramfs, and others. These components were licensed under the GNU General Public License (GPL), GNU Lesser General Public License (LGPL), and other open source licenses, meaning that ALP was a free or open environment on the software level.
Several components from ALP were released under the Mozilla Public License as The Hiker Project. These components addressed issues of application life-cycle, intertask communication, exchange and use of structured data, security, time and event-based notifications, and other areas common to the development of applications for mobile devices.
Application development
The ALP presented standard APIs for most common operations, as defined by the standards for Portable Operating System Interface (POSIX) and Linux Standard Base (LSB). However, neither standard addresses telephony, device customization, messaging, or several other topics, so several additional frameworks and APIs were defined by Access for those.
Applications for ALP could be developed as Linux-native code in C or C++, as legacy Palm OS applications (which run in the Garnet VM emulation environment), or in Java. Further execution environments were supported via the development of a launchpad used by the Application Manager (part of the Hiker framework).
The ALP SDK used an Eclipse-based integrated development environment (IDE), with added plug-ins, as did its predecessor Palm OS development environment. The compilers used were embedded application binary interface (EABI) enabled ARM versions of the standard GNU Compiler Collection (GCC) tool chain.
Security
The ALP used a combination of a user-space policy-based security framework and a kernel-space Linux security module to implement fine-grained access controls. The components for ALP's security implementation have been released as part of the Hiker framework. Controls were based on signatures and certificates; unsigned applications can be allowed access to a predefined set of safe APIs.
Devices
Panasonic cellular phones with ALP:
P-01E,
P-01F,
P-01G,
P-01H,
P-02B,
P-03C,
P-03D,
P-04C,
P-05B,
P-05C,
P-06B,
P-06C,
P-07B
NEC cellular phones with ALP:
N-01B,
N-01C,
N-01E,
N-01F,
N-01G,
N-02C,
N-02D,
N-03D,
N-04B,
N-05B,
N-05C,
N-06B,
N-07B,
N-07E,
N-08B
See also
Moblin project
Palm webOS
Ubuntu for Android
References
External links
Discontinued operating systems
Embedded Linux
Mobile Linux
Palm OS
Desktop environments based on GTK |
26569739 | https://en.wikipedia.org/wiki/Atlassian | Atlassian | Atlassian Corporation Plc () is a UK-domiciled, American-Australian -originated software company that develops products for software developers, project managers and other software development teams.
History
Mike Cannon-Brookes and Scott Farquhar founded Atlassian in 2002. The pair met while studying at the University of New South Wales in Sydney. They bootstrapped the company for several years, financing the startup with a $10,000 credit card debt.
The name is an ad hoc derivation from the titan Atlas of Greek mythology, who was punished to hold up the heavens after the Greek gods had overthrown the Titans. (The usual form of the word is Atlantean.) The derivation was reflected in the company's logo used from 2011 until the 2017 re-branding: a blue X-shaped figure holding up what is shown to be the bottom of the sky.
Atlassian released its flagship product, Jira – a project and issue tracker, in 2002. In 2004, it released Confluence, a team collaboration platform that lets users work together on projects, co-create content, and share documents and other media assets.
In July 2010, Atlassian raised $60 million in venture capital from Accel Partners.
In June 2011, Atlassian announced revenue of $102 million, up 35% from the year before.
In a 2014 restructuring, the parent company became Atlassian Corporation PLC of the UK, with a registered address in London—though the actual headquarters remained in Sydney.
Atlassian has nine offices in six countries: Amsterdam, Austin, New York City, San Francisco and Mountain View, California, Manila, Yokohama, Bangalore, and Sydney.
The group has over 6,000 employees serving more than 130,000 customers and millions of users.
In November 2015, Atlassian announced sales of $320 million, and Shona Brown was added to its board. On 10 December 2015 Atlassian made its initial public offering (IPO) on the NASDAQ stock exchange, under the symbol TEAM, putting the market capitalization of Atlassian at $4.37 billion. The IPO made its founders Farquhar and Cannon-Brookes Australia's first tech startup billionaires and household names in their native country, despite Atlassian being called a "very boring software company" in The New York Times for its focus on development and management software.
In March 2019, Atlassian's value was US$26.6 billion. Cannon-Brookes and Farquhar own approximately 30 percent each.
In October 2020, Atlassian announced the end of support for their "Server" products with sales ending in February 2021 and support ending in February 2024 in order to focus on "Cloud" and "Data Center" editions.
In October 2021, Atlassian received approval to construct their new Headquarters in Sydney, which will anchor the Tech Central precinct. Their building is planned to be the world's tallest hybrid timber structure and will embody leading sustainability technologies and principles.
Sales setup
Atlassian does not have a traditional sales team, relying instead on its website and its partner channel.
Acquisitions and product announcements
Additional products include Crucible, FishEye, Bamboo, and Clover which target programmers working with a code base. FishEye, Crucible and Clover came into Atlassian's portfolio through the acquisition of another Australian software company, Cenqua, in 2007.
In 2010, Atlassian acquired Bitbucket, a hosted service for code collaboration.
In 2012, Atlassian acquired HipChat, an instant messenger for workplace environments. Then in May 2012, Atlassian Marketplace was introduced as a website where customers can download plug-ins for various Atlassian products. That same year Atlassian also released Stash, a Git repository for enterprises, later renamed Bitbucket Server. Also, Doug Burgum became chairman of its board of directors in July 2012.
In 2013, Atlassian announced a Jira service desk product with full service-level agreement support.
In 2015, the company announced its acquisition of work chat company Hall, with the intention of migrating all of Hall's customers across to its own chat product HipChat.
In July 2016, Atlassian acquired Dogwood Labs, a small startup in Denver, Colorado, whose product StatusPage hosts pages that update customers during outages and maintenance.
In January 2017, Atlassian announced the purchase of Trello for $425 million.
On 7 September 2017 the company launched Stride, a web chat alternative to Slack. Less than a year later, on 26 July 2018, Atlassian announced it was going to exit the chat business, that it had sold the intellectual property for HipChat and Stride to competitor Slack, and that it was going to shut down HipChat and Stride in 2019. As part of the deal, Atlassian took a small stake in Slack.
On 4 September 2018 the company acquired OpsGenie (a tool that generates alerts for helpdesk tickets) for $295 million.
On 18 March 2019, the company announced that it had acquired Agilecraft for $166 million.
On 17 October 2019, Atlassian completed acquisition of Code Barrel, makers of "Automation for Jira", available on Jira Marketplace.
On 12 May 2020, Atlassian acquired Halp (a tool that generates helpdesk tickets from Slack conversations) for an undisclosed amount.
On 30 July 2020, Atlassian announced the acquisition of asset management company Mindville, for an undisclosed amount, to strengthen its ITSM toolset.
On 26 February 2021, Atlassian acquired cloud-based visualization and analytics company Chartio.
References
External links
2015 initial public offerings
Australian brands
Companies based in Sydney
Software companies of Australia
Software companies established in 2002
Australian companies established in 2002
Project management software
Development software companies |
15323 | https://en.wikipedia.org/wiki/Internet%20Protocol | Internet Protocol | The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.
IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information.
IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974, which was complemented by a connection-oriented service that became the basis for the Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP.
The first major version of IP, Internet Protocol Version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol Version 6 (IPv6), which has been in increasing deployment on the public Internet since c. 2006.
Function
The Internet Protocol is responsible for addressing host interfaces, encapsulating data into datagrams (including fragmentation and reassembly) and routing datagrams from a source host interface to a destination host interface across one or more IP networks. For these purposes, the Internet Protocol defines the format of packets and provides an addressing system.
Each datagram has two components: a header and a payload. The IP header includes source IP address, destination IP address, and other metadata needed to route and deliver the datagram. The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation.
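The encapsulation step can be illustrated with a short Python sketch that packs a minimal 20-byte IPv4 header in front of a payload; the addresses and payload are placeholder documentation values, and options and fragmentation are omitted.

# Sketch of encapsulation: a minimal 20-byte IPv4 header packed in front of
# an arbitrary payload.
import socket
import struct

def ipv4_checksum(header):
    total = sum(struct.unpack("!10H", header))    # sum of 16-bit words
    total = (total & 0xFFFF) + (total >> 16)      # fold the carries ...
    total = (total & 0xFFFF) + (total >> 16)      # ... twice is enough here
    return ~total & 0xFFFF                        # ones' complement

def encapsulate(src, dst, payload, ttl=64, proto=socket.IPPROTO_UDP):
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,                 # version 4, header length of 5 words
        0,                            # DSCP / ECN
        20 + len(payload),            # total length of the datagram
        0,                            # identification
        0,                            # flags and fragment offset
        ttl, proto,
        0,                            # checksum placeholder
        socket.inet_aton(src), socket.inet_aton(dst))
    checksum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:] + payload

datagram = encapsulate("192.0.2.1", "198.51.100.7", b"hello")
print(len(datagram), datagram[:1].hex())   # 25 45 -> 25 bytes, version 4, IHL 5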
IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. The address space is divided into subnetworks, involving the designation of network prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to transport packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network.
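The division of the address space into prefixes can be explored with Python's standard ipaddress module; the addresses below are documentation examples, not real hosts.

# The subnetwork is determined by the address together with its prefix
# length; routers match destinations against such prefixes.
import ipaddress

iface = ipaddress.ip_interface("192.0.2.130/26")
print(iface.network)                    # 192.0.2.128/26 -> the subnetwork
print(iface.network.network_address)    # 192.0.2.128
print(iface.ip in ipaddress.ip_network("192.0.2.128/26"))   # True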
Version history
In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper entitled "A Protocol for Packet Network Intercommunication". The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet switching among network nodes. A central control component of this model was the "Transmission Control Program" that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol and User Datagram Protocol at the transport layer and the Internet Protocol at the internet layer. The model became known as the Department of Defense (DoD) Internet Model and Internet protocol suite, and informally as TCP/IP.
IP versions 1 to 3 were experimental versions, designed between 1973 and 1978. The following Internet Experiment Note (IEN) documents describe version 3 of the Internet Protocol, prior to the modern version of IPv4:
IEN 2 (Comments on Internet Protocol and TCP), dated August 1977 describes the need to separate the TCP and Internet Protocol functionalities (which were previously combined.) It proposes the first version of the IP header, using 0 for the version field.
IEN 26 (A Proposed New Internet Header Format), dated February 1978 describes a version of the IP header that uses a 1-bit version field.
IEN 28 (Draft Internetwork Protocol Description Version 2), dated February 1978 describes IPv2.
IEN 41 (Internetwork Protocol Specification Version 4), dated June 1978 describes the first protocol to be called IPv4. The IP header is different from the modern IPv4 header.
IEN 44 (Latest Header Formats), dated June 1978 describes another version of IPv4, also with a header different from the modern IPv4 header.
IEN 54 (Internetwork Protocol Specification Version 4), dated September 1978, is the first description of IPv4 using the header that would be standardized in RFC 791.
The dominant internetworking protocol in the Internet Layer in use is IPv4; the number 4 identifies the protocol version, carried in every IP datagram. IPv4 is described in RFC 791 (1981).
Version number 5 was used by the Internet Stream Protocol, an experimental streaming protocol that was not adopted.
The successor to IPv4 is IPv6. IPv6 was a result of several years of experimentation and dialog during which various protocol models were proposed, such as TP/IX, PIP and TUBA (TCP and UDP with Bigger Addresses). Its most prominent difference from version 4 is the size of the addresses: while IPv4 uses 32 bits for addressing, yielding c. 4.3 billion (2^32) addresses, IPv6 uses 128-bit addresses, providing approximately 3.4×10^38 (2^128) addresses. Although adoption of IPv6 has been slow, most countries in the world show significant adoption of IPv6, with over 35% of Google's traffic being carried over IPv6 connections.
The assignment of the new protocol as IPv6 was uncertain until due diligence assured that IPv6 had not been used previously. Other Internet Layer protocols have been assigned version numbers, such as 7 (IP/TX), 8 and 9 (historic). Notably, on April 1, 1994, the IETF published an April Fools' Day joke about IPv9. IPv9 was also used in an alternate proposed address space expansion called TUBA. A 2004 Chinese proposal for an "IPv9" protocol appears to be unrelated to all of these, and is not endorsed by the IETF.
Reliability
The design of the Internet protocol suite adheres to the end-to-end principle, a concept adapted from the CYCLADES project. Under the end-to-end principle, the network infrastructure is considered inherently unreliable at any single network element or transmission medium and is dynamic in terms of the availability of links and nodes. No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. For the benefit of reducing network complexity, the intelligence in the network is purposely located in the end nodes.
As a consequence of this design, the Internet Protocol only provides best-effort delivery and its service is characterized as unreliable. In network architectural parlance, it is a connectionless protocol, in contrast to connection-oriented communication. Various fault conditions may occur, such as data corruption, packet loss and duplication. Because routing is dynamic, meaning every packet is treated independently, and because the network maintains no state based on the path of prior packets, different packets may be routed to the same destination via different paths, resulting in out-of-order delivery to the receiver.
All fault conditions in the network must be detected and compensated by the participating end nodes. The upper layer protocols of the Internet protocol suite are responsible for resolving reliability issues. For example, a host may buffer network data to ensure correct ordering before the data is delivered to an application.
IPv4 provides safeguards to ensure that the header of an IP packet is error-free. A routing node discards packets that fail a header checksum test. Although the Internet Control Message Protocol (ICMP) provides notification of errors, a routing node is not required to notify either end node of errors. IPv6, by contrast, operates without header checksums, since current link layer technology is assumed to provide sufficient error detection.
Link capacity and capability
The dynamic nature of the Internet and the diversity of its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested. One of the technical constraints is the size of data packets possible on a given link. Facilities exist to examine the maximum transmission unit (MTU) size of the local link and Path MTU Discovery can be used for the entire intended path to the destination.
The IPv4 internetworking layer automatically fragments a datagram into smaller units for transmission when the link MTU is exceeded. IP provides re-ordering of fragments received out of order. An IPv6 network does not perform fragmentation in network elements, but requires end hosts and higher-layer protocols to avoid exceeding the path MTU.
The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment size to be smaller than the MTU. The User Datagram Protocol (UDP) and ICMP disregard MTU size, thereby forcing IP to fragment oversized datagrams.
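A simplified sketch of IPv4-style fragmentation is shown below; it only splits the payload and records offsets in 8-byte units, whereas a real implementation also copies header fields, sets the more-fragments flag in each header, and recomputes checksums.

# Simplified sketch of IPv4-style fragmentation: split a payload so that
# each fragment's data fits the link MTU after a 20-byte header, keeping
# every non-final fragment a multiple of 8 bytes, since offsets are carried
# in 8-byte units.
def fragment(payload, mtu):
    max_data = ((mtu - 20) // 8) * 8      # usable payload bytes per fragment
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)
        fragments.append({"offset": offset // 8,      # in 8-byte units
                          "more_fragments": more,
                          "data": chunk})
        offset += len(chunk)
    return fragments

parts = fragment(b"x" * 4000, mtu=1500)
print([(p["offset"], len(p["data"]), p["more_fragments"]) for p in parts])
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]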
Security
During the design phase of the ARPANET and the early Internet, the security aspects and needs of a public, international network could not be adequately anticipated. Consequently, many Internet protocols exhibited vulnerabilities highlighted by network attacks and later security assessments. In 2008, a thorough security assessment and proposed mitigation of problems was published. The IETF has been pursuing further studies.
See also
ICANN
IP routing
List of IP protocol numbers
Next-generation network
References
External links
Internet layer protocols
Internet |
18127613 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Availability%2C%20Reliability%20and%20Security | International Conference on Availability, Reliability and Security | The ARES - The International Conference on Availability, Reliability and Security focuses on rigorous and novel research in the field of dependability, computer and information security. In cooperation with the conference several workshops are held covering a huge variety of security topics. The Conference and Workshop Proceedings are published by IEEE Computer Society Press. In the CORE ranking, ARES is ranked as B. Participants from almost 40 countries attend ARES 2013.
The conference is hosted by universities and research institutions:
2006: Vienna University of Technology, Austria
2007: Vienna University of Technology, Austria, in co-operation with ENISA – The Network and Information Security Agency of the European Union
2008: Polytechnic University of Catalonia, Spain, in co-operation with ENISA
2009: Fukuoka Institute of Technology, Japan
2010: Krakowska Akademia, Poland
2011: Vienna University of Technology, Austria
2012: University of Economics, Prague, Czech Republic
2013: University of Regensburg, Germany
2013: University of Regensburg, Germany
In 2013 the keynotes of the ARES conference were held by
Elena Ferrari, University of Insubria, Italy
Carl Gunter, University of Illinois, US
Furthermore, a panel about Threats & Risk Management – Bridging the Gap between Industry needs and Research took place. The panelists were:
Gary McGraw, Cigital, US
Greg Soukiassian, BC & RS, France
Chris Wills, CARIS Research, UK
Tutorials were held by Gary McGraw, Haya Shulman, Ludwig Fuchs, and Stefan Katzenbeisser.
2012: University of Economics, Prague, Czech Republic
In 2012 the keynotes of the ARES conference were held by:
Annie Antón, Georgia Institute of Technology (US)
Chenxi Wang, Vice President, Principal Analyst at Forrester Research (US)
Further, a panel was moderated by Shari Lawrence Pfleeger with the panelists:
Angela Sasse, University College London (UK)
David Budgen, Durham University (UK)
Kelly Caine, Indiana University (US)
2011: Vienna University of Technology, Austria
In 2011 the keynotes of the ARES conference were held by:
Gary McGraw
Shari Pfleeger
Furthermore, Gene Spafford gave an invited talk.
2010: Krakowska Akademia, Poland
In 2010 the keynotes of the ARES conference were held by:
Gene Spafford (Purdue University)
Ross J. Anderson (Cambridge University)
2009: Fukuoka Institute of Technology, Japan
In 2009 the keynotes of the ARES conference were held by:
Prof. Elisa Bertino (Purdue University)
Sushil Jajodia (George Mason University Fairfax)
Eiji Okamoto (Tsukuba University)
Additionally an invited talk was held by:
Solange Ghernaouti (University of Lausanne)
The acceptance rate for ARES 2009 was 25% (= 40 full papers)
2008: Polytechnic University of Catalonia (UPC) Barcelona, Spain
In 2008 the keynotes of the ARES conference were held by:
Prof. Ravi Sandhu, Executive Director, Chief Scientist and Founder, Institute for Cyber Security (ICS) and Lutcher Brown Endowed Chair in Cyber-Security
Prof. Günther Pernul, Department of Information Systems, University of Regensburg
Prof. Vijay Atluri, Management Science and Information Systems Department, Research Director of the Center for Information Management, Integration and Connectivity (CIMIC), Rutgers University
The acceptance rate for ARES 2008 was about 21% (40 full papers out of 190 submissions).
2007 Vienna University of Technology, Austria
Since 2007 the ARES conference has been held in conjunction with the CISIS conference. In 2007 the keynotes of the ARES conference were held by:
Prof. Reinhard Posch, chief information officer for the Federal Republic of Austria
Prof. Bhavani Thuraisingham, director of Cyber Security Research Center, University of Texas at Dallas (UTD)
2006: Vienna University of Technology, Austria
The first ARES conference in 2006 was held in conjunction with the AINA conference. In 2006 the keynotes of the ARES conference were held by:
Dr. Louis Marinos, ENISA Security Competence Department, Risk Management, Greece
Prof. Andrew Steane, Centre for Quantum Computation, University of Oxford, UK
Prof. David Basin, Information Security, Department of Computer Science, ETH Zurich, Switzerland
External links
Current and past ARES Conferences
List of publications of the past ARES Conferences
Current and past CISIS Conferences
ENISA
AINA
DBLP
Computer science conferences |
70729 | https://en.wikipedia.org/wiki/Non-linear%20editing | Non-linear editing | Non-linear editing is a form of offline editing for audio, video, and image editing. In offline editing, the original content is not modified in the course of editing. In non-linear editing, edits are specified and modified by specialized software. A pointer-based playlist, effectively an edit decision list (EDL), for video and audio, or a directed acyclic graph for still images, is used to keep track of edits. Each time the edited audio, video, or image is rendered, played back, or accessed, it is reconstructed from the original source and the specified editing steps. Although this process is more computationally intensive than directly modifying the original content, changing the edits themselves can be almost instantaneous, and it prevents further generation loss as the audio, video, or image is edited.
A non-linear editing system (NLE) is a video editing (NLVE) program or application, or an audio editing (NLAE) digital audio workstation (DAW) system. These perform non-destructive editing on source material. The name is in contrast to 20th century methods of linear video editing and film editing.
Basic techniques
A non-linear editing approach may be used when all assets are available as files on video servers, or on local solid-state drives or hard disks, rather than recordings on reels or tapes. While linear editing is tied to the need to sequentially view film or hear tape, non-linear editing enables direct access to any video frame in a digital video clip, without having to play or scrub/shuttle through adjacent footage to reach it, as is necessary with video tape linear editing systems.
When ingesting audio or video feeds, metadata are attached to the clip. Those metadata can be attached automatically (timecode, localization, take number, name of the clip) or manually (players names, characters, in sports). It is then possible to access any frame by entering directly the timecode or the descriptive metadata. An editor can, for example at the end of the day in the Olympic Games, easily retrieve all the clips related to the players who received a gold medal.
The non-linear editing method is similar in concept to the cut and paste techniques used in IT. However, with the use of non-linear editing systems, the destructive act of cutting of film negatives is eliminated. It can also be viewed as the audio/video equivalent of word processing, which is why it is called desktop video editing in the consumer space.
Broadcast workflows and advantages
In broadcasting applications, video and audio data are first captured to hard disk-based systems or other digital storage devices. The data are then imported into servers (applying any necessary transcoding, digitizing or transfer). Once imported, the source material can be edited on a computer using any of a wide range of video editing software.
The end product of the offline non-linear editing process is a frame-accurate edit decision list (EDL) which can be taken, together with the source tapes, to an online quality tape or film editing suite. The EDL is then read into an edit controller and used to create a replica of the offline edit by playing portions of the source tapes back at full quality and recording them to a master as per the exact edit points of the EDL.
Editing software records the editor's decisions in an EDL that is exportable to other editing tools. Many generations and variations of the EDL can exist without storing many different copies, allowing for very flexible editing. It also makes it easy to change cuts and undo previous decisions simply by editing the EDL (without having to have the actual film data duplicated). Generation loss is also controlled, due to not having to repeatedly re-encode the data when different effects are applied. Generation loss can still occur in digital video or audio when using lossy video or audio compression algorithms as these introduce artifacts into the source material with each encoding or reencoding. Lossy compression algorithms (codecs) such as Apple ProRes, Advanced Video Coding and mp3 are very widely used as they allow for dramatic reductions on file size while often being indistinguishable from the uncompressed or losslessly compressed original.
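As a toy illustration of such a pointer-based list, the Python sketch below represents edits as references to source clips and frame ranges; the clip names, frame numbers, and the load_frame callback are invented for the example.

# Toy sketch of an edit decision list as a pointer-based playlist: each
# event references a source clip and an in/out point, so re-cutting means
# editing this list, never the source material itself.
from dataclasses import dataclass

@dataclass
class Event:
    source: str        # which reel or file the frames come from
    frame_in: int      # first frame used (inclusive)
    frame_out: int     # last frame used (exclusive)

edl = [
    Event("interview_A", 120, 480),
    Event("broll_city", 0, 150),
    Event("interview_A", 900, 1100),
]

def render(edl, load_frame):
    # Reconstruct the programme by replaying each referenced span.
    for event in edl:
        for frame in range(event.frame_in, event.frame_out):
            yield load_frame(event.source, frame)   # sources stay untouched

# Changing a cut is just list surgery, e.g. trimming the first shot:
edl[0] = Event("interview_A", 120, 360)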
Compared to the linear method of tape-to-tape editing, non-linear editing offers the flexibility of film editing, with random access and easy project organization. In non-linear editing, the original source files are not lost or modified during editing. This is one of the biggest advantages of non-linear editing compared to linear editing. With the EDLs, the editor can work on low-resolution copies of the video. This makes it possible to edit both standard-definition broadcast quality and high definition broadcast quality very quickly on desktop computers that may not have the power to process huge full-quality high-resolution data in real-time.
The costs of editing systems have dropped such that non-linear editing tools are now within the reach of home users. Some editing software can now be accessed free as web applications; some, like Cinelerra (focused on the professional market) and Blender, can be downloaded as free software; and some, like Microsoft's Windows Movie Maker or Apple Inc.'s iMovie, come included with the appropriate operating system.
Accessing the material
The non-linear editing retrieves video media for editing. Because these media exist on the video server or other mass storage that stores the video feeds in a given codec, the editing system can use several methods to access the material:
Direct access
The video server records feeds with a codec readable by the editing system, has a network connection to the editor, and allows direct editing. The editor previews material directly on the server (which it sees as remote storage) and edits directly on the server without transcoding or transfer.
Shared storage
The video server transfers feeds to and from shared storage that is accessible by all editors. Media already in the appropriate codec on the server need only be transferred; if recorded with a different codec, media must be transcoded during transfer. In some cases (depending on material), files on shared storage can be edited even before the transfer is finished.
Importing
The editor downloads the material and edits it locally. This method can be used with the previous methods.
Editor brands
DaVinci Resolve has been reported to have a user base of more than 2 million using the free version alone, comparable to that of Apple's Final Cut Pro X, which also reported 2 million users. This is in comparison to 2011, when reports indicated, "Avid's Media Composer is still the most-used NLE on prime-time TV productions, being employed on up to 90 percent of evening broadcast shows."
Some notable NLEs are:
Avid Media Composer
Adobe Premiere Pro
DaVinci Resolve
Final Cut Pro X and its predecessor, Final Cut Pro 7.
Vegas Pro
Shotcut
Home use
Early consumer set-ups for non-linear video editing used a multimedia computer with a video capture card to capture analog video, or a FireWire connection to capture digital video from a DV camera, together with video editing software. Various editing tasks could then be performed on the imported video before it was exported to another medium, or MPEG-encoded for transfer to a DVD.
Modern web-based editing systems can take video directly from a camera phone over a GPRS or 3G mobile connection, and editing can take place through a web browser interface, so, strictly speaking, a computer for video editing does not require any installed hardware or software beyond a web browser and an internet connection.
History
When videotapes were first developed in the 1950s, the only way to edit was to physically cut the tape with a razor blade and splice segments together. While the footage excised in this process was not technically destroyed, continuity was lost and the footage was generally discarded. In 1963, with the introduction of the Ampex Editec, video tape could be edited electronically with a process known as linear video editing by selectively copying the original footage to another tape called a master. The original recordings are not destroyed or altered in this process. However, since the final product is a copy of the original, there is a generation loss of quality.
First non-linear editor
The first truly non-linear editor, the CMX 600, was introduced in 1971 by CMX Systems, a joint venture between CBS and Memorex. It recorded and played back black-and-white analog video recorded in "skip-field" mode on modified disk pack drives the size of washing machines. These were commonly used to store about half an hour of data digitally on mainframe computers of the time. The 600 had a console with two monitors built in. The right monitor, which played the preview video, was used by the editor to make cuts and edit decisions using a light pen. The editor selected from options superimposed as text over the preview video. The left monitor was used to display the edited video. A DEC PDP-11 computer served as a controller for the whole system. Because the video edited on the 600 was in low-resolution black and white, the 600 was suitable only for offline editing.
The 1980s
Non-linear editing systems were built in the 1980s using computers coordinating multiple LaserDiscs or banks of VCRs. One example of these tape and disc-based systems was Lucasfilm's EditDroid, which used several LaserDiscs of the same raw footage to simulate random-access editing. EditDroid was demonstrated at NAB in 1984. EditDroid was the first system to introduce modern concepts in non-linear editing such as timeline editing and clip bins.
The LA-based post house Laser Edit also had an in-house system using recordable random-access LaserDiscs.
The most popular non-linear system in the 1980s was Ediflex, which used a bank of Sony JVC VCRs for offline editing. Ediflex was introduced in 1983 on the Universal series "Still the Beaver". By 1985 it was used on over 80% of filmed network programs. In 1985 Ediflex maker, Cinedco was awarded the Technical Emmy for "Design and Implementation of Non-Linear Editing for Filmed Programs."
In 1984, Montage Picture Processor was demonstrated at NAB. Montage used 17 identical copies of a set of film rushes on modified consumer Betamax VCRs. A custom circuit board was added to each deck that enabled frame-accurate switching and playback using vertical interval timecode. Intelligent positioning and sequencing of the source decks provided a simulation of random-access playback of a lengthy edited sequence without any rerecording. The theory was that with so many copies of the rushes, there could always be one machine cued up to replay the next shot in real time. Changing the EDL could be done easily, and the results seen immediately.
The first feature edited on the Montage was Sidney Lumet's Power. Notably, Francis Coppola edited The Godfather Part III on the system, and Stanley Kubrick used it for Full Metal Jacket. It was used on several episodic TV shows (Knots Landing, for one) and on hundreds of commercials and music videos.
The original Montage system won an Academy Award for Technical Achievement in 1988. Montage was reincarnated as Montage II in 1987, and Montage III appeared at NAB in 1991, using digital disk technology, which proved considerably less cumbersome than the Betamax system.
All of these original systems were slow, cumbersome, and had problems with the limited computer horsepower of the time, but the mid-to-late-1980s saw a trend towards non-linear editing, moving away from film editing on Moviolas and the linear videotape method using U-matic VCRs. Computer processing advanced sufficiently by the end of the '80s to enable true digital imagery, and has progressed today to provide this capability in personal desktop computers.
An example of computing power progressing to make non-linear editing possible was demonstrated in the first all-digital non-linear editing system, the "Harry" effects compositing system manufactured by Quantel in 1985. Although it was more of a video effects system, it had some non-linear editing capabilities. Most importantly, it could record (and apply effects to) 80 seconds (due to hard disk space limitations) of broadcast-quality uncompressed digital video encoded in 8-bit CCIR 601 format on its built-in hard disk array.
The 1990s
The term nonlinear editing was formalized in 1991 with the publication of Michael Rubin's Nonlinear: A Guide to Digital Film and Video Editing (Triad, 1991)—which popularized this terminology over other terminology common at the time, including real-time editing, random-access or RA editing, virtual editing, electronic film editing, and so on.
Non-linear editing with computers as it is known today was first introduced by Editing Machines Corp. in 1989 with the EMC2 editor, a PC-based non-linear off-line editing system that utilized magneto-optical disks for storage and playback of video, using half-screen-resolution video at 15 frames per second. A couple of weeks later that same year, Avid introduced the Avid/1, the first in the line of their Media Composer systems. It was based on the Apple Macintosh computer platform (Macintosh II systems were used) with special hardware and software developed and installed by Avid.
The video quality of the Avid/1 (and later Media Composer systems from the late 1980s) was somewhat low (about VHS quality), due to the use of a very early version of a Motion JPEG (M-JPEG) codec. It was sufficient, however, to provide a versatile system for offline editing. Lost in Yonkers (1993) was the first film edited with Avid Media Composer, and the first long-form documentary so edited was the HBO program Earth and the American Dream, which won a National Primetime Emmy Award for Editing in 1993.
The NewTek Video Toaster Flyer for the Amiga included non-linear editing capabilities in addition to processing live video signals. The Flyer used hard drives to store video clips and audio, and supported complex scripted playback. The Flyer provided simultaneous dual-channel playback, which let the Toaster's video switcher perform transitions and other effects on video clips without additional rendering. The Flyer portion of the Video Toaster/Flyer combination was a complete computer of its own, having its own microprocessor and embedded software. Its hardware included three embedded SCSI controllers. Two of these SCSI buses were used to store video data, and the third to store audio. The Flyer used a proprietary wavelet compression algorithm known as VTASC, which was well regarded at the time for offering better visual quality than comparable non-linear editing systems using motion JPEG.
Until 1993, the Avid Media Composer was most often used for editing commercials or other small-content and high-value projects. This was primarily because the purchase cost of the system was very high, especially in comparison to the offline tape-based systems that were then in general use. Hard disk storage was also expensive enough to be a limiting factor on the quality of footage that most editors could work with or the amount of material that could be held digitized at any one time.
Up until 1992, the Apple Macintosh computers could access only 50 gigabytes of storage at once. This limitation was overcome by a digital video R&D team at the Disney Channel led by Rick Eye. By February 1993, this team had integrated a long-form system that let the Avid Media Composer running on the Apple Macintosh access over seven terabytes of digital video data. With instant access to the shot footage of an entire movie, long-form non-linear editing was now possible. The system made its debut at the NAB conference in 1993 in the booths of the three primary sub-system manufacturers, Avid, Silicon Graphics and Sony. Within a year, thousands of these systems had replaced 35mm film editing equipment in major motion picture studios and TV stations worldwide.
Although M-JPEG became the standard codec for NLE during the early 1990s, it had drawbacks. Its high computational requirements ruled out software implementations, imposing the extra cost and complexity of hardware compression/playback cards. More importantly, the traditional tape workflow had involved editing from videotape, often in a rented facility. When the editor left the edit suite, they could securely take their tapes with them. But the M-JPEG data rate was too high for systems like Avid/1 on the Mac and Lightworks on PC to store the video on removable storage. The content needed to be stored on fixed hard disks instead. The secure tape paradigm of keeping your content with you was not possible with these fixed disks. Editing machines were often rented from facilities houses on a per-hour basis, and some productions chose to delete their material after each edit session, and then ingest it again the next day to guarantee the security of their content. In addition, each NLE system had storage limited by its fixed disk capacity.
These issues were addressed by a small UK company, Eidos Interactive. Eidos chose the new ARM-based computers from the UK and implemented an editing system, launched in Europe in 1990 at the International Broadcasting Convention. Because it implemented its own compression software designed specifically for non-linear editing, the Eidos system had no requirement for JPEG hardware and was cheap to produce. The software could decode multiple video and audio streams at once for real-time effects at no extra cost. But most significantly, for the first time, it supported unlimited cheap removable storage. The Eidos Edit 1, Edit 2, and later Optima systems let the editor use any Eidos system, rather than being tied down to a particular one, and still keep his data secure. The Optima software editing system was closely tied to Acorn hardware, so when Acorn stopped manufacturing the Risc PC in the late 1990s, Eidos discontinued the Optima system.
In the early 1990s, a small American company called Data Translation took what it knew about coding and decoding pictures for the US military and large corporate clients and spent $12 million developing a desktop editor based on its proprietary compression algorithms and off-the-shelf parts. Their aim was to democratize the desktop and take some of Avid's market. In August 1993, Media 100 entered the market, providing would-be editors with a low-cost, high-quality platform.
Around the same period, two other competitors provided non-linear systems that required special hardware—typically cards added to the computer system. Fast Video Machine was a PC-based system that first came out as an offline system, and later became more online editing capable. The Imix video cube was also a contender for media production companies. The Imix Video Cube had a control surface with faders to allow mixing and shuttle control. Data Translation's Media 100 came with three different JPEG codecs for different types of graphics and many resolutions. These other companies caused tremendous downward market pressure on Avid. Avid was forced to continually offer lower-priced systems to compete with the Media 100 and other systems.
Inspired by the success of Media 100, members of the Premiere development team left Adobe to start a project called "Keygrip" for Macromedia. Difficulty raising support and money for development led the team to take their non-linear editor to the NAB Show. After various companies made offers, Keygrip was purchased by Apple, as Steve Jobs wanted a product to compete with Adobe Premiere in the desktop video market. At around the same time, Avid—now with Windows versions of its editing software—was considering abandoning the Macintosh platform. Apple released Final Cut Pro in 1999, and despite not being taken seriously at first by professionals, it has evolved into a serious competitor to Avid's entry-level systems.
DV
Another leap came in the late 1990s with the launch of DV-based video formats for consumer and professional use. With DV came IEEE 1394 (FireWire/iLink), a simple and inexpensive way of getting video into and out of computers. Users no longer had to convert video from analog to digital—it was recorded as digital to start with—and FireWire offered a straightforward way to transfer video data without additional hardware. With this innovation, editing became a more realistic proposition for software running on standard computers. It enabled desktop editing producing high-quality results at a fraction of the cost of earlier systems.
HD
In the early 2000s, the introduction of highly compressed HD formats such as HDV continued this trend, making it possible to edit HD material on a standard computer running a software-only editing system.
Avid is an industry standard used for major feature films, television programs, and commercials. Final Cut Pro received a Technology & Engineering Emmy Award in 2002.
Since 2000, many personal computers have included basic non-linear video editing software free of charge. This is the case with Apple iMovie for the Macintosh platform, various open-source programs like Kdenlive, Cinelerra-GG Infinity and PiTiVi for the Linux platform, and Windows Movie Maker for the Windows platform. This phenomenon has brought low-cost non-linear editing to consumers.
The cloud
Because video editing involves large volumes of data, the proximity of the stored footage being edited to the NLE system doing the editing is governed partly by the capacity of the data connection between the two. The increasing availability of broadband internet, combined with the use of lower-resolution copies of original material, provides an opportunity not just to review and edit material remotely but also to open up access to the same content to far more people at the same time. In 2004 the first cloud-based video editor, known as Blackbird and based on technology invented by Stephen Streater, was demonstrated at IBC and recognised by the RTS the following year. Since that time a number of other cloud-based editors have become available, including systems from Avid, WeVideo and Grabyo. Despite their reliance on a network connection, the need to ingest material before editing can take place, and the use of lower-resolution video proxies, their adoption has grown. Their popularity has been driven largely by efficiencies arising from opportunities for greater collaboration and the potential for cost savings derived from using a shared platform, hiring rather than buying infrastructure, and the use of conventional IT equipment over hardware specifically designed for video editing.
4K
4K video in NLE was fairly new, but it was being used in the creation of many movies throughout the world, due to the increased use of advanced 4K cameras such as the Red Camera. Examples of software for this task include Avid Media Composer, Apple's Final Cut Pro X, Sony Vegas, Adobe Premiere, DaVinci Resolve, Edius, and Cinelerra-GG Infinity for Linux.
8K
8K video was relatively new. 8K video editing requires advanced hardware and software capable of handling the standard.
Image editing
For imaging software, early works such as HSC Software's Live Picture brought non-destructive editing to the professional market and current efforts such as GEGL provide an implementation being used in open-source image editing software.
Quality
An early concern with non-linear editing had been picture and sound quality available to editors. Storage limitations at the time required that all material undergo lossy compression techniques to reduce the amount of memory occupied. Improvements in compression techniques and disk storage capacity have mitigated these concerns, and the migration to high-definition video and audio has virtually removed this concern completely. Most professional NLEs are also able to edit uncompressed video with the appropriate hardware.
See also
Hard disk recorder
Index of video-related articles
Version control
Notes
References
External links
Episode of the TV show "The Computer Chronicles" from 1990, which includes a feature on the first Avid Media Composer
Film and video technology
Film editing
Digital audio
Video editing software
Filmmaking |
1939623 | https://en.wikipedia.org/wiki/Cdrdao | Cdrdao | cdrdao (“CD recorder disc-at-once”) is a free utility software product for authoring and ripping of CD-ROMs. The program is released under the GPL. Cdrdao records audio or data CD-Rs in disk-at-once mode based on a textual description of the CD contents, known as a TOC file that can be created and customized inside a text editor.
cdrdao runs from command line and has no graphical user interface, except for third-party ones such as K3b (Linux), Gcdmaster (Linux) or XDuplicator (Windows).
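As an illustration of this TOC-driven, command-line workflow, the following minimal Python sketch writes a simple TOC file for a two-track audio CD and then invokes cdrdao in disc-at-once mode. The TOC keywords (CD_DA, TRACK AUDIO, FILE) and the write sub-command with its --device option reflect cdrdao's documented usage; the track file names and the /dev/sr0 recorder path are hypothetical placeholders.

    # Minimal sketch: burn a two-track audio CD with cdrdao.
    # Track file names and the recorder device path are placeholders.
    import subprocess

    toc_lines = [
        "CD_DA",                 # audio CD
        "",
        "TRACK AUDIO",
        'FILE "track01.wav" 0',  # play the whole file, starting at offset 0
        "",
        "TRACK AUDIO",
        'FILE "track02.wav" 0',
        "",
    ]

    with open("mycd.toc", "w") as f:
        f.write("\n".join(toc_lines))

    # "cdrdao write <toc-file>" records the disc in disc-at-once mode;
    # --device selects which CD recorder to use.
    subprocess.run(["cdrdao", "write", "--device", "/dev/sr0", "mycd.toc"],
                   check=True)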
References
External links
cdrdao man page
Gcdmaster page
XDuplicator website
K3b website
Free optical disc authoring software
Free software programmed in C++
Console CD ripping software
Linux CD ripping software |
25909309 | https://en.wikipedia.org/wiki/Robert%20Creasy | Robert Creasy | Robert Jay Creasy (November 15, 1939 – August 11, 2005) was the project leader of the first full virtualization hypervisor, the IBM CP-40, which later developed into IBM's highly successful line of mainframe VM operating systems.
Biography
Robert J. Creasy was born on November 15, 1939, in Honesdale, Pennsylvania. He graduated from MIT in 1961, marrying Rosalind Reeves that year. After graduation, he worked as a programmer on the CTSS timesharing system and on Project MAC. Disappointed with the direction of MAC, when he heard that Norm Rasmussen, Manager of IBM's Cambridge Scientific Center, intended to build a time sharing system based on IBM's System/360 and needed someone to lead the project, Creasy left MIT to join IBM.
Robert and Rosalind moved to California in 1965.
He retired from IBM's Scientific Center in Palo Alto in 1993.
He died on August 11, 2005, in Pioneer, California, survived by his wife, Rosalind, son Robert W. and wife Julie; daughter, Laura and husband Joel; grandson, Joel Alexander; brother, John and wife Kathy, and other relatives.
Origins of VM
In the fall of 1964, the future development of time sharing was problematical. IBM had lost the Project MAC contract to GE, leading to the development of Multics. IBM itself had committed to a time sharing system known as TSS. At the IBM Cambridge Scientific Center, Manager Norm Rasmussen was concerned that IBM was heading in the wrong direction. He decided to proceed with his own plan to build a timesharing system, with Bob Creasy leading what became known as the CP-40 Project.
Creasy had decided to build CP-40 while riding on the MTA. “I launched the effort between Xmas 1964 and year’s end, after making the decision while on an MTA bus from Arlington to Cambridge. It was a Tuesday, I believe.” (R.J. Creasy, private communication with Melinda Varian, 1989.)
Creasy and Les Comeau spent the last week of 1964 joyfully brainstorming the design of CP-40, a new kind of operating system, a system that would provide not only virtual memory, but also virtual machines. They had seen that the cleanest way to protect users from one another (and to preserve compatibility as the new System/360 design evolved) was to use the System/360 Principles of Operations manual to describe the user's interface to the Control Program. Each user would have a complete System/360 virtual machine (which at first was called a “pseudo-machine”).
The idea of a virtual machine system had been bruited about a bit before then, but it had never really been implemented. The idea of a virtual S/360 was new, but what was really important about their concept was that nobody until then had seen how elegantly a virtual machine system could be built, with really very minor hardware changes and not much software.
Back during that last week of 1964, when they were working out the design for the Control Program, Creasy and Comeau immediately recognized that they would need a second system, a console monitor system, to run in some of their virtual machines. Although they knew that with a bit of work they would be able to run any of IBM's S/360 operating systems in a virtual machine, as contented users of CTSS they also knew that they wouldn't be satisfied using any of the available systems for their own development work or for the Center's other time-sharing requirements. Rasmussen, therefore, set up another small group under Creasy to build CMS (which was then called the “Cambridge Monitor System”).
Like Multics, CMS would draw heavily on the lessons taught by CTSS. Indeed, the CMS user interface would be very much like that of CTSS.
The combination of CP-40 and CMS evolved into CP/CMS which was made available to IBM customers in 1967. In 1972, a revised version was released as IBM's VM/370 product.
Notes
References
1939 births
2005 deaths
American computer scientists
VM (operating system) |
4478182 | https://en.wikipedia.org/wiki/Barry%20B.%20Powell | Barry B. Powell | Barry Bruce Powell (born 1942) is an American classical scholar. He is the Halls-Bascom Professor of Classics Emeritus at the University of Wisconsin–Madison, author of the widely used textbook Classical Myth and many other books. Trained at Berkeley and Harvard, he is a specialist in Homer and in the history of writing. He has also taught Egyptian philology for many years and courses in Egyptian civilization.
Work
His Writing: Theory and History of the Technology of Civilization (Wiley-Blackwell 2009) attempts to create a scientific terminology and taxonomy for the study of writing, and was described in Science as "stimulating and impressive" and "a worthy successor to the pioneering book by Semitic specialist I. J. Gelb." This book has been translated into Arabic and modern Greek.
Powell's study Homer and the Origin of the Greek Alphabet advances the thesis that a single man invented the Greek alphabet expressly in order to record the poems of Homer. This thesis is controversial. The book was the subject of an international conference in Berlin in 2002 and has been influential outside classical philology, especially in media studies. Powell's Writing and the Origins of Greek Literature follows up themes broached by the thesis.
Powell's textbook, Classical Myth (8th edition) is widely used for classical myth courses in America, Canada, Australia, New Zealand, and Taiwan, as his text The Greeks: History, Culture, Society (with Ian Morris) is widely used in ancient history classes. His text World Myth is popular in such courses.
Powell's critical study Homer is widely read as an introduction for philologists, historians, and students of literature. In this study, Powell suggested that Homer may have hailed from Euboea instead of Ionia.
A New Companion to Homer (with Ian Morris), also translated into modern Greek and Chinese, is a comprehensive review of modern scholarship on Homer.
His literary works include poetry (Rooms Containing Falcons), an autobiography (Ramses in Nighttown), a mock-epic (The War at Troy: A True History), an academic novel (A Land of Slaves: A Novel of the American Academy), a novel about Berkeley (The Berkeley Plan: A Novel of the Sixties), a novel about Jazz (Take Five, with Sanford Dorbin), and a collection of short fiction. He has published a memoir: Ramses Reborn. In Tales of the Trojan War he retells in a droll, sometimes ribald style, the stories attached to the Trojan cycle, based on ancient sources.
He has translated the Iliad and the Odyssey. The introduction to these poems discusses Powell's thesis about the Greek alphabet and the recording of Homer and is an influential review of modern Homeric criticism. He has also translated the Aeneid and the poems of Hesiod.
Works
Books
Composition by Theme in the Odyssey, Beiträge zur klassischen Philologie, 1974
Homer and the Origin of the Greek Alphabet, Cambridge University Press, 1991
A New Companion to Homer (with Ian Morris), E. J. Brill, 1995
A Short Introduction to Classical Myth, Pearson, 2000
Writing and the Origins of Greek Literature, Cambridge University Press, 2003
Homer, Wiley-Blackwell, 2004, 2nd ed. 2007
Helen of Troy, Screenplay based on Margaret George novel. 2006
Rooms Containing Falcons, poetry, 2006
The War at Troy: A True History, mock-epic, 2006
Ramses in Nighttown, a novel, 2006
The Greeks: History, Culture, Society (with Ian Morris), Pearson, 2006, 2nd ed. 2009
Writing: Theory and History of the Technology of Civilization, Wiley-Blackwell, 2009
Ilias, Odysseia, Greek text with translation of Alexander Pope, Chester River Press 2009
A Land of Slaves: A Novel of the American Academy, Orion Books 2011
World Myth, Pearson, 2013
The Iliad, Oxford University Press, 2013
The Odyssey, Oxford University Press, 2014
Classical Myth, eighth edition, Pearson, 2014
Homer's Iliad and Odyssey: The Essential Books, Oxford University Press, 2014
Vergil's Aeneid, Oxford University Press, 2015
Vergil's Aeneid: The Essential Books, Oxford University Press 2015
The Berkeley Plan: A Novel of the Sixties, Orion Books 2016
The House of Odysseus, and other short fictions, Orion Books 2016
The Poems of Hesiod: Theogony, Works and Days, the Shield of Heracles, University of California Press 2017
Take Five, A Story of Jazz in the Fifties (with Sanford Dorbin), Telstar Books, 2017
Ramses Reborn: A Memoir, Amazon Publications, 2017
Tales of the Trojan War, Amazon Publications, 2017
Articles
"Did Homer Sing at Lefkandi?", Electronic Antiquity 1(2), July 1993 (online version)
Notes and references
External links
Personal page at University of Wisconsin–Madison (archived 2012)
Staff profile page at University of Wisconsin–Madison (archived 2019)
American classical scholars
Living people
Classical philologists
Classical scholars of the University of Wisconsin–Madison
1942 births
Translators of Homer
Homeric scholars |
23752518 | https://en.wikipedia.org/wiki/History%20of%20the%20AmigaOS%204%20dispute | History of the AmigaOS 4 dispute | The following history of the AmigaOS 4 dispute documents the legal battle mainly between the companies Amiga, Inc. and Hyperion Entertainment over the operating system AmigaOS 4. On 30 September 2009, Hyperion and Amiga, Inc. reached a settlement agreement where Hyperion was granted an exclusive, perpetual and worldwide right to distribute and use 'The Software', a term used during the dispute and subsequent settlement to refer to source code from AmigaOS 3 and earlier, and ownership of AmigaOS 4.x and beyond.
Background
Amiga, Inc.
After Commodore filed for bankruptcy in 1994, its name and IP rights, including Amiga, were sold to Escom. Escom kept the Amiga products and sold the Commodore name on to Tulip Computers. Escom went bankrupt in 1997 and sold the Amiga IP to Gateway 2000 (now only Gateway). On 27 December 1999, Gateway sold the Amiga name and rights to Amino Development, who changed the company name to Amiga, Inc. once the assets had been acquired. The 'Amino' Amiga, Inc. and the 'KMOS' Amiga, Inc. are seen by Hyperion as legally distinct entities; contracts with one are of no relevance to the other.
Hyperion's OS4 project
Hyperion Entertainment released AmigaOS 4 (OS4) to the public in 2004. The five-year process of producing a modern PowerPC OS led to accusations of vapourware, given that Hyperion claimed to have the original AmigaOS 3.1 source code to reference (a claim later proven accurate). This was made worse by the apparently much more rapid progress and maturity of the competing and alternative AmigaOS clone MorphOS, which had been begun several years earlier. Perhaps the most important feature of OS4 as regards the legal dispute is the presence of an entirely new PowerPC-native kernel. ExecSG replaces the original Amiga Exec and is claimed to be entirely the work and property of Hyperion's subcontracted developers Thomas and Hans-Joerg Frieden. Neither Amiga, Inc. nor Hyperion actually owns ExecSG, so technically neither can demand or hand it over, leaving the OS with fragmented and confused ownership.
The supposed rebirth of Amiga
In 2007 The Inquirer reported that the Amiga was inching closer to rebirth with the long-awaited release of AmigaOS 4.0, a new PowerPC-native version of the classic AmigaOS (Motorola 68k) from the 1980s. This new PowerPC OS would run on the AmigaOne machines, now out of production, which could only run Linux while waiting for the new PowerPC OS to be released. The year after, Amiga, Inc. also announced a new AmigaOS 4-compatible system that would be available shortly. The new machine was neither Genesi's Efika nor the project codenamed Samantha (now known as the Sam440ep from ACube Systems). The new hardware was from a new entrant, the Canadian company ACK Software Controls, and would have consisted of a budget model and an advanced model.
The dispute
Four days after Amiga, Inc. announced the new Amiga OS4 (OS4) compatible machines, it sued Hyperion Entertainment (Hyperion). Amiga, Inc. stated that it decided to produce a PowerPC version of AmigaOS in 2001 and on November 3, 2001, it signed a contract with Hyperion (then a game developer for the 68k Amiga platform as well as Linux and Macintosh). Amiga, Inc. gave Hyperion access to the sources of the last Commodore version, AmigaOS 3.1, but access to the post-Commodore versions OS 3.5 and 3.9 had to be purchased from the third party responsible for their development, since Haage & Partner (developers of OS 3.5 and 3.9) never returned their AmigaOS source code to Amiga, Inc.
Amiga, Inc. also said that its contract allowed Hyperion to use Amiga trademarks in the promotion of OS4 on Eyetech's AmigaOne and stipulated that Hyperion should make its best efforts to deliver OS 4 by March 1, 2002, a port of an elderly operating system (68k) for an entirely different processor architecture (PowerPC) in four months, an optimistic target that Hyperion failed to meet.
According to Amiga, Inc., the contract permits the purchase of the full sources of OS4 from Hyperion for US$25,000. The court filing says that Amiga, Inc. paid this sometime in April–May 2003, to keep Hyperion from going bankrupt, and that between then and November 21, 2006, Amiga, Inc. paid another $7,200, then $8,850 more which it says Hyperion said was owing.
Furthermore, in the filing, Amiga, Inc. President Bill McEwen revealed that Amiga, Inc. had still not received the sources for AmigaOS 4, that he had discovered that much of its development was outsourced to third-party contract developers, and that it was not clear whether Hyperion had all the rights to this external work. Eventually, after five years and $41,050, on 21 November 2006, Amiga, Inc. told Hyperion it had violated the contract and gave it 30 days to sort it out—to finish the product and hand over the sources. That did not happen, so the contract was terminated on 20 December 2006.
Hyperion claims in its defense that Amiga, Inc. rendered the contract null through dealings with KMOS, a company which acquired the Amiga assets and renamed itself Amiga, Inc. over 2004–05.
Four days later, on 24 December 2006, Hyperion released the final version of OS4 – although according to Amiga, Inc., Hyperion claims that this was merely an update of the developers' preview version of 16 April 2004. Since the contract ended, Hyperion had no rights to use the name AmigaOS or any Amiga intellectual property, or to market OS4 or enter into any agreements about it with anyone else. Nevertheless, AmigaOS 4 was still being developed and distributed. Furthermore, ACube Systems released a series of Sam440ep motherboards, which run AmigaOS 4.
For a time, the case seemed deadlocked with neither side being apparently able to prove the point either way. Without Amiga, Inc.'s permission, Hyperion Entertainment could not use the AmigaOS name or related trademarks. Hyperion's defense centered around the potentially contract-voiding nature of the Amiga, Inc./KMOS handover, the problems they faced in acquiring the post-Commodore OS 3.x source code which Amiga, Inc. claimed to own and have access to, and the presence of new work and open components in the new operating system.
Hyperion Entertainment and Amiga, Inc. reached settlement agreement
On 30 September 2009, Hyperion Entertainment and Amiga, Inc. reached a settlement agreement whereby Hyperion was granted "an exclusive, perpetual, worldwide right to AmigaOS 3.1 in order to use, develop, modify, commercialize, distribute and market AmigaOS 4.x".
References
External links
Amiga, Inc.
Hyperion Entertainment
ACube Systems
AmigaOS 4 |
6327638 | https://en.wikipedia.org/wiki/Joanna%20Rutkowska | Joanna Rutkowska | Joanna Rutkowska (born 1981 in Warsaw) is a Polish computer security researcher, primarily known for her research on low-level security and stealth malware, and as founder of the Qubes OS security-focused desktop operating system.
She became known in the security community after the Black Hat Briefings conference in Las Vegas in August 2006, where Rutkowska presented an attack against the Vista kernel protection mechanism, as well as a technique dubbed Blue Pill that used hardware virtualization to move a running OS into a virtual machine. Subsequently, she was named one of the Five Hackers who Put a Mark on 2006 by eWeek Magazine for her research on the topic. The original concept of Blue Pill had been published by another researcher at IEEE Oakland in May 2006 under the name VMBR.
During the following years, Rutkowska continued to focus on low-level security. In 2007 she demonstrated that certain types of hardware-based memory acquisition (e.g. FireWire-based) are unreliable and can be defeated. Later in 2007, together with team member Alexander Tereshkin, she presented further research on virtualization malware. In 2008, Rutkowska and her team focused on Xen hypervisor security. In 2009, together with team member Rafal Wojtczuk, she presented an attack against Intel Trusted Execution Technology and Intel System Management Mode.
In April 2007, Rutkowska founded Invisible Things Lab in Warsaw, Poland. The company focuses on OS and VMM security research and provides various consulting services. In a 2009 blog post she coined the term "evil maid attack", detailing a method for accessing encrypted data on disk by compromising the firmware via an external USB flash drive.
In 2010, she and Rafal Wojtczuk began working on the Qubes OS security-oriented desktop Xen distribution, which utilizes Fedora Linux. The initial release of Qubes 1.0 was completed by September 3, 2012 and is available as a free download. Its main concept is "security by compartmentalization", using domains implemented as lightweight Xen virtual machines to isolate various subsystems. Each compartment is referred to as a Qube, which operates as a separate hardware level virtual machine. The project refers to itself as "a reasonably secure operating system" and has received endorsements by numerous privacy and security experts. It is fairly unique in its capabilities, having a design informed by research on proven vulnerabilities in the trusted compute base (TCB), that are unaddressed in most common desktop operating systems.
She has published seminal works on systems trustability, most recently Intel x86 Considered Harmful and State Considered Harmful - A Proposal for a Stateless Laptop. Rutkowska has been invited as an esteemed presenter at security conferences, such as Chaos Communication Congress, Black Hat Briefings, HITB, RSA Conference, RISK, EuSecWest & Gartner IT Security Summit.
References
External links
Invisible Things Lab - corporate website
CNET news - Vista Hacked at Black Hat
SubVirt: Implementing malware with virtual machines
People associated with computer security
Polish bloggers
Living people
Place of birth missing (living people)
Polish computer scientists
Polish women computer scientists
21st-century engineers
21st-century women scientists
Polish computer programmers
Women bloggers
1981 births
People from Warsaw |
60087082 | https://en.wikipedia.org/wiki/Ran%20Canetti | Ran Canetti | Ran Canetti (Hebrew: רן קנטי) is a professor of Computer Science at Boston University and the director of the Check Point Institute for Information Security and of the Center for Reliable Information Systems and Cyber Security. He is also an associate editor of the Journal of Cryptology and of Information and Computation. His main areas of research span cryptography and information security, with an emphasis on the design, analysis and use of cryptographic protocols.
Biography
Born in 1962 in Tel Aviv, Israel, Canetti obtained his BA in Computer Science in 1989, his BA in Physics in 1990, and his M.Sc in Computer Science in 1991, all from the Technion, Haifa. He received his PhD in 1995 from the Weizmann Institute, Rehovot under the supervision of Prof. Oded Goldreich. He then completed his post-doctoral training at the Lab of Computer Science, at the Massachusetts Institute of Technology (MIT) in 1996 under the supervision of Prof. Shafi Goldwasser. He then joined IBM’s T.J. Watson Research Center and was a Research Staff Member until 2008.
Canetti is known for his contribution to both the practice and theory of cryptography. Prominent contributions include the Keyed-Hash Message Authentication Code (HMAC), the definition of which was first published in 1996 in a paper by Mihir Bellare, Ran Canetti, and Hugo Krawczyk, and the formulation of the Universally Composable Security framework, which allows analyzing security of cryptographic protocols in a modular and robust way.
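To give a concrete sense of the first of these contributions, the short Python sketch below computes and verifies an HMAC tag with the standard-library hmac module, which implements the RFC 2104 construction; the key and message shown are placeholder values chosen purely for demonstration.

    # Illustrative use of HMAC (RFC 2104) via Python's standard library.
    # The key and message are placeholder demonstration values.
    import hashlib
    import hmac

    key = b"shared-secret-key"
    message = b"message to authenticate"

    # The sender computes an authentication tag over the message.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # The receiver recomputes the tag with the same key and compares it
    # in constant time; a mismatch indicates tampering or a wrong key.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))  # prints True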
Canetti is the recipient of the RSA Award for Excellence in Mathematics (2018). He is a Fellow of the International Association for Cryptologic Research. He received the IBM Research Outstanding Innovation Award in 2006, the IBM Corporate Award in 2005, the IBM Research Division Award in 1999, two IBM Best Paper Awards, and the Kennedy Thesis Award from the Weizmann Institute in 1996.
Current roles
Since July 2011, Canetti has been a Professor in the Department of Computer Science at Boston University, and since September 2011 the Director for Research at the university's Center for Reliable Information Systems and Cyber Security (RISCS). His current positions include being the Head of the Check Point Institute of Information Security at Tel Aviv University, an editor of the Journal of Cryptology and of Information and Computation, and an advisor at Identiq, a peer-to-peer identity validation network.
Canetti currently lives in Brookline, MA and is married with two children.
Patents
Canetti's registered patents and recognized and authorized standards include:
R. Canetti, S. Halevi, M. Steiner. Mitigating Dictionary Attacks on Password-Based Local Storage. Patent application submitted August 2006.
R. Canetti, M. Charikar, R. Kumar, S. Rajagopalan, A. Sahai, A. Tomkins. Non-Transferable Anonymous Credentials. U.S. Patent No. 7,222,362, May 2007.
R. Canetti and A. Herzberg, A Mechanism for Keeping a Key Secret from Mobile Eavesdroppers. US patent No. 5,412,723, May 1995.
R. Canetti and A. Herzberg, Secure Communication and Computation in an Insecure Environment. US patent No. 5,469,507, November 1995.
Standards
M. Baugher, R. Canetti, L. Dondeti, F. Lindholm, “Group Key Management Architecture,” Internet Engineering Task Force RFC 4046, 2005.
A. Perrig, R. Canetti, B. Briscoe, D. Tygar, D. Song, “TESLA: Multicast Source Authentication Transform”, Internet Engineering Task Force RFC 4082, 2005.
H. Krawczyk, M. Bellare and R. Canetti, “HMAC: Keyed-Hashing for Message Authentication”, Internet Engineering Task Force RFC 2104, February 1997. Also appears as an American National Standard Institute (ANSI) standard X9.71 (2000), and as a Federal Information Processing Standard No. 198, National Institute of Standards and Technology (NIST), 2002.
Books
Canetti has also authored several books including:
Security and Composition of Cryptographic Protocols, a chapter in Secure Multiparty Computation, ed. Manoj Prabhakaran and Amit Sahai, Cryptology and Information Security Series, IOS Press, 2013
A chapter in the Journal of Cryptology Special Issue on Byzantine Agreement. R. Canetti, (Ed.) Vol. 18, No. 3, 2005
Chapter on the Decisional Diffie-Hellman Assumption. Encyclopedia of Cryptography and Security, H. van Tilborg, Henk (Ed.), Springer-Verlag, 2005.
Publications
Bellare, Mihir; Canetti, Ran; Krawczyk, Hugo. Keying Hash Functions for Message Authentication, 1996
R. Canetti, Universally Composable Security: A New Paradigm for Cryptographic Protocols. 42nd FOCS, 2001
N. Bitansky, R. Canetti, O. Paneth, A. Rosen. On the Existence of Extractable One-Way Functions, STOC, 2014
Ran Canetti, Yilei Chen, Leonid Reyzin, Ron D. Rothblum 2018: Fiat-Shamir and Correlation Intractability from Strong KDM-Secure Encryption. EUROCRYPT(1): 91-122.
Ran Canetti, Ling Cheung, Dilsun Kirli Kaynar, Moses Liskov, Nancy A. Lynch, Olivier Pereira, Roberto Segala (2018): Task-structured Probabilistic I/O Automata. J. Comput. Syst. Sci. 94: 63-97.
Some of Canetti's past activities include being a co-organizer of the Crypto in the Clouds Workshop at MIT (2009), co-organizer of the CPIIS TAU/IDC Workshop on Electronic voting (2009), co-organizer of the Theoretical Foundations of Practical Information Security workshop (2008). He was also the Program Committee chair for the Theory of Cryptography Conference (2008) and for eight years was the co-chair of the Multicast Security Working Group at the Internet Engineering Task Force (2000-2008).
Ran Canetti's Full List of Publications (1990-2018)
Areas of research
His research interests span multiple aspects of cryptography and information security, with emphasis on the design, analysis and use of cryptographic protocols.
Awards
RSA Award for Excellence in Mathematics 2018
IBM Research Outstanding Innovation Award, 2006. Given for work on sound foundations for modern cryptography.
IBM Corporate Award, 2005. Given for the continued impact of the HMAC algorithm.
IBM Research Best Paper Award, 2004
IBM Research Outstanding Innovation Award, 2004
IBM Research Best Paper Award, 2001
IBM Research Division Award, 1999. Given for contribution to the IPSEC standard.
IBM Innovation Award, 1997. Given for the design of the HMAC message authentication function.
The Kennedy Thesis Award, The Weizmann Institute, 1996
The Rothschild post-doctoral scholarship, 1995-6
The Gutwirth Special Excellence Fellowship, the Technion, 1992
Public appearances
Canetti has spoken at major conferences worldwide including the below selection of keynote talks:
Composable Formal Security Analysis: Juggling Soundness, Simplicity and Efficiency, given at ICALP 2008, Reykjavik, Iceland 2008. See the accompanying paper
Obtaining Universally Composable Security: Towards the Bare Bones of Trust, given at AsiaCrypt 2007, Kuching, Malaysia, December 2007, Slides (PDF). See also accompanying paper.
How to Obtain and Assert Composable Security, given at the 16th Usenix Security Symposium, Boston, MA, August 2007, Slides (PDF) and audio recording (mp3)
Universally Composable Security with Global Set-Up, given at IPAM Program on Applications and Foundations of Cryptography and Computer Security UCLA, November 2006, Slides (PDF)
Security and Composition of Cryptographic Protocols: A Tutorial, given at IPAM Program on Applications and Foundations of Cryptography and Computer Security UCLA, September, 2006. Slides (PDF). See also accompanying paper.
The HMAC Construction: A Decade Later, given at MIT CIS Seminar, December 2005. Slides (PDF)
References
1962 births
Living people
Boston University faculty
IBM people
Israeli cryptographers
Israeli computer scientists
Technion – Israel Institute of Technology alumni
Tel Aviv University faculty
Weizmann Institute of Science alumni |
9630727 | https://en.wikipedia.org/wiki/Black%20Ball%20Line%20%28trans-Atlantic%20packet%29 | Black Ball Line (trans-Atlantic packet) | The Black Ball Line (originally known as the Wright, Thompson, Marshall, & Thompson Line, then as the Old Line) was a passenger line founded by a group of New York Quaker merchants headed by Jeremiah Thompson, and included Isaac Wright & Son (William), Francis Thompson and Benjamin Marshall. All were Quakers except Marshall.
The line initially consisted of four packet ships, the Amity, Courier, Pacific and the James Monroe. All of these were running between Liverpool, England and New York City. This first scheduled trans-Atlantic service was founded in 1817. In operation for some 60 years, it took its name from its flag, a black ball on a red background.
History
The Wright, Thompson, Marshall, & Thompson Line was founded in 1817 and began shipping operations in 1818. At some point in the line's history it became known as the Old Line and eventually became known as the Black Ball Line after the 1840s. The Black Ball Line established the modern era of liners. The packet ships were contracted by governments to carry mail and also carried passengers and timely items such as newspapers. Up to this point there were no regular passages advertised by sailing ships. They arrived at port when they could, dependent on the wind, and left when they were loaded, frequently visiting other ports to complete their cargo. The Black Ball Line undertook to leave New York on a fixed day of the month irrespective of cargo or passengers. The service took several years to establish itself and it was not until 1822 that the line increased sailings to two per month; it also reduced the cost of passage to 35 guineas.
The sensation this created brought in competitors such as the Red Star Line, which also adopted fixed dates. The average passage of packets from New York to Liverpool was 23 days eastward and 40 days westward. But this was at a period where usual reported passages were 30 and 45 days respectively, while westward passages of 65 to 90 days excited no attention. The best passage from New York to Liverpool in those days was the 15 days 16 hours achieved at the end of 1823 by the ship New York (though often incorrectly reported as Canada). The westward crossing had a remarkable record of 15 days 23 hours set by the Black Ball's Columbia in 1830, during an unusually prolonged spell of easterly weather which saw several other packet ships making the journey in 16 to 17 days. Captain Joseph Delano was reported to be "up with the Banks of Newfoundland in ten days".
In 1836 the Line passed into the hands of Captain Charles H. Marshall, who gradually added the Columbus, Oxford, Cambridge, New York, England, Yorkshire, Fidelia, Isaac Wright, Isaac Webb, the third Manhattan, Montezuma, Alexander Marshall, Great Western, and Harvest Queen to the fleet.
The Black Ball Line is mentioned in several sea shanties, such as "Blow the Man Down," "Homeward Bound", "Eliza Lee", "New York Girls", and "Hurrah for the Black Ball Line."
List of Black Ball Line (USA) ships
Similar shipping lines
In 1851 James Baines & Co. of Liverpool entered the packet trade using the same name and flag as the New York company, despite its protests. Thus, for about twenty years, two "Black Ball lines" under separate ownership were operating in direct competition on the transatlantic packet trade. James Baines & Co. also operated ships running between Liverpool and Australia, including famous clipper ships such as Champion of the Seas, James Baines, Lightning, Indian Queen, Marco Polo and Sovereign of the Seas.
The Saint John-Liverpool Packet Line which existed for a couple of years in the 1850s was also known as the Black Ball Line. It was managed by Richard Wright, St. John, and William and James Fernie, in Liverpool.
Notes
References
External links
A Tribute To A Dynasty: The Black Ball Line and The Pacific Northwest
Transatlantic WNYC Reading Room
Steamship Ticket for Passage of Mr. Nicholas Fish on the Packet Ship Yorkshire, 1859 at GG Archives
Transatlantic shipping companies
Transport companies established in 1817
Packet (sea transport)
Defunct shipping companies of the United Kingdom
Defunct shipping companies of the United States
Companies with year of disestablishment missing
Historic transport in Merseyside |
4578845 | https://en.wikipedia.org/wiki/Wireless%20site%20survey | Wireless site survey | A wireless site survey, sometimes called an RF (Radio Frequency) site survey or wireless survey, is the process of planning and designing a wireless network, to provide a wireless solution that will deliver the required wireless coverage, data rates, network capacity, roaming capability and quality of service (QoS). The survey usually involves a site visit to test for RF interference, and to identify optimum installation locations for access points. This requires analysis of building floor plans, inspection of the facility, and use of site survey tools. Interviews with IT management and the end users of the wireless network are also important to determine the design parameters for the wireless network.
As part of the wireless site survey, the effective range boundary is set, which defines the area over which signal levels are sufficient to support the intended application. This involves determining the minimum signal-to-noise ratio (SNR) needed to support performance requirements.
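As a simple hypothetical illustration of that calculation (the figures below are invented rather than taken from any standard): if an application needs a 25 dB SNR and the measured noise floor is -95 dBm, the effective range boundary lies wherever the received signal drops below -70 dBm.

    # Illustrative only: derive the minimum usable signal level that sets
    # the effective range boundary, given a required SNR and a measured
    # noise floor. The example figures are hypothetical.
    def minimum_signal_dbm(required_snr_db: float, noise_floor_dbm: float) -> float:
        """The signal must exceed the noise floor by the required SNR."""
        return noise_floor_dbm + required_snr_db

    print(minimum_signal_dbm(25.0, -95.0))  # prints -70.0 (dBm)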
Wireless site survey can also mean the walk-testing, auditing, analysis or diagnosis of an existing wireless network, particularly one which is not providing the level of service required.
Wireless site survey process
Wireless site surveys are typically conducted using computer software that collects and analyses WLAN metrics and/or RF spectrum characteristics. Before a survey, a floor plan or site map is imported into a site survey application and calibrated to set scale. During a survey, a surveyor walks the facility with a portable computer that continuously records the data. The surveyor either marks the current position on the floor plan manually, by clicking on the floor plan, or uses a GPS receiver that automatically marks the current position if the survey is conducted outdoors. After a survey, data analysis is performed and survey results are documented in site survey reports generated by the application.
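As a rough sketch of the kind of data such a walk-through produces (the field names, sample values and one-metre grid below are invented for illustration, not taken from any particular product), each recorded sample ties a position on the calibrated floor plan to a measured signal level, and the application later aggregates the samples into a coverage map:

    # Hypothetical walk-test samples: (x, y) position in metres on the
    # calibrated floor plan, plus the RSSI in dBm measured at that point.
    from collections import defaultdict
    from statistics import mean

    samples = [
        (2.4, 3.1, -52),
        (2.6, 3.0, -54),
        (10.2, 7.8, -71),
    ]

    # Group samples into 1 m x 1 m grid cells and average the readings
    # (averaged directly in dB here for simplicity); per-cell values are
    # what a survey tool would render as a coverage heat map.
    cells = defaultdict(list)
    for x, y, rssi in samples:
        cells[(int(x), int(y))].append(rssi)

    coverage = {cell: mean(values) for cell, values in cells.items()}
    print(coverage)  # {(2, 3): -53, (10, 7): -71}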
All these data collection, analysis, and visualization tasks are highly automated in modern software. In the past, however, these tasks required manual data recording and processing.
Types of wireless site surveys
There are three main types of wireless site surveys: passive, active, and predictive.
During a passive survey, a site survey application passively listens to WLAN traffic to detect active access points, measure signal strength and noise level. However, the wireless adapter being used for a survey is not associated to any WLANs. For system design purposes, one or more temporary access points are deployed to identify and qualify access point locations. This used to be the most common method of pre-deployment wifi survey.
During an active survey, the wireless adapter is associated with one or several access points to measure round-trip time, throughput rates, packet loss, and retransmissions. Active surveys are used to troubleshoot wifi networks or to verify performance post-deployment.
During a predictive survey, a model of the RF environment is created using simulation tools. It is essential that the correct information on the environment is entered into the RF modeling tool, including location and RF characteristics of barriers like walls or large objects. Therefore, temporary access points or signal sources can be used to gather information on propagation in the environment. Virtual access points are then placed on the floor plan to estimate expected coverage and adjust their number and location. The value of a predictive survey as a design tool versus a passive survey done with only a few access point is that modeled interference can be taken into account in the design.
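The propagation estimate behind such a model can be as simple as a free-space path-loss formula with a fixed attenuation added for each intervening wall; the sketch below uses that simplification, and the transmit power and per-wall loss figures are illustrative assumptions rather than values from any particular modeling tool.

    # Simplified predictive estimate: free-space path loss plus a fixed
    # per-wall attenuation. Transmit power and wall loss are assumptions.
    import math

    def free_space_path_loss_db(distance_m: float, freq_mhz: float) -> float:
        """FSPL in dB, with distance converted to kilometres and frequency in MHz."""
        return (20 * math.log10(distance_m / 1000.0)
                + 20 * math.log10(freq_mhz) + 32.44)

    def predicted_rssi_dbm(tx_power_dbm, distance_m, freq_mhz,
                           walls=0, wall_loss_db=4.0):
        loss = free_space_path_loss_db(distance_m, freq_mhz) + walls * wall_loss_db
        return tx_power_dbm - loss

    # e.g. a 20 dBm access point on channel 6 (2437 MHz), 15 m away,
    # with two walls in the path: roughly -52 dBm predicted.
    print(round(predicted_rssi_dbm(20, 15, 2437, walls=2), 1))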
Additionally, some survey applications allow the user to collect RF spectrum data using portable hardware spectrum analyzers, which is beneficial in cases of high RF interference from non-802.11 sources, such as microwave ovens or cordless phones.
Site survey software and hardware
Depending on the survey type, a number of software and software/hardware options are available to WLAN surveyors.
Software
Passive and active surveys are performed using software and typically require only a compatible off-the-shelf Wi-Fi adapter; no additional specialized hardware is required. Predictive surveys require no hardware at all, as no wireless data collection is needed. Currently, professional-level site survey applications exist primarily for Microsoft Windows. Some site survey applications for other platforms, including iOS and Android, also exist; however, they are limited in functionality due to the limitations of the underlying platform API. For example, signal level measurements cannot be obtained on iOS without jailbreaking. The feasibility of creating professional-level applications for non-Windows tablets is debated.
Hardware
Unlike passive and active surveys, RF spectrum surveys require specialized RF equipment. There are various types of spectrum analyzers ranging from large and expensive bench-top units to portable ("field units") and PC-based analyzers. Because portability is a decisive factor in conducting wireless site surveys, PC-based spectrum analyzers in CardBus and USB form factors are widely used today. WLAN chipset manufacturers are starting to incorporate spectrum analysis into their chipset designs; this functionality is integrated into some high-end enterprise-class 802.11n access points.
See also
Comparison of wireless site survey applications
References
Site survey |
61585454 | https://en.wikipedia.org/wiki/M%C3%A1ire%20O%27Neill | Máire O'Neill | Máire O'Neill (née McLoone) (born 1978) is an Irish Professor of Information Security and inventor based at the Centre for Secure Information Technologies Queen's University Belfast. She was named the 2007 British Female Inventors & Innovators Network Female Inventor of the Year. She was the youngest person to be made a professor of engineering at Queen's University Belfast and youngest person to be inducted into the Irish Academy of Engineering.
Early life and education
O'Neill is from Glenties. Her father, John McLoone, built a hydroelectric scheme on the Oweneda river, which was close to O'Neill's house, providing the family with free electricity. He was a vice-principal and maths teacher at Glenties Comprehensive School. She has lived in Belfast since she was a teenager. At Strathearn School she studied mathematics, physics and technology. She studied electronic engineering at Queen's University Belfast, and was sponsored by a local company to work on data security. She decided to stay on for a PhD in the architectures of data encryption. She was a PhD student working under the supervision of John McCanny at Queen's University Belfast. During her PhD she worked at a university spin-out company, Amphion Semiconductor, where she designed electronic circuits. Her first interaction with entrepreneurship was during her doctoral training, when her PhD project on high speed Advanced Encryption Standard (AES) was successfully commercialised by an American semiconductor company for use in set-top boxes. The AES circuit design developed by O'Neill improved hardware efficiency six-fold. She earned her PhD in 2002, and was awarded a Royal Academy of Engineering Research Fellowship in 2003. Together with John McCanny O'Neill wrote a book about system on-chip architectures for private-key data encryption.
Research and career
In 2004 O'Neill was made a lecturer in Electronics, Communications and Information Technology at Queen's University Belfast. She worked on security systems to protect users from cyber threats, and was made Head of the Cryptography Research Team. She works on improving hardware security. She has also worked with the Electronics and Telecommunications Research Institute on a new type of security system to protect electric vehicle charging systems, which was licensed by LG CNS. O'Neill was awarded an Engineering and Physical Sciences Research Council Leadership Grant to develop research into next-generation data security and has since been awarded a Horizon 2020 grant. Her research has considered the data security requirements that are associated with emerging applications of mobile computing. She worked on quantum dot cellular automata circuit design techniques, which are being considered as alternatives to CMOS and have lower power dissipation. She also developed PicoPUF, a physical unclonable function (PUF) device that contained a semiconductor IP core to provide authentication for microchips, which was awarded the INVENT2015 prize. In 2013 O'Neill wrote the academic textbook Design of Semiconductor QCA Systems, which was published by Artech House.
There is a region of China that produces 95% of the world's cultured freshwater pearls. Differentiating a real pearl from a fake one can be challenging, and increasing numbers of counterfeit pearls are bankrupting Chinese pearl farmers. O'Neill devised an approach to determine whether or not a pearl is real using Radio-Frequency Identification (RFID) tags. These RFID tags could be embedded into each pearl that the farmers collect, guaranteeing their authenticity. Other information about the pearl could be encoded onto the RFID tag and read with a simple scanner. Just as fake pearls cause economic damage, hacked and cloned devices acting on a network can be dangerous. O'Neill is also investigating ways to secure connected devices, the so-called internet of things. In 2017 she was made Director of the United Kingdom Research Institute in Secure Hardware and Embedded Systems, a £5 million centre in Belfast.
O'Neill is currently investigating post-quantum cryptography algorithms.
Academic service
At the age of 32 O'Neill was the youngest person ever to be appointed a professor of engineering at Queen's University Belfast. In 2018 O'Neill was named the Principal Investigator of the Centre for Secure Information Technologies (CSIT) in the Northern Ireland Science Park. She delivered a TED talk on the future of internet of things security at Queen's University Belfast in 2019. She has appeared on BBC World Service. She was elected to join the UK Artificial Intelligence council in May 2019.
In August 2019 O'Neill was appointed Acting Director of ECIT, the Institute of Electronics, Communications and Information Technology at Queen's University Belfast.
O'Neill has worked on improving gender balance in engineering throughout her career. She was the 2006 Belfast Telegraph schools lecturer, sharing her work on data encryption with hundreds of school children. O'Neill led the successful Queen's University Belfast silver Athena SWAN application. She has described Wendy Hall as one of her role models.
Awards and honours
Her awards and honours include;
2003 Royal Academy of Engineering Research Fellowship
2004 Vodafone Award at Britain's Younger Engineers Event
2006 Women's Engineering Society prize at the IET Young Woman Engineer of the Year
2007 British Female Inventors & Innovators Network Female Inventor of the Year
2007 European Union Women Inventors & Innovators Innovator of the Year
2015 Fellow of the Irish Academy of Engineering
2014 Royal Academy of Engineering Silver Medal
2015 INVENT 2015
2017 Elected to Royal Irish Academy
2019 Blavatnik Awards for Young Scientists
2019 Fellow of the Royal Academy of Engineering
2020 Regius Professorship
Personal life
O'Neill is married to an electronic engineer, and they have three children. Her two brothers are both electronic engineers and her two sisters are medical doctors.
References
1978 births
Living people
Alumni of Queen's University Belfast
Academics of Queen's University Belfast
Fellows of the Royal Academy of Engineering
Female Fellows of the Royal Academy of Engineering
Members of the Royal Irish Academy
Irish women engineers
Irish inventors |
33225484 | https://en.wikipedia.org/wiki/Formula%20Systems | Formula Systems | Formula Systems () is a publicly traded holding company headquartered in Or Yehuda, Israel. Through its subsidiaries it operates mainly in the area of information technology. Shares of Formula Systems are traded on the NASDAQ Global Select Market and on the Tel Aviv Stock Exchange. In 2010 Polish software maker Asseco acquired a 50.2% stake in the company.
History
Formula Systems was founded in 1985 by Dan Goldstein and his brother Gad. Dan Goldstein, who had an undergraduate degree in math and computer science and a master's in business management, had been a doctoral student at Tel Aviv University when, in 1981, he decided to leave the academic world and direct his efforts toward the private sector instead. He established a software company, which he called Formula Software Solutions, serving such clients as Egged and Mekorot, and recruited his brother Gad Goldstein to work alongside him. Soon the two co-founded Formula Systems, as well as many other technology companies. By 1998, the company had established its position as a technology-oriented holding company. As such, it created, developed and invested in a number of early companies floated and traded on Nasdaq, such as BluePhoenix, Wiztech, Matrix, Sapiens International Corporation and many others. In 1998 Formula Systems acquired control of the Mashov Group, including assets such as Magic Software Enterprises, Walla!, Paradigm Geophysical, Babylon (software) and others. In 2005, Dan Goldstein split Formula Systems into two entities by spinning off all the young start-up companies under Mashov Computers, which he renamed Formula Vision. Goldstein sold his share in Formula Systems to the Emblaze Group, and dedicated himself to the cultivation of several struggling yet promising start-ups.
In 2006, 33.6% of Formula was owned by Emblaze Group. In late 2010, Polish software maker Asseco, a subsidiary of Asseco Group, acquired a 50.2% stake in Formula Systems.
Subsidiaries
Formula Systems' portfolio has changed over the years, but always with an emphasis on software engineering. At various times it included Magic Software Enterprises, Crystal Systems, Liraz, Sivan, Matrix IT, Sapiens International Corporation, New Applicom, Sintec, nextSource, and BluePhoenix Solutions. As of fiscal year 2010, Formula retains a majority interest in three public companies: Sapiens International Corporation N.V. (71.6%), Magic Software Enterprises (51.7%), and Matrix (50.1%).
See also
TA BlueTech Index
List of Israeli companies quoted on the Nasdaq
References
Companies listed on the Nasdaq
Companies listed on the Tel Aviv Stock Exchange
Companies established in 1985
Software companies of Israel
1985 establishments in Israel |
5348971 | https://en.wikipedia.org/wiki/Pocket%20Paint | Pocket Paint | Pocket Paint is the Windows CE version of the raster graphics editor Microsoft Paint accessory commonly included with the Windows operating system. Because it is written to run on the leaner Windows CE operating system, it lacks a few of the features found in its bigger brother for the desktop. The only image format supported is BMP.
Pocket Paint was bundled as part of Microsoft's Handheld PC and Palm-size PC Power Toys for Windows CE. Like the desktop Power Toys, the applications included in this freeware distribution were not officially supported by Microsoft, although most if not all were written by the operating system developers.
Pocket Paint has not been included with any of Microsoft's next-generation Power Toys for the CE operating system, such as the Pocket PC platform, and appears to be abandoned.
See also
Comparison of raster graphics editors
Microsoft Fresh Paint
External links
Review of Pocket Paint
Download Pocket Paint with the Plus! Pack for Handheld PC
Raster graphics editors |
1044497 | https://en.wikipedia.org/wiki/Workplace%20OS | Workplace OS | Workplace OS is IBM's ultimate operating system prototype of the 1990s. It is the product of an exploratory research program in 1991 which yielded a design called the Grand Unifying Theory of Systems (GUTS), proposing to unify the world's systems as generalized personalities cohabitating concurrently upon a universally sophisticated platform of object-oriented frameworks upon one microkernel. Developed in collaboration with Taligent and its Pink operating system imported from Apple via the AIM alliance, the ambitious Workplace OS was intended to improve software portability and maintenance costs by aggressively recruiting all operating system vendors to convert into Workplace OS personalities. In 1995, IBM reported that "Nearly 20 corporations, universities, and research institutes worldwide have licensed the microkernel, laying the foundation for a completely open microkernel standard." At the core of IBM's new unified strategic direction for the entire company, the project was intended also as a bellwether toward PowerPC hardware platforms, to compete with the Wintel duopoly.
With protracted development spanning four years and $2 billion (or 0.6% of IBM's revenue for that period), the project suffered development hell characterized by workplace politics, feature creep, and the second-system effect. Many idealistic key assumptions made by IBM architects about software complexity and system performance were never tested until far too late in development, and found to be infeasible. In January 1996, the first and only commercial preview was launched with the name "OS/2 Warp Connect (PowerPC Edition)" by limited special order by select IBM customers, as a crippled product. The entire operating system was soon discontinued in 1996 due to very low market demand, including that of enterprise PowerPC hardware.
A University of California case study described the Workplace OS project as "one of the most significant operating systems software investments of all time" and "one of the largest operating system failures in modern times".
Overview
Objective
By 1990, IBM acknowledged the software industry to be in a state of perpetual crisis. This was due to the chaos from inordinate complexity of software engineering inherited by its legacy of procedural programming practices since the 1960s. Large software projects were too difficult, fragile, expensive, and time-consuming to create and maintain; they required too many programmers, who were too busy with fixing bugs and adding incremental features to create new applications. Different operating systems were alien to each other, with their own proprietary applications. IBM envisioned "life after maximum entropy" through "operating systems unification at last" and wanted to lay a new worldview for the future of computing.
IBM sought a new worldview of a unified foundation for computing, based upon the efficient reuse of common work. It wanted to break the traditional monolithic software development cycle of producing alphas, then betas, then testing, and repeating over the entire operating system — instead, compartmentalizing the development and quality assurance of individual unit objects. This new theory, unifying existing legacy software while defining a new way of building all new software, was nicknamed the Grand Unified Theory of Systems, or GUTS.
Coincidentally, Apple already had a two-year-old secret prototype of its microkernel-based object-oriented operating system with application frameworks, named Pink. The theory of GUTS was expanded by Pink, yielding Workplace OS.
Architecture
IBM described its new microkernel architecture as scalable, modular, portable, client/server distributed, and open and fully licensable both in binary and source code forms. This microkernel-based unified architecture was intended to allow all software to become scalable both upward into supercomputing space and downward into mobile and embedded space.
By building upon a single microkernel, IBM wanted to achieve its grand goal of unification, simplifying complex development models into reusable objects and frameworks while retaining complete backward compatibility with legacy and heritage systems. Multiple-library support would allow developers to progressively migrate select source code objects to 64-bit mode, with side-by-side selectable 32-bit and 64-bit modes. IBM's book on Workplace OS says, "Maybe we can get to a 64-bit operating system in our lifetime." IBM intended shareable objects to eventually reduce the footprint of each personality, scaling them down to a handheld computing profile.
At the base of Workplace OS is a fork of the Mach 3.0 microkernel (release mk68) originally developed by Carnegie Mellon University and heavily modified by the Open Software Foundation's Research Institute. Officially named "IBM Microkernel", it provides five core features: IPC, virtual memory support, processes and threads, host and processor sets, and I/O and interrupt support.
On top of the IBM Microkernel is a layer of shared services (originally called Personality Neutral Services or PNS) to cater to some or all of the personalities above them. Shared services are endian-neutral, have no user interface, and can serve other shared services. Byte summarizes that shared services "can include not only low-level file system and device-driver services but also higher-level networking and even database services. [Workplace OS's lead architect Paul Giangarra] believes that locating such application-oriented services close to the microkernel will improve their efficiency by reducing the number of function calls and enabling the service to integrate its own device drivers." This layer contains the file systems, the scheduler, network services, and security services. IBM first attempted a device driver model completely based in userspace to maximize its dynamic configuration, but later found the need to blend it between userspace and kernelspace, while keeping as much as possible in userspace. The Adaptive Driver Architecture (ADD) was designed for the creation of layered device drivers, which are easily portable to other hardware and operating system platforms beyond Workplace OS, and which consist of about 5000-8000 lines of device-specific code each. Some shared services are common only to select personalities, such as MMPM, which serves multimedia only to the Windows 3.1 and OS/2 personalities and is alien or redundant to other markets.
Atop the shared services, another layer of userspace servers called personalities provides DOS, Windows, OS/2 (Workplace OS/2), and UNIX (WPIX) environments. The further hope was to support OS/400, AIX, Taligent OS, and MacOS personalities. Personalities provide environment subsystems to applications. Any one personality can be made dominant for a given version of the OS, providing the desktop user with a single GUI environment to accommodate the secondary personalities. In 1993, IBM intended one release version to be based upon the OS/2 Workplace Shell and another to be based upon the UNIX Common Desktop Environment (CDE).
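The layering described above — personality servers in user space translating their APIs into IPC requests that shared services answer over the microkernel — can be pictured with a toy model. The following Python sketch is purely schematic; the port names, message format and `dos_open` call are invented for illustration and do not correspond to the real IBM Microkernel or Workplace OS/2 interfaces.

```python
import threading
from queue import Queue

# Schematic three-layer model: a microkernel offering only message passing,
# a shared service, and a personality exposing an OS-style API on top of IPC.

class Microkernel:
    """Provides only ports and message passing."""
    def __init__(self):
        self.ports = {}
    def create_port(self, name):
        self.ports[name] = Queue()
    def send(self, port, msg):
        self.ports[port].put(msg)
    def receive(self, port):
        return self.ports[port].get()

class FileService:
    """A 'shared service': no user interface, usable by any personality."""
    def __init__(self, kernel):
        self.kernel = kernel
        kernel.create_port("filesvc")
    def serve_one(self):
        msg = self.kernel.receive("filesvc")
        if msg["op"] == "open":
            self.kernel.send(msg["reply_to"], {"handle": 42})

class OS2Personality:
    """A personality: an OS/2-flavoured API implemented over kernel IPC."""
    def __init__(self, kernel):
        self.kernel = kernel
        kernel.create_port("os2.reply")
    def dos_open(self, path):
        self.kernel.send("filesvc",
                         {"op": "open", "path": path, "reply_to": "os2.reply"})
        return self.kernel.receive("os2.reply")["handle"]

kernel = Microkernel()
service = FileService(kernel)
personality = OS2Personality(kernel)

threading.Thread(target=service.serve_one).start()
print(personality.dos_open("C:\\CONFIG.SYS"))   # -> 42
```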
IBM explained the branding: "Workplace OS is the codename for a collection of operating system components including, among others, the IBM Microkernel and the OS/2 personality. Workplace OS/2 is the specific codename for the OS/2 personality. Workplace OS/2 will operate with the IBM Microkernel and can be considered OS/2 for the PowerPC." For the 1995 final preview release, IBM continued, "When we stopped using the name 'Workplace' and started calling the product 'OS/2 for the PowerPC', you might have thought that the 'Workplace' was dead. But the 'Workplace' is far from dead. It has simply been renamed for prime time."
IBM intended for Workplace OS to run on several processor architectures, including PowerPC, ARM, and x86 which would range in size from handheld PDAs to workstations to large 64-bit servers and supercomputers. IBM saw the easy portability of the Workplace OS as creating a simple migration path to move its existing x86 (DOS and OS/2) customer base onto a new wave of standard reference PowerPC-based systems, such as the PC Power Series and the Power Macintosh. Creating a unique but open and industry-standard reference platform of open-source microkernel, IBM hedged its company-wide operating system strategy by aggressively attempting to recruit other computer companies to adopt its microkernel as a basis for their own operating systems.
History
Development
GUTS
In January 1991, there was an internal presentation to the IBM Management Committee of a new strategy for operating system products. This included a chart called the Grand Unification Theory of Operating Systems (GUTS) which outlined how a single microkernel underlying common subsystems could provide a single unifying architecture for the world's many existing and future operating systems. It was initially based in a procedural programming model, not object-oriented. The design elements of this plan had already been implemented on IBM's RS/6000 platform via the System Object Model (SOM), a model which had already been delivered as integral to the OS/2 operating system.
Sometime later in 1991, as a result of the Apple/IBM business partnership, a small exploratory IBM team first visited the Taligent team, who demonstrated a relatively mature prototype operating system and programming model based entirely on Apple's Pink project from 1987. There, GUTS's goals were greatly impacted and expanded by exposure to these similar goals—especially advanced in the areas of aggressive object-orientation, and of software frameworks upon a microkernel. IBM's optimistic team saw the Pink platform as being the current state of the art of operating system architecture. IBM wanted to adopt Pink's more object-oriented programming model and framework-based system design, and add compatibility with legacy procedural programming along with the major concept of multiple personalities of operating systems, to create the ultimate possible GUTS model.
Through the historic Apple/IBM partnership, Apple's CEO John Sculley said that the already volume-shipping OS/2 and MacOS would become unified upon the common PowerPC hardware platform to "bring a renaissance to the industry".
In late 1991, a small team from Boca Raton and Austin began implementing the GUTS project, with the goal of proving the GUTS concept, by first converting the monolithic OS/2 2.1 system to the Mach microkernel, and yielding a demo. To gain shared access to key personnel currently working on the existing OS/2, they disguised the project as the Joint Design Task Force and brought "a significant number" of personnel from Boca, Austin (with LANs and performance), Raleigh (with SNA and other transport services), IBM Research (with operating systems and performance), and Rochester (with the 64-bit, object-oriented worldview from AS/400). Pleased with the robust, long-term mentality of the microkernel technology and with the progress of the project, the team produced a prototype in mid 1992. The initial internal-development prototypes ran on x86-based hardware and provided a BSD Unix derived personality and a DOS personality.
Demos and business reorganization
At Comdex in late 1992, the team flew in and assembled a private demonstration based on last-minute downloads to replace corrupted files and one hour of sleep. The presentation was so well received that the prototype was put on the trade show floor on Thursday, as the first public demonstration of the IBM Microkernel-based system running OS/2, DOS, 16-bit Windows, and UNIX applications. In 1992, IBM ordered Taligent to migrate the Taligent OS from its internally developed microkernel named Opus, onto the IBM Microkernel. Ostensibly, this would have allowed Taligent's operating system (implemented as a Workplace OS personality) to execute side-by-side with DOS and OS/2 operating system personalities.
In 1993, InfoWorld reported that Jim Cannavino "has gone around the company and developer support for a plan to merge all of the company's computing platforms—ES/9000, AS/400, RS/6000, and PS/2—around a single set of technologies, namely the PowerPC microprocessor, the Workplace OS operating system, and the Taligent object model, along with a series of open standards for cross-platform development, network interoperability, etc." On June 30, 1993, a presentation was given at the Boca Programming Center by Larry Loucks, IBM Fellow and VP of Software Architecture of the Personal Software Products (PSP) Division.
By 1993, IBM reportedly planned two packages of Workplace OS, based on personality dominance: one based on the OS/2 Workplace Shell and another based upon the UNIX Common Desktop Environment (CDE). IBM and Apple were speaking about the possibility of a Mac OS personality.
By January 1994, the IBM Power Personal Systems Division had still not yet begun testing its PowerPC hardware with any of its three intended launch operating systems: definitely AIX and Windows NT, and hopefully also Workplace OS. Software demonstrations showed limited personality support, with the dominant one being the OS/2 Workplace Shell desktop, and the DOS and UNIX personalities achieving only fullscreen text mode support with crude hotkey switching between the environments. Byte reported that the multiple personality support promised in Workplace OS's conceptual ambitions was more straightforward, foundational, and robust than that of the already-shipping Windows NT. The magazine said "IBM is pursuing multiple personalities, while Microsoft appears to be discarding them" while conceding that "it's easier to create a robust plan than a working operating system with robust implementations of multiple personalities".
In 1994, the industry was reportedly shifting away from monolithic development and even application suites, toward object-oriented, component-based, crossplatform, application frameworks.
By 1995, Workplace OS was becoming notable for its many and repeated launch delays, with IBM described as being inconsistent and "wishy washy" with dates. This left IBM's own PowerPC hardware products without a mainstream operating system, forcing the company to at least consider the rival Windows NT. In April 1994, Byte reported that under lead architect Paul Giangarra, IBM had staffed more than "400 people working to bring [Workplace OS] up on Power Personal hardware".
In May 1994, the RISC Systems software division publicly announced the company's first attempt to even study the feasibility of converting AIX into a Workplace OS personality, which the company had been publicly promising since the beginning. One IBM Research Fellow led a team of fewer than ten, to identify and address the problem, which was the fundamentally incompatible byte ordering between the big-endian AIX and the little-endian Workplace OS. This problem is endemic, because though the PowerPC CPU and Workplace OS can perform in either mode, endianness is a systemwide configuration set once at boot time only; and Workplace OS favors OS/2 which comes from the little-endian Intel x86 architecture. After seven months of silence on the issue, IBM announced in January 1995 that the intractable endianness problem had resulted in the total abandonment of the flagship plan for an AIX personality.
In late 1994, as Workplace OS approached its first beta version, IBM referred to the beta product as "OS/2 for the PowerPC". As the project's first deliverable product, this first beta was released to select developers on the Power Series 440 in December 1994. There was a second beta release in 1995. By 1995, IBM had shipped two different releases of an application sampler CD, for use with the beta OS releases.
Preview launch
In mid 1995, IBM officially named its planned initial Workplace OS release "OS/2 Warp Connect (PowerPC Edition)" with the code name "Falcon". In October 1995, IBM announced the upcoming first release, though still a developer preview. The announcement predicted it to have version 1.0 of the IBM Microkernel with the OS/2 personality and a new UNIX personality, on PowerPC. Having been part of the earliest demonstrations, the UNIX personality was now intended to be offered to customers as a holdover due to the nonexistence of a long-awaited AIX personality, but the UNIX personality was also abandoned prior to release.
This developer release is the first ever publication of Workplace OS, and of the IBM Microkernel (at version 1.0), which IBM's internal developers had been running privately on Intel and PowerPC hardware. The gold master was produced on December 15, 1995, with availability on January 5, 1996, only to existing Power Series hardware customers who paid $215 for a special product request through their IBM representative, who then relayed the request to the Austin research laboratory. The software essentially appears to the user as the visually identical and source-compatible PowerPC equivalent of the mainstream OS/2 3.0 for Intel. Packaged as two CDs with no box, its accompanying overview paper booklet calls it the "final edition" but it is still a very incomplete product intended only for developers. Its installer only supports two computer models, the IBM PC Power Series 830 and 850, which have PowerPC 604 CPUs and IDE drives. Contrary to the product's "Connect" name, the installed operating system has no networking support. However, full networking functionality is described within the installed documentation files, and in the related book IBM's Official OS/2 Warp Connect PowerPC Edition: Operating in the New Frontier (1995) — all of which the product's paper booklet warns the user to disregard. The kernel dumps debugging data to the serial console. The system hosts no compiler, so developers are required to cross-compile applications on the source-compatible OS/2 for Intel system, using MetaWare's High C compiler or VisualAge C++, and manually copy the files via removable media to run them.
With an officially concessionary attitude, IBM had no official plans for a general release packaged for OEMs or retail, beyond this developer preview available only via special order from the development lab. Upon its launch, Joe Stunkard, spokesman for IBM's Personal Systems Products division, said "When and if the Power market increases, we'll increase the operating system's presence as required." On January 26, 1996, an Internet forum statement was made by John Soyring, IBM's Vice President of Personal Software Products: "We are not planning additional releases of the OS/2 Warp family on the PowerPC platform during 1996 — as we just released in late December 1995 the OS/2 Warp (PowerPC Edition) product. ... We have just not announced future releases on the PowerPC platform. In no way should our announcement imply that we are backing away from the PowerPC."
Roadmap
On November 22, 1995, IBM's developer newsletter said, "Another focus of the 1996 product strategy will be the IBM Microkernel and microkernel-based versions of OS/2 Warp. Nearly 20 corporations, universities, and research institutes worldwide have licensed the microkernel, laying the foundation for a completely open microkernel standard." IBM planned a second feature-parity release for Intel and PowerPC in 1996. In 1996, a Workplace OS version was rumored to exist internally that also supported x86 and ARM processors. IBM reportedly tested OS/2 on the never-released x86-compatible PowerPC 615 CPU.
At this point, the several-year future roadmap of Workplace OS included IBM Microkernel 2.0 and was intended to subsume the fully converged future of the OS/2 platform starting after the future release of OS/2 version 4, including ports to Pentium, Pentium Pro, MIPS, ARM, and Alpha CPUs.
Discontinuation
The Workplace OS project was finally canceled in March 1996 due to myriad factors: inadequate performance; low acceptance of the PowerPC Reference Platform; poor quality of the PowerPC 620 launch; extensive cost overruns; lack of AIX, Windows, or OS/400 personalities; and the overall low customer demand. The only mainstream desktop operating system running on PowerPC was Windows NT, which also lacked supply and demand. Industry analysts said that "the industry may have passed by the PowerPC". In 1996, IBM also closed the Power Personal Division responsible for personal PowerPC systems. IBM stopped developing new operating systems, and instead committed heavily to Linux, Java, and some Windows. In 2012, IBM described Linux as the "universal platform" in a way that happens to coincide with many of the essential design objectives of GUTS.
Reception
Industrial reception
Reception was a mix of enthusiasm and skepticism, as the young IT industry was already constantly grappling with the second-system effect, and was now presented with Workplace OS and PowerPC hardware as the ultimate second-system duo to unify all preceding and future systems. On November 15, 1993, InfoWorld's concerns resembled the Osborne effect: "Now IBM needs to talk about this transition without also telling its customers to stop buying all the products it is already selling. Tough problem. Very little of the new platform that IBM is developing will be ready for mission-critical deployment until 1995 or 1996. So the company has to dance hard for two and maybe three years to keep already disaffected customers on board."
In 1994, an extensive analysis by Byte reported that the multiple personality concept in Workplace OS's beta design was more straightforward, foundational, and robust than that of the already-shipping Windows NT. The magazine said "IBM is pursuing multiple personalities, while Microsoft appears to be discarding them" while conceding that "it's easier to create a robust plan than a working operating system with robust implementations of multiple personalities".
Upon the January 1996 developer final release, InfoWorld relayed the industry's dismay that the preceding two years of delays had made the platform "too little, too late", "stillborn", and effectively immediately discontinued. An analyst was quoted, "The customer base would not accept OS/2 and the PowerPC at the same time" because by the time IBM would eventually ship a final retail package of OS/2 on PowerPC machines, "the power/price ratio of the PowerPC processor just wasn't good enough to make customers accept all of the other drawbacks" of migrating to a new operating system alone.
In 2013, Ars Technica retrospectively characterized the years of hype surrounding Workplace OS as supposedly being "the ultimate operating system, the OS to end all OSes ... It would run on every processor architecture under the sun, but it would mostly showcase the power of POWER. It would be all-singing and all-dancing."
Internal analysis
In January 1995, four years after the conception and one year before the cancellation of Workplace OS, IBM announced the results of a very late stage analysis of the project's initial assumptions. This concluded that it is impossible to unify the inherent disparity in endianness between different proposed personalities of legacy systems, resulting in the total abandonment of the flagship plan for an AIX personality.
In May 1997, one year after its cancellation, one of its architects reflected back on the intractable problems of the project's software design and the limits of available hardware.
Academic analysis
In September 1997, a case study of the history of the development of Workplace OS was published by the University of California with key details having been verified by IBM personnel. These researchers concluded that IBM had relied throughout the project's history upon multiple false assumptions and overly grandiose ambitions, and had failed to apprehend the inherent difficulty of implementing a kernel with multiple personalities. IBM considered the system mainly as its constituent components and not as a whole, in terms of system performance, system design, and corporate personnel organization. IBM had not properly researched and proven the concept of generalizing all these operating system personalities before starting the project, or at any responsible timeframe during it — especially its own flagship AIX. IBM assumed that all the resultant performance issues would be mitigated by eventual deployment upon PowerPC hardware. The Workplace OS product suffered the second-system effect, including feature creep, with thousands of global contributing engineers across many disparate business units nationwide. The Workplace OS project had spent four years and $2 billion (or 0.6% of IBM's revenue for that period), which the report described as "one of the most significant operating systems software investments of all time" and "one of the largest operating system failures in modern times".
See also
Taligent, sister project of Workplace OS
IBM Future Systems project, a previous grand unifying project
Copland, another second system prototype from Apple
64DD, Nintendo's ambitious 1990s platform known for extreme repeated lateness and commercial failure
Notes
References
Further reading
OS/2 PowerPC Toolkit, Developer Connection CD-ROMs. The first doc is a description of the OS/2 ABI on PowerPC32. The second is an API addendum, including a description of new 32-bit console APIs.
ARM operating systems
IBM operating systems
Mach (kernel)
Microkernel-based operating systems
OS/2
PowerPC operating systems
X86 operating systems
Microkernels |
26728804 | https://en.wikipedia.org/wiki/Public%20domain%20in%20the%20United%20States | Public domain in the United States | Works are in the public domain if they are not covered by intellectual property rights (such as copyright) at all, or if the intellectual property rights to the works have expired.
All works first published or released 96 or more years ago have lost their copyright protection, effective January 1 of the current year. In the same manner, each January 1 results in literature, movies and other works released 96 years earlier entering the public domain; this pattern continues until 2073. From 2073, copyrights in works by creators who died seven decades earlier will expire each year. Works that were published without a copyright notice before 1978 are also in the public domain, as are those published before 1989 if the copyright was not registered within five years of the date of publication, and those published before 1964 if the copyright was not renewed 28 years later.
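The rolling schedule described above reduces to simple year arithmetic. The following is a minimal Python sketch assuming only the two rules stated in this paragraph; the function names are illustrative, and the notice, registration and renewal exceptions are deliberately ignored.

```python
def pd_entry_year_for_published_work(publication_year: int) -> int:
    """January 1 of the returned year is when a work published (with notice
    and any required renewal) in publication_year enters the US public domain
    under the 95-year term for pre-1978 publications."""
    return publication_year + 96        # e.g. 1927 -> 2023, 1977 -> 2073

def pd_entry_year_life_plus_70(author_death_year: int) -> int:
    """For works governed by the life-plus-70-years rule."""
    return author_death_year + 71

assert pd_entry_year_for_published_work(1927) == 2023
assert pd_entry_year_for_published_work(1977) == 2073
```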
History
In the United States, copyright at the federal level began with the introduction of the Constitution in 1787. Prior to that several of the individual states had enacted copyright laws, the first being Connecticut in 1783. Creators of works created after the ratification of the Constitution could receive copyright, while works created before the Constitution went into effect remain in the public domain with respect to federal copyright.
Works additionally enter the public domain automatically when copyright has expired, though additional alterations to copyright laws since the original Constitution have extended the length of time for which a given copyright may be valid or can be renewed.
Every work first published prior to 1923 has been in the American public domain since 1998.
The United States Copyright Office is a federal agency tasked with maintaining copyright records.
Public domain works in the U.S.
Public domain literature
Public domain books within the United States include a number of notable titles, many of which are still commonly read and studied as part of the English-language "literary canon". Examples include:
Notes on the State of Virginia by Thomas Jefferson
"The Murders in the Rue Morgue" by Edgar Allan Poe
The Sketch Book of Geoffrey Crayon, Gent. by Washington Irving
The Scarlet Letter by Nathaniel Hawthorne
David Copperfield by Charles Dickens
Moby-Dick by Herman Melville
Uncle Tom's Cabin by Harriet Beecher Stowe
The Adventures of Tom Sawyer by Mark Twain
Mrs Dalloway by Virginia Woolf
The Invisible Man by H. G. Wells
Ulysses by James Joyce
The Great Gatsby by F. Scott Fitzgerald
Public domain images
Thousands of paintings and photographs are under public domain in the USA; these include photographs taken by Jacob Riis, Mathew Brady and Alfred Stieglitz.
Sound recordings under public domain
Sound recordings fixed in a tangible form before February 15, 1972, have been generally covered by common law or in some cases by anti-piracy statutes enacted in certain states, not by federal copyright law, and the anti-piracy statutes typically have no duration limit. As such, virtually all sound recordings, regardless of age, are presumed to still be under copyright protection in the United States. The 1971 Sound Recordings Act, effective 1972, and the 1976 Copyright Act, effective 1978, provide federal copyright for unpublished and published sound recordings fixed on or after February 15, 1972. Recordings fixed before February 15, 1972, are still covered, to varying degrees, by common law or state statutes. Any rights or remedies under state law for sound recordings fixed before February 15, 1972, are not annulled or limited by the 1976 Copyright Act until February 15, 2067. On that date, all sound recordings fixed before February 15, 1972, will go into the public domain in the United States. The extent to which state statutes provide protection is inconsistent and unclear.
The Music Modernization Act was passed on October 11, 2018. Under this act, recordings published before 1923 will expire on January 1, 2022; recordings published between 1923 and 1946 will be protected for 100 years after release; recordings published between 1947 and 1956 will be protected for 110 years; and all recordings published after 1956 that were fixed prior to February 15, 1972 will have their protection terminate on February 15, 2067.
For sound recordings fixed on or after February 15, 1972, the earliest year that any will go out of copyright and into the public domain in the U.S. will be 2043, and not in any substantial number until 2048. Sound recordings fixed and published on or after February 15, 1972, and before 1978, which did not carry a proper copyright notice on the recording or its cover entered the public domain on publication. From 1978 to March 1, 1989, the owners of the copyrights had up to five years to remedy this omission without losing the copyright. Since March 1, 1989, no copyright notice has been required.
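The Music Modernization Act tiers for pre-1972 recordings described above amount to a small lookup by release year. The following is a minimal Python sketch mirroring only the figures quoted in this section; the function name is illustrative.

```python
def pre_1972_recording_protection(release_year: int) -> str:
    """Protection tier for a US sound recording fixed before February 15, 1972,
    under the Music Modernization Act, as summarized above."""
    if release_year < 1923:
        return "entered the public domain on January 1, 2022"
    if release_year <= 1946:
        return f"protected for 100 years after release (through {release_year + 100})"
    if release_year <= 1956:
        return f"protected for 110 years after release (through {release_year + 110})"
    return "protected until February 15, 2067"   # 1957 through February 14, 1972

print(pre_1972_recording_protection(1935))   # protected for 100 years (through 2035)
print(pre_1972_recording_protection(1960))   # protected until February 15, 2067
```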
Public domain videos
Since the invention of video capture and animation techniques, thousands of films or videos have entered the public domain. Some examples include:
Television series
A number of television series, because they were released before 1964 and did not have their copyright renewed (such as almost all of the extant DuMont Television Network archive), were originally recorded before 1989 without a valid copyright notice, or were works of the United States government, have episodes in the public domain.
Public domain status of television episodes is made complicated by derivative work considerations and disputes over what constitutes "publication" for legal purposes (a network may claim a broadcast telecast once over a network but never syndicated may be an unpublished work); for example, 16 episodes of The Andy Griffith Show are, due to expired copyright, in the public domain by themselves, but in 2007, CBS was able to claim an indirect copyright on the episodes in question by claiming they were derivative works of earlier episodes still under copyright. Likewise, the 1964 special Rudolph the Red-Nosed Reindeer was published with an invalid copyright notice but uses copious amounts of copyrighted music and is loosely based on an original story that is still under copyright.
Public domain films
Hundreds of American live-action films are in the public domain because they were never copyrighted or because their copyrights have since expired. These movies can be viewed online at websites such as Internet Archive and can also be downloaded from websites like Public Domain Torrents.
Notable examples of such public domain films include:
Charade (1963)
Night of the Living Dead (1968)
The Little Shop of Horrors (1960)
Public domain animated films
Hundreds of American animated films are in the public domain, including:
Gulliver's Travels (1939 film)
Popeye the Sailor Meets Sindbad the Sailor
The Mummy Strikes (featuring Superman)
Pantry Panic
Sita Sings the Blues (2008 film)
Who's Who in the Zoo (1942, part of Looney Tunes series by Warner Bros.)
Public domain in copyrighted works in the United States
Congress has restored expired copyrights several times: "After World War I and after World War II, there were special amendments to the Copyright Act to permit for a limited time and under certain conditions the recapture of works that might have escaped into the public domain, principally by aliens of countries with which we had been at war." Works published with notice of copyright or registered in unpublished form in the years 1964 through 1977 automatically had their copyrights renewed for a second term. Works published with notice of copyright or registered in unpublished form on or after January 1, 1923, and prior to January 1, 1964, had to be renewed during the 28th year of their first term of copyright to maintain copyright for a full 95-year term. With the exception of maps, music, and movies, the vast majority of works published in the United States before 1964 were never renewed for a second copyright term.
Works "prepared by an officer or employee of the U.S. government as part of that person's official duties" are automatically in the public domain by law. Examples include military journalism, federal court opinions, congressional committee reports, and census data. However, works created by a contractor for the government are still subject to copyright. Even public domain documents may have their availability limited by laws limiting the spread of classified information. This rule does not apply to works of U.S. state & local governments, though the separate edict of government doctrine automatically places all state legislative enactments and court opinions, among other things, in the public domain.
The claim that all works older than the rolling public-domain cutoff are in the public domain is correct only for published works; unpublished works are under federal copyright for at least the life of the author plus 70 years. For a work for hire, the copyright in a work created before 1978, but not theretofore in the public domain or registered for copyright, subsists from January 1, 1978, and endures for a term of 95 years from the year of its first publication, or a term of 120 years from the year of its creation, whichever expires first. If the work was created before 1978 but first published 1978–2002, the federal copyright will not expire before 2047.
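The term rules in this paragraph can be stated as two small calculations. A minimal Python sketch, assuming only what is described above; the function names are illustrative.

```python
def work_for_hire_expiry_year(creation_year: int, publication_year: int) -> int:
    """95 years from first publication or 120 years from creation,
    whichever expires first."""
    return min(publication_year + 95, creation_year + 120)

def pre_1978_work_published_1978_to_2002_expiry_year(author_death_year: int) -> int:
    """Life plus 70 years, but not expiring before 2047 for pre-1978 works
    first published between 1978 and 2002."""
    return max(author_death_year + 70, 2047)
```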
Until the Berne Convention Implementation Act of 1988, the lack of a proper copyright notice would place an otherwise copyrightable work into the public domain, although for works published between January 1, 1978, and February 28, 1989, this could be prevented by registering the work with the Library of Congress within five years of publication. After March 1, 1989, an author's copyright in a work begins when it is fixed in a tangible form; neither publication nor registration is required, and a lack of a copyright notice does not place the work into the public domain.
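The notice rules in this paragraph depend only on when the work was published, as in this minimal Python sketch; the dates and outcomes are taken from the paragraph above, and the function name is illustrative.

```python
from datetime import date

def effect_of_missing_copyright_notice(published: date,
                                       registered_within_5_years: bool = False) -> str:
    """Consequence of publishing without a copyright notice in the US."""
    if published < date(1978, 1, 1):
        return "entered the public domain on publication"
    if published < date(1989, 3, 1):
        return ("copyright preserved by registration within five years"
                if registered_within_5_years else "entered the public domain")
    return "copyright attaches; no notice required"

print(effect_of_missing_copyright_notice(date(1983, 6, 1)))  # entered the public domain
```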
On January 1, 2019, published works from 1923 entered the Public Domain under the Copyright Term Extension Act. Works from 1923 that have been identified as entering the public domain in this period include The Murder on the Links, by Agatha Christie; The Great American Novel, by William Carlos Williams; the original silent version of the film The Ten Commandments, by Cecil B. DeMille; the hymn "Great Is Thy Faithfulness"; and the musical London Calling!, by Noel Coward. Whose Body? by Dorothy L. Sayers was published in the U.S. in 1923, but US copyright for this edition expired in 1951, when copyright was not renewed as required in the 28th year.
On January 1, 2020, published works from 1924 entered the public domain. Among the more notable entries into the public domain in 2020 was George Gershwin's "Rhapsody in Blue", a musical work that the Gershwin estate famously fought to keep in copyright.
Sound recordings
Very few sound recordings are in the public domain in the United States. Sound recordings fixed in a tangible form before February 15, 1972, have been generally covered by common law or in some cases by anti-piracy statutes enacted in certain states, not by federal copyright law, and the anti-piracy statutes typically have no duration limit. The 1971 Sound Recordings Act, effective 1972, and the 1976 Copyright Act, effective 1978, provide federal copyright for unpublished and published sound recordings fixed on or after February 15, 1972. Recordings fixed before February 15, 1972, are still covered, to varying degrees, by common law or state statutes. Any rights or remedies under state law for sound recordings fixed before February 15, 1972, are not annulled or limited by the 1976 Copyright Act until February 15, 2067. On that date, all sound recordings fixed before February 15, 1972, will go into the public domain in the United States.
For sound recordings fixed on or after February 15, 1972, the earliest year that any will go out of copyright and into the public domain in the U.S. will be 2043, and not in any substantial number until 2048. Sound recordings fixed and published on or after February 15, 1972, and before 1978, which did not carry a proper copyright notice on the recording or its cover entered the public domain on publication. From 1978 to March 1, 1989, the owners of the copyrights had up to five years to remedy this omission without losing the copyright. Since March 1, 1989, no copyright notice has been required.
In September 2018, the US Senate passed the Music Modernization Act, where sound recordings released before 1957 will enter the public domain 95 years after their first release. Recordings from 1957 to 1972 will enter the public domain in 2067.
Examples
In the United States, the images of Frank Capra's film It's a Wonderful Life (1946) entered into the public domain in 1974, because the copyright holder failed to file a renewal application with the Copyright Office during the 28th year after the film's release or publication. However, in 1993, Republic Pictures utilized the 1990 United States Supreme Court ruling in Stewart v. Abend to enforce its claim of copyright because the film was a derivative work of a short story that was under a separate, existing copyright, to which Republic owned the film adaptation rights, effectively regaining control of the work in its complete form. Currently, Paramount Pictures owns the film's copyrightable elements.
Charles Chaplin re-edited and scored his 1925 film The Gold Rush for reissue in 1942. Subsequently, the 1925 version fell into the public domain when Chaplin's company failed to renew its copyright in 1953, although the 1942 version is still under US copyright.
The distributor of the cult film Night of the Living Dead, after changing the film's title at the last moment before release in 1968, failed to include a proper copyright notice in the new titles, thereby immediately putting the film into the public domain after its release. This provision of US copyright law was revised with the United States Copyright Act of 1976, which allowed such negligence to be remedied within five years of publication.
A number of TV series in America have lapsed into the public domain, in whole or only in the case of certain episodes, giving rise to wide distribution of some shows on DVD. Series that have only certain episodes in the public domain include Petticoat Junction, The Beverly Hillbillies, The Dick Van Dyke Show, The Andy Griffith Show, The Lucy Show, Bonanza, Annie Oakley, and Decoy.
Laws may make some types of works and inventions ineligible for monopoly; such works immediately enter the public domain upon publication. Many kinds of mental creations, such as publicized baseball statistics, are never covered by copyright. However, any special layout of baseball statistics, or the like, would be covered by copyright law. For example, while a phone book is not covered by copyright law, any special method of laying out the information would be.
Copyright notice
In the past, a work would enter the public domain in the United States if it was released without a copyright notice. This was true prior to March 1, 1989, but is no longer the case. Any work (of certain, enumerated types) now receives copyright as soon as it is fixed in a tangible medium.
Computer Software Rental Amendments Act
There are several references to putting copyrighted work into the public domain. The first reference is actually in a statute passed by Congress, in the Computer Software Rental Amendments Act of 1990 (Public Law 101–650, 104 Stat. 5089 (1990)). Although most of the Act was codified into Title 17 of the United States Code, there is a provision relating to "public domain shareware" which was not codified there, and which is therefore often overlooked.
Sec. 805. Recordation of Shareware
(a) IN GENERAL— The Register of Copyrights is authorized, upon receipt of any document designated as pertaining to computer shareware and the fee prescribed by section 708 of title 17, United States Code, to record the document and return it with a certificate of recordation.
(b) MAINTENANCE OF RECORDS; PUBLICATION OF INFORMATION—The Register of Copyrights is authorized to maintain current, separate records relating to the recordation of documents under subsection (a), and to compile and publish at periodic intervals information relating to such recordations. Such publications shall be offered for sale to the public at prices based on the cost of reproduction and distribution.
(c) DEPOSIT OF COPIES IN LIBRARY OF CONGRESS—In the case of public domain computer shareware, at the election of the person recording a document under subsection (a), 2 complete copies of the best edition (as defined in section 101 of title 17, United States Code) of the computer shareware as embodied in machine-readable form may be deposited for the benefit of the Machine-Readable Collections Reading Room of the Library of Congress.
(d) REGULATIONS—The Register of Copyrights is authorized to establish regulations not inconsistent with law for the administration of the functions of the Register under this section. All regulations established by the Register are subject to the approval of the Librarian of Congress.
One purpose of this legislation appears to be to allow "public domain shareware" to be filed at the Library of Congress, presumably so that the shareware would be more widely disseminated. Therefore, one way to release computer software into the public domain might be to make the filing and pay the $20 fee. This could have the effect of "certifying" that the author intended to release the software into the public domain. It does not seem that registration is necessary to release the software into the public domain, because the law does not state that public domain status is conferred by registration. Judicial rulings support this conclusion; see below.
By comparing paragraph (a) and (c), one can see that Congress distinguishes "public domain" shareware as a special kind of shareware. Because this law was passed after the Berne Convention Implementation Act of 1988, Congress was well aware that newly created computer programs (two years worth, since the Berne Act was passed) would automatically have copyright attached. Therefore, one reasonable inference is that Congress intended that authors of shareware would have the power to release their programs into the public domain. This interpretation is followed by the Copyright Office in 37 C.F.R. § 201.26.
Berne Convention Implementation Act
The Berne Convention Implementation Act of 1988 states in section twelve that the Act "does not provide copyright protection for any work that is in the public domain." The congressional committee report explains that this means simply that the Act does not apply retroactively.
Although the only part of the act that does mention "public domain" does not speak to whether authors have the right to dedicate their work to the public domain, the remainder of the committee report does not say that they intended copyright to be an indestructible form of property. Rather the language speaks about getting rid of copyright formalities in order to comply with Berne (non-compliance had become a severe impediment in trade negotiations) and making registration and marking optional, but encouraged. A fair reading is that the Berne Act did not intend to take away author's right to dedicate works to the public domain, which they had (by default) under the 1976 Act.
Section 203 of the Copyright Act
Although there is support in the statutes for allowing work to be dedicated to the public domain, there cannot be an unlimited right to dedicate work to the public domain because of a quirk of U.S. copyright law which grants the author of a work the right to cancel "the exclusive or nonexclusive grant of a transfer or license of copyright or of any right under a copyright" thirty-five years later, unless the work was originally a work for hire.
Case law
Another form of support comes from the case Computer Associates Int'l v. Altai, 982 F.2d 693, which set the standard for determining copyright infringement of computer software. This case discusses the public domain.
(c) Elements Taken from the Public Domain
Closely related to the non-protectability of scenes a faire, is material found in the public domain. Such material is free for the taking and cannot be appropriated by a single author even though it is included in a copyrighted work. ... We see no reason to make an exception to this rule for elements of a computer program that have entered the public domain by virtue of freely accessible program exchanges and the like. See 3 Nimmer Section 13.03 [F] ; see also Brown Bag Software, slip op. at 3732 (affirming the district court’s finding that "[p]laintiffs may not claim copyright protection of an ... expression that is, if not standard, then commonplace in the computer software industry."). Thus, a court must also filter out this material from the allegedly infringed program before it makes the final inquiry in its substantial similarity analysis.
This decision holds that computer software may enter the public domain through "freely accessible program exchanges and the like," or by becoming "commonplace in the computer industry." Relying only on this decision, it is unclear whether an author can dedicate his work to the public domain simply by labeling it as such, or whether dedication to the public domain requires widespread dissemination.
This could make a distinction in a CyberPatrol-like case, where a software program is released, leading to litigation, and as part of a settlement the author assigns his copyright. If the author has the power to release his work into the public domain, there would be no way for the new owner to stop the circulation of the program. A court may look on an attempt to abuse the public domain in this way with disfavor, particularly if the program has not been widely disseminated. Either way, a fair reading is that an author may choose to release a computer program to the public domain if he can arrange for it to become popular and widely disseminated.
Future public domain works
Assuming no changes to U.S. copyright law, the following works will enter the public domain on January 1 of the indicated year.
2023 – Books, films and other works published in 1927
2024 – Books, films and other works published in 1928
2025 – Books, films and other works published in 1929
2026 – Books, films and other works published in 1930
2027 – Books, films and other works published in 1931
2028 – Books, films and other works published in 1932
2029 – Books, films and other works published in 1933
2030 – Books, films and other works published in 1934
2031 – Books, films and other works published in 1935
2032 – Books, films and other works published in 1936
2033 – Books, films and other works published in 1937
2034 – Books, films and other works published in 1938
2035 – Books, films and other works published in 1939
2036 – Books, films and other works published in 1940
2037 – Books, films and other works published in 1941
2038 – Books, films and other works published in 1942
2039 – Books, films and other works published in 1943
2040 – Books, films and other works published in 1944
2041 – Books, films and other works published in 1945
2042 – Books, films and other works published in 1946
2043 – Books, films and other works published in 1947
2044 – Books, films and other works published in 1948
2045 – Books, films and other works published in 1949
2046 – Books, films and other works published in 1950
2047 – Books, films and other works published in 1951
2048 – Books, films and other works published in 1952
2049 – Books, films and other works published in 1953
2050 – Books, films and other works published in 1954
2051 – Books, films and other works published in 1955
2052 – Books, films and other works published in 1956
2053 – Books, films and other works published in 1957
2054 – Books, films and other works published in 1958
2055 – Books, films and other works published in 1959
2056 – Books, films and other works published in 1960
2057 – Books, films and other works published in 1961
2058 – Books, films and other works published in 1962
2059 – Books, films and other works published in 1963
2060 – Books, films and other works published in 1964
2061 – Books, films and other works published in 1965
2062 – Books, films and other works published in 1966
2063 – Books, films and other works published in 1967
2064 – Books, films and other works published in 1968
2065 – Books, films and other works published in 1969
2066 – Books, films and other works published in 1970
2067 – Books, films and other works published in 1971
2068 – Books, films and other works published in 1972
2069 – Books, films and other works published in 1973
2070 – Books, films and other works published in 1974
2071 – Books, films and other works published in 1975
2072 – Books, films and other works published in 1976
In addition, on February 15, 2067, sound recordings fixed from January 1, 1957, to February 14, 1972, including the entire catalog of The Beatles, which are currently subject to protection under state copyright law, will enter the public domain, under the CLASSICS Act provisions of the Music Modernization Act of 2018.
See also
Capitol Records, Inc. v. Naxos of America, Inc.
Copyright status of work by the U.S. government
Copyright status of works by subnational governments of the United States
Copyright Term Extension Act
Eldred v. Ashcroft
Fair dealing
Fair use
List of countries' copyright length
List of films in the public domain in the United States
Public Domain Enhancement Act
Rule of the shorter term
Uruguay Round Agreements Act
References
External links
Harvard Library Copyright Advisor - Public domain status of state and local government records (clickable map)
Articles containing video clips
Public domain
United States copyright law |
1340830 | https://en.wikipedia.org/wiki/African%20Herbsman | African Herbsman | African Herbsman is a 1973 Trojan Records repackage of Bob Marley and the Wailers' 1971 album Soul Revolution Part II produced by Lee "Scratch" Perry, which had had a limited Jamaica-only release. African Herbsman appeared shortly after the band's major-label debut album Catch a Fire was released by Island Records.
The album differs from Soul Revolution Part II by adding five other tracks from the period. Four of the added tracks are non-album singles, including two of the group's self-productions, "Trenchtown Rock" and "Lively Up Yourself", as well as "400 Years" by Peter Tosh from the album Soul Rebels.
Several of the songs would later be re-recorded by Marley for his later albums; examples are "Lively Up Yourself" (on Natty Dread), "Duppy Conqueror", "Put It On" and "Small Axe" (on Burnin'), and "Sun Is Shining" (on Kaya).
Track listing
Original album (1973)
All tracks written by Bob Marley, unless noted.
Trojan Records reissue (2003)
References
Bob Marley and the Wailers compilation albums
1973 compilation albums
Albums produced by Lee "Scratch" Perry
Trojan Records albums |
32553446 | https://en.wikipedia.org/wiki/2010%20TK7 | 2010 TK7 | 2010 TK7 is a sub-kilometer near-Earth asteroid and the first Earth trojan discovered; it precedes Earth in its orbit around the Sun. Trojan objects are most easily conceived as orbiting at a Lagrangian point, a dynamically stable location (where the combined gravitational force acts through the Sun–Earth barycenter) 60 degrees ahead of or behind a massive orbiting body, in a type of 1:1 orbital resonance. In reality, they oscillate around such a point. Such objects had previously been observed in the orbits of Mars, Jupiter, Neptune, and the Saturnian moons Tethys and Dione.
2010 TK7 has a diameter of about 300 meters. Its path oscillates about the Sun–Earth L4 Lagrangian point (60 degrees ahead of Earth), shuttling between its closest approach to Earth and its closest approach to the L3 point (180 degrees from Earth).
The asteroid was discovered in October 2010 by the NEOWISE team of astronomers using NASA's Wide-field Infrared Survey Explorer (WISE).
Discovery
WISE, a space telescope launched into Earth orbit in December 2009, imaged in October 2010 while carrying out a program to scan the entire sky from January 2010 to February 2011. Spotting an asteroid sharing Earth's orbit is normally difficult from the ground, because their potential locations are generally in the daytime sky. After follow-up work at the University of Hawaii and the Canada–France–Hawaii Telescope, its orbit was evaluated on 21 May 2011 and the trojan character of its motion was published in July 2011. The orbital information was published in the journal Nature by Paul Wiegert of the University of Western Ontario, Martin Connors of Athabasca University and Christian Veillet, the executive director of the Canada–France–Hawaii Telescope.
Physical and orbital characteristics
2010 TK7 has an absolute magnitude (determinable because of its known location) of about 20.8. Based on an assumed albedo of 0.1, its estimated diameter is about 300 meters. No spectral data are yet available to shed light on its composition. The asteroid's surface gravity is only a tiny fraction of Earth's.
At the time of discovery, the asteroid orbited the Sun with a period of 365.389 days, close to Earth's 365.256 days. As long as it remains in 1:1 resonance with Earth, its average period over long time intervals will exactly equal that of Earth. On its eccentric (e = 0.191) orbit, 's distance from the Sun varies annually from 0.81 AU to 1.19 AU. It orbits in a plane inclined about 21 degrees to the plane of the ecliptic.
Trojans do not orbit right at Lagrangian points but oscillate in tadpole-shaped loops around them (as viewed in a corotating reference frame in which the planet and Lagrangian points are stationary); 2010 TK7 traverses its loop over a period of 395 years. Its loop is so elongated that it sometimes travels nearly to the opposite side of the Sun with respect to Earth. Its movements do not bring it any closer to Earth than 20 million kilometers (12.4 million miles), which is more than 50 times the distance to the Moon. The asteroid was at the near-Earth end of its tadpole loop in 2010–2011, which facilitated its discovery.
2010 TK7's orbit has a chaotic character, making long-range predictions difficult. Prior to 500 AD, it may have been oscillating about the L5 Lagrangian point (60 degrees behind Earth), before jumping to L4 via L3. Short-term unstable libration about L3 and transitions to horseshoe orbits are also possible. Newer calculations based on an improved orbit determination confirm these results.
Accessibility from Earth
Because Earth trojans share Earth's orbit and have little gravity of their own, less energy might be needed to reach them than the Moon, even though they are much more distant. However, is not an energetically attractive target for a space mission because of its orbital inclination: It moves so far above and below Earth's orbit that the required change in velocity for a spacecraft to match its trajectory coming from Earth's would be 9.4 km/s, whereas some other near-Earth asteroids require less than 4 km/s.
During the 5 December 2012 Earth close approach of , the asteroid had an apparent magnitude of about 21.
See also
, the second Earth trojan discovered
Claimed moons of Earth
Provisional designation in astronomy, the naming convention used for astronomical objects immediately following their discovery
Notes
References
External links
MPC Database entry for 2010 TK7
NASA animation of its motion
Alternate NASA animation of motion
Minor planet object articles (unnumbered)
Earth co-orbital asteroids
Earth trojans
20101001 |
7849021 | https://en.wikipedia.org/wiki/Keith%20Gottfried | Keith Gottfried | Keith Evan Gottfried (born 1966 in Brooklyn, New York) is an American lawyer, most notably nominated by President George W. Bush on July 29, 2005, and unanimously confirmed by the U.S. Senate on October 7, 2005, to serve as the 19th General Counsel for the United States Department of Housing and Urban Development (HUD).
Personal history
Early years
Gottfried was born in 1966 in Brooklyn, New York. At or around his first birthday, Gottfried's family moved from an apartment in the Sheepshead Bay, Brooklyn section of New York City to Howard Beach, Queens where they moved into a then newly built two-level ranch house in the Rockwood Park, Queens section of Howard Beach, Queens.
Gottfried is the son of Rosalie Gottfried and the late Bertram Gottfried. Gottfried's mother is the daughter of immigrants from Çanakkale, Turkey and a long-time resident of Howard Beach, Queens. She worked as a middle-school math teacher for the New York City Board of Education for over 40 years until her retirement in 2004; her last teaching assignment was at the Albert Shanker School of Visual and Performing Arts in Queens, New York. She also previously taught at the Horace Greeley Middle School JHS 10 in Long Island City, Queens. Prior to his death in 2011, Gottfried's father was a resident of Melbourne, Florida and a sales representative for various companies in the tobacco and cigar industries.
Gottfried spent much of his childhood growing up in the Howard Beach, Queens area of New York City. His mother continues to live in the same house in Howard Beach that Gottfried's parents first moved into in 1967.
Gottfried has commented on both his personal life and his upbringing in Queens in regards to his views on housing and urban development; this can be seen in his public comments to the media and in his prepared statement (while a Nominee to be General Counsel of the U.S. Department of Housing and Urban Development) given to the U.S. Senate during a hearing of the United States Senate Committee on Banking, Housing, and Urban Affairs on September 15, 2005.
Gottfried attended the following New York City public schools: P.S. 207 in Howard Beach, Queens; Robert H. Goddard Junior High School 202 in Ozone Park, Queens; and John Adams High School (Queens) in Ozone Park, Queens.
Gottfried graduated from John Adams High School (Queens) in 1983. At John Adams High School, Gottfried served as the Editor-in-Chief of the high school's newspaper, The Campus.
Family
In August 2004, Gottfried became engaged to Cindy Goldwasser, an attorney, who was then employed as a mediator with the Peninsula Conflict Resolution Center in San Mateo, California.
On April 1, 2005, Gottfried and Goldwasser were married in a civil ceremony held in San Jose, California. Two weeks later, on April 17, 2005, they were married in a religious ceremony held at the Tierra del Sol Resort, Spa & Country Club in Aruba.
Mrs. Gottfried, a graduate of the McGeorge School of Law of the University of the Pacific in Sacramento, California, was admitted to the practice of law in the State of California on December 1, 2004. Mrs. Gottfried, a naturalized U.S. Citizen, is a native of Bogota, Colombia, having emigrated with her family to the United States as a young child in 1978.
On March 14, 2006, the Gottfrieds announced the birth of their daughter, Sophie, at the University of Maryland Medical Center. On July 12, 2008, they announced the birth of their son, Benjamin, at the Shady Grove Medical Center in Rockville, Maryland. Gottfried and his family reside in the Washington, DC suburb of Rockville, Maryland.
Post-secondary education
Gottfried received his undergraduate education at the University of Pennsylvania in Philadelphia, where he graduated from its Wharton School in 1987 with a Bachelor of Science degree in Economics concentrated in Accounting.
Gottfried received his law degree cum laude from the Boston University School of Law in Boston where he was named an Edward F. Hennessey Distinguished Scholar of Law and a G. Joseph Tauro Scholar of Law. He also holds an M.B.A., with high honors, from the Boston University Graduate School of Management.
Experience as a certified public accountant
Arthur Young & Company
Following his graduation from the University of Pennsylvania's Wharton School, Gottfried practiced public accounting as an auditor with the Philadelphia office of the accounting firm Arthur Young & Company, and became a certified public accountant in 1989. At Arthur Young & Company, Gottfried advised clients in the hospitality, computer software, technology, manufacturing, retailing and defense sectors. He also worked on a number of the firm's Atlantic City casino audits, including Bally's Park Place and the former Bally's Grand. In addition to working out of the Philadelphia office, Gottfried worked from the Berwyn, Pennsylvania office of Arthur Young & Company, which served as the base of operations for the firm's Entrepreneurial Services Group.
Experience as a corporate/M&A lawyer
Skadden, Arps, Slate, Meagher & Flom LLP
From 1994 until 2000, Gottfried was a corporate attorney in the New York City office of the law firm Skadden, Arps, Slate, Meagher & Flom LLP where he practiced in the mergers & acquisitions group.
Gottfried left Skadden, Arps, Slate, Meagher & Flom LLP in June 2000 to join the firm's client Borland Software Corporation, then known as Inprise Corporation, as Senior Vice President, General Counsel and Corporate Secretary.
Blank Rome LLP
As a law student, Gottfried worked as a summer associate, during the summers of 1990 and 1991, in the Philadelphia office of the law firm Blank Rome LLP. Following graduation from Boston University School of Law in 1992, Gottfried joined the Philadelphia office of Blank Rome LLP as a corporate associate. He left Blank Rome LLP in 1994 to join the mergers and acquisitions practice of Skadden, Arps, Slate, Meagher & Flom LLP in New York City. More than a dozen years later, following his service in the administration of President George W. Bush, Gottfried rejoined Blank Rome LLP in March 2007 as a partner in its Washington, D.C. office where he practiced in the firm's public companies group.
Alston & Bird LLP
In April 2012, Gottfried left Blank Rome LLP to join the Washington, D.C. office of the law firm Alston & Bird LLP as a partner in its Corporate Transactions & Securities Group.
Morgan, Lewis & Bockius LLP
In June 2014, Gottfried left Alston & Bird LLP to join the Washington, D.C. office of the law firm Morgan, Lewis & Bockius LLP as a partner in its Business & Finance Group. Gottfried's practice is primarily concentrated on M&A transactions, defending companies against proxy contests and other shareholder activism campaigns, contested control transactions, corporate governance, SEC reporting issues, NYSE and NASDAQ compliance, and general corporate matters.
Bar and court admissions
Gottfried was admitted to the practice of law in 1992 and is admitted to the state bars of California, New York, New Jersey and the District of Columbia. Gottfried is also admitted to practice before a number of federal courts, including the Ninth Circuit, the Northern District of California, the Southern District of California, the Eastern District of New York, the Southern District of New York, the Eastern District of Pennsylvania, and the District of New Jersey.
Experience as a Silicon Valley executive
Borland Software Corporation
From 2000 to 2004, Gottfried served as a senior executive with Borland Software Corporation, a global provider of software development solutions located in Scotts Valley, California, formerly known as Inprise Corporation, having represented Borland (as its outside counsel) prior to joining Borland.
At Borland, Gottfried initially held the position of Senior Vice President, General Counsel, Chief Legal Officer and Corporate Secretary, which made him responsible for all aspects of the company's worldwide legal function. He was later named Borland's Senior Vice President of Corporate Affairs and Special Advisor to the CEO.
As Senior Vice President of Corporate Affairs and Special Advisor to the CEO, Gottfried was responsible for enhancing the company's relationships with industry leaders, potential strategic partners, focal sales accounts, competitors, domestic and foreign government leaders, lobbyists and trade associations. He was also responsible for exploring new revenue-generating initiatives that would leverage the company's existing assets.
Trade missions
During his time as a Borland executive, Gottfried also spearheaded Borland's exploration of new geographic markets (e.g., China, Malaysia, Thailand, India, Mexico, Morocco, Egypt, Poland, Czech Republic, Ghana and South Africa) and the identification of potential customer, partner, and strategic opportunities within those markets. Gottfried represented Borland on numerous trade missions across the world, including to China, Singapore, Malaysia, Thailand, Mexico, Morocco, Egypt, Ghana and South Africa. A number of these trade missions were led by the then U.S. Secretary of Commerce, Donald L. Evans.
Business Software Alliance
During his tenure as an executive at Borland Software Corporation, Gottfried served as Borland's representative on the Board of Directors of the Business Software Alliance.
As a member of the Board of Directors of the Business Software Alliance, Gottfried represented the U.S. software industry in numerous meetings domestically and abroad.
Among other issues, Gottfried was a strong advocate for the passage of free trade agreements. On June 10, 2003, Gottfried testified on behalf of the Business Software Alliance before the Subcommittee on Trade, Committee on Ways and Means of the U.S. House of Representatives to advocate for the implementation of U.S. bilateral free trade agreements with Chile and Singapore.
Political involvement
From 2003 to 2004, Gottfried was a significant fundraiser for the campaign to re-elect President George W. Bush to a second term.
Together with numerous other Silicon Valley executives and representatives from the venture capital community, Gottfried served as a co-host for numerous events to raise funds for the President's re-election campaign (San Francisco, CA - June 2003; Fresno, CA, October 2003).
In July 2003, in recognition of his efforts on behalf of the President's re-election campaign, he was among the supporters of President George W. Bush invited to visit with President Bush and the First Lady Laura Bush in Crawford, Texas.
During his time as a Silicon Valley executive, Gottfried, together with numerous other Silicon Valley executives and representatives from the venture capital community, was active in supporting other Republican candidates for national office. Gottfried was also among the California Republicans to lend early support to the campaign to elect former U.S. Treasurer Rosario Marin to the U.S. Senate representing California.
Administration of President George W. Bush
General Counsel of the U.S. Department of Housing and Urban Development
On July 29, 2005, Gottfried was nominated by President George W. Bush to serve as General Counsel for the United States Department of Housing and Urban Development (HUD).
On September 15, 2005, Gottfried appeared before the United States Senate Committee on Banking, Housing, and Urban Affairs and provided testimony in connection with the Committee's consideration of his nomination to be General Counsel of the HUD.
On October 7, 2005, Gottfried's nomination was unanimously confirmed by the U.S. Senate.
On December 7, 2005, at a ceremony held at HUD's headquarters in Washington, D.C., the Robert C. Weaver Federal Building, Gottfried was sworn in as the 19th General Counsel of HUD.
In addition to remarks by HUD Secretary Alphonso Jackson and HUD Deputy Secretary Roy Bernardi, the Honorable Nelson Diaz, himself a former General Counsel of the HUD in the administration of President Bill Clinton, was among the dignitaries providing remarks.
Service as General Counsel
As General Counsel of the HUD, Gottfried led a nationwide organization of approximately 700 employees, including close to 400 attorneys and 300 non-attorneys with headquarters in Washington, D.C., ten regional counsel offices and close to forty field counsel offices around the country. At the time, the Office of General Counsel of HUD had an annual budget of approximately $100 million.
Gottfried served as this cabinet agency's Chief Legal Officer and was the Senior Legal Advisor to the Secretary, Deputy Secretary and other agency principal staff in the department providing advice on all aspects of Federal laws, regulations and policies applicable to public and Indian housing, community development programs, mortgage insurance programs, complex mixed financing transactions for residential development and health care facilities, fair housing enforcement and urban development programs as well as federal laws, regulations and policies governing ethics, procurement, personnel management and labor relations.
As General Counsel of HUD, Gottfried was a member of the Federal Housing Administration's Mortgagee Review Board. He also served as the Chief Legal Officer for the Government National Mortgage Association (Ginnie Mae).
During his tenure as General Counsel of HUD, Gottfried was perhaps best known for his push for enhanced regulatory transparency, advocating that HUD adopt no-action and interpretative letter processes similar to those he had been familiar with as a securities lawyer practicing before the Securities and Exchange Commission.
During Gottfried's tenure as General Counsel, HUD announced the then largest settlement of an enforcement action in the history of the Federal Housing Administration (FHA).
Speeches given as General Counsel of HUD
Remarks at the Swearing-In Ceremony of the General Counsel (December 7, 2005—Washington, DC)
Remarks at the National Settlement Services Summit (June 14, 2006—Cleveland, Ohio)
Articles by Keith Gottfried
"100 Issues to Clarify with Your M&A Counsel," ACC Docket, May 2011
"Planning the Integration of an Acquired Company's Legal Department," ACC Docket, November 2011.
"Due Diligence & Your M&A Success Story," ACC Docket, September 2011.
"The Ten Elements of a Proxy Contest Settlement," ACC Docket, April 2008
Press Release on Keith Gottfried Joining Alston & Bird LLP
References
External links
Keith Gottfried's Bio on Alston & Bird Web Site
HUD Press Release on Keith Gottfried Being Named General Counsel
Keith Gottfried's Confirmation Hearing Testimony Before the U.S. Senate Committee on Banking, Housing and Urban Affairs in Connection with His Nomination by President George W. Bush to Become General Counsel of the U.S. Department of Housing and Urban Development
Living people
1966 births
Boston University School of Law alumni
Wharton School of the University of Pennsylvania alumni
United States Department of Housing and Urban Development officials
George W. Bush administration personnel
Boston University School of Management alumni
American accountants
California lawyers
New York (state) lawyers
New Jersey lawyers
Lawyers from Washington, D.C.
Pennsylvania lawyers
Corporate lawyers
Skadden, Arps, Slate, Meagher & Flom alumni
People from Sheepshead Bay, Brooklyn
People from Howard Beach, Queens
John Adams High School (Queens) alumni
Ernst & Young people |
3026234 | https://en.wikipedia.org/wiki/KateOS | KateOS | KateOS was a Linux distribution originally based on Slackware. It was designed for intermediate users. Its package management system used so-called TGZex (.tgz) packages, which, unlike Slackware packages, supported optional dependency tracking and internationalized descriptions, and were designed for ease of update. There were two native tools for package management: PKG and Updateos.
The last version released was KateOS III (3.6), also available as a Live CD, in 2007.
History
The KateOS project was founded at the end of 2003 by Damian Rakowski.
Kate Linux 1.0 Rabbit (series I)
The first version of the system was published on 9 October 2004. The system was based on Slackware 9.0 but, unlike Slackware, used the PAM authentication mechanism and possessed an enriched package set. Due to problems with the main server, which was only intermittently available, hardly anyone learned about the existence of Kate 1.0. After a move to another server, the project began to gradually acquire users. After some time, Kate 1.0.1 (a fix release including UpdatePack 1) and a Live version were published.
Kate Linux 2.0 Zyklon (series II)
Version 2.0 was published on 9 April 2005, and was no longer based on Slackware. It was a long-term edition, the base for further development. It was also the first edition using Linux 2.6.
On 6 May 2005 the name of the project was changed to KateOS.
On 22 May 2005 version 2.0.1 was published, providing a tool for managing and remotely updating TGZex packages. The tool, called Updateos, was written by Piotr Korzuszek.
On 23 June 2005 version 2.1 was published. Updateos could now install packages remotely.
On 12 August 2005, the first Live edition of series II was published. It had a more distinctive graphical design, used the squashfs technology (2 GB of data packed on just one CD) and unionfs. It detected the hardware and configured the X Server automatically.
On 13 October 2005 the last version of series II, 2.3, was published. Updateos gained new capabilities, and the system had better hardware autodetection with the Discover tool.
KateOS 3.0 Virgen (series III)
On 12 April 2006 the first snapshot of KateOS 3.0 was published.
On 9 July 2006 version 3.0.1 was published. The packaging system was completely rewritten, resulting in the PKG and Updateos2 tools and the libupdateos and libsmarttools libraries. The functionality of the TGZex packages was extended, this time to include dependency tracking and descriptions in many languages. The installation process was simplified to allow a full install in only 15 minutes. The system used udev, D-Bus and HAL to detect hardware and mount devices automatically.
On 4 August 2006 the first Live edition of series III was published. It was intended to demonstrate the possibilities of KateOS 3.0 and to be used as a data rescue system. The CD contained 2 GB of data, including the Xfce desktop environment and many office and multimedia applications. It detected and configured hardware automatically.
On 7 October 2006 version 3.1 was published. It contained fixes and the updated GNOME desktop environment. It was the first edition of GNOME adjusted especially for the KateOS system. It also included Update-notifier, a daemon whose system tray icon changed and blinked when new updates were discovered. It let the user choose packages to be updated and update them. It was based on the libupdateos library, and only supported KateOS packages and repositories.
On 21 December 2006 version 3.2 was published. Apart from fixes and updates, it included a new tool, KatePKG, a graphical package manager written in PHP with the PHP-GTK library, making KateOS the first system to include this library in the default distribution. It was designed to allow users to easily install, update, and remove packages from the system. It supported any number of repositories, including local ones (located on the user's hard drive). Also, the libsmarttools library had been optimized, resulting in up to a 60% speed boost in the applications using it (such as Updateos2). With this version KateOS switched its bootloader from LILO to GRUB, to make kernel updates easier.
On 17 September 2007 version 3.6 was released after eight months of development. This version brought several new and updated features to KateOS, such as software-driven suspend mode, improved internationalization support, and the addition of several new programs such as KateLAN and Realm to help make configuring the system more user friendly. The Live CD version of 3.6 was the first KateOS to provide an on-disc installer called Install Agent, allowing the user to install directly to their hard disk after trying the system live.
Additional information
All new KateOS releases were planned to be supported for around two years. Users were encouraged to update via the updateos command to newer versions of the distribution, although major version updates (series updates), e.g. II to III, were not recommended.
Damian Rakowski, the 'project initiator, leader, and 1st developer', stated that the project was named after a friend and because the name Kate is "simple, nice and everybody knows it."
References
External links
Preston St. Pierre (May 10, 2005) Review: Kate OS 2.0, Linux.com
Michael Larabel (February 12, 2007) KateOS 3.2: Installation Made Easy, michaellarabel.com
Slackware
Discontinued Linux distributions
Linux distributions |
1998313 | https://en.wikipedia.org/wiki/Terry%20Bollinger | Terry Bollinger | Terry Benton Bollinger (born February 6, 1955, Fredericktown, Missouri) is an American computer scientist who works at the MITRE Corporation. In 2003 he wrote an influential report for the U.S. Department of Defense (U.S. DoD) in which he showed that free and open source software (FOSS) had already become a vital part of the United States Department of Defense software infrastructure, and that banning or restricting its use would have had serious detrimental impacts on DoD security, research capabilities, operational capabilities, and long-term cost efficiency. His report ended a debate about whether FOSS should be banned from U.S. DoD systems, and in time helped lead to the current official U.S. DoD policy of treating FOSS and proprietary software as equals. The report is referenced on the DoD CIO web site and has been influential in promoting broader recognition of the importance of free and open source software in government circles. Bollinger is also known for his activity in the IEEE Computer Society, where he was an editor for IEEE Software for six years, wrote the founding charter for IEEE Security & Privacy Magazine, and received an IEEE Third Millennium Medal for lifetime contributions to IEEE. He has written about a wide range of software issues including effective development processes, cyber security, and distributed intelligence.
Life and work
Bollinger received bachelor's and master's degrees in computer science at the Missouri University of Science and Technology (S&T), from which he also received a Professional Degree in December 2009 for lifetime accomplishments. He has had a lifelong interest in multi-component (crowd) intelligence as an aspect of artificial intelligence, as well as a strong interest in the hard sciences, including the possible relevance of quantum theory to faster but fully classical, energy-efficient information processing in biological systems. His metaphors for understanding quantum entanglement and encryption have been quoted in the Russian technical press.
From 2004 to 2010, Bollinger was the chief technology analyst for the U.S. DoD Defense Venture Catalyst Initiative (DeVenCI), an effort created by the Secretary of Defense after the September 11, 2001 terrorist attacks. DeVenCI selects qualified applicants from leading venture capital firms to contribute voluntary time and expertise to finding emerging commercial companies and technologies that could be relevant to DoD technology needs.
Bollinger currently works full-time for the Office of Naval Research (ONR), the science and technology research arm of the U.S. Navy and Marine Corps, where he helps assess and support research into the science of autonomy, robotics, and artificial intelligence.
See also
Use of Free and Open Source Software (FOSS) in the U.S. Department of Defense
Publications
See DBLP Bibliography for Terry Bollinger
References
1955 births
Living people
Missouri University of Science and Technology alumni
Senior Members of the IEEE
American computer scientists
Artificial intelligence researchers
Cognitive scientists
Scientists from Missouri |
64295042 | https://en.wikipedia.org/wiki/Law%20%26%20Order%3A%20Organized%20Crime | Law & Order: Organized Crime | Law & Order: Organized Crime is an American crime-drama television series that premiered on April 1, 2021, on NBC. The seventh series in the Law & Order franchise and a spin off of Law & Order: Special Victims Unit, the series stars Christopher Meloni as Elliot Stabler, reprising his role from SVU. The show features a "single-arc" storyline that takes multiple episodes to resolve. In May 2021, the series was renewed for a second season which is set to consist of 24 episodes. The second season premiered on September 23, 2021.
Premise
The series centers on Law & Order: Special Victims Unit character Elliot Stabler, a veteran detective who returns to the NYPD following his wife's murder. Stabler joins the Organized Crime Task Force, led by Sergeant Ayanna Bell.
Cast and characters
Main
Christopher Meloni as Detective 1st Grade Elliot Stabler, a former Manhattan Special Victims Unit detective who returns to New York after retiring from the police department several years earlier. He joins a task force within the Organized Crime Control Bureau to find his wife's killers and becomes its second-in-command.
Danielle Moné Truitt as Sergeant Ayanna Bell, squad supervisor of the OCCB task force and Stabler's current partner.
Tamara Taylor as Prof. Angela "Angie" Wheatley, a math professor at Hudson University, ex-wife of Richard Wheatley, and a suspect in the hit ordered on Kathy Stabler.
Ainsley Seiger as Detective 3rd Grade Jet Slootmaekers, a former independent hacker who is recruited to the OCCB task force by Stabler's recommendation. She was requalified as an NYPD Officer to work with the OCCB task force.
Dylan McDermott as Richard Wheatley, son of notorious mobster Manfredi Sinatra, now a businessman and owner of an online pharmaceutical company who leads a second life as a crime boss, and was a suspect in the murder of Stabler's wife.
Recurring
Ben Chase as Detective 1st Grade Freddie Washburn (season 1), a detective from the Narcotics unit recruited to the OCCB task force, and Bell's former senior partner in Narcotics. He presumably leaves the OCCB after making a critical mistake that allows Richard Wheatley to try and kill Angela.
Michael Rivera as Detective 2nd Grade Diego Morales (season 1), a detective originally from the Gun Violence Suppression Division recruited to the OCCB task force. He is eventually uncovered as the mole for Richard Wheatley, and is killed by Bell in the season 1 finale during a last attempt to assassinate Angela.
Shauna Harley as Pilar Wheatley, Richard's current wife.
Nick Creegan as Richard "Richie" Wheatley Jr., Richard and Prof. Angela Wheatley's older son, who aspires to follow in the family business. He is eventually arrested alongside his father in a bust, and later puts a hit out on him in the season 1 finale after learning that he was responsible for the murder of his grandfather, Manfredi Sinatra.
Jaylin Fletcher as Ryan Wheatley (season 1), Richard's and Pilar's son.
Christina Marie Karis as Dana Wheatley, Richard and Prof. Angela Wheatley's only daughter who assists her father in his crimes including the robbery of several COVID-19 vaccines.
Ibrahim Renno as Izak Bekher (season 1), Richard's right-hand man who begins working for the NYPD to trap Wheatley. He is later killed at Richard Sr.'s command because he was the sole witness to Richard Jr.'s involvement in Gina's murder.
Charlotte Sullivan as Detective 3rd Grade Gina Cappelletti (season 1), an undercover detective assigned to the OCCB task force who has infiltrated a club run by the mafia to keep an eye on Richard. However, Gina is eventually caught by the Wheatleys and is subsequently executed by Richie.
Nicky Torchia as Elliot "Eli" Stabler Jr., Stabler's youngest son.
Autumn Mirassou as Maureen "Mo" Stabler, Stabler's eldest child.
Kaitlyn Davidson as Elizabeth "Lizzie" Stabler, Stabler's youngest daughter and Dickie's twin sister.
Keren Dukes as Denise Bullock, the wife of Ayanna Bell, who filed a lawsuit against the NYPD after her nephew was brutally beaten by police.
Diany Rodriguez as ADA Maria Delgado (season 1), an ADA who formerly worked with the task force.
Wendy Moniz as ADA Anne Frasier, the prosecutor on the Wheatley case.
Daniel Oreskes as Lieutenant Marv Moennig, the commanding officer of the OCCB task force. He steps down from the OCCB during season 2 and is succeeded by Bill Brewster as CO.
Nicholas Baroudi as Joey Raven (season 1), the owner of the Seven Knights club.
Steve Harris as Ellsworth Lee (season 1), Angela Wheatley's attorney.
Guillermo Díaz as Sergeant/Lieutenant William "Bill" Brewster (season 2), the sergeant of a Narcotics task force and was previously Ayanna Bell's boss before she was transferred to the Organized Crime Control Bureau. He is promoted to Lieutenant and takes over as commanding officer following Lieutenant Moennig's departure.
Mike Cannon as Detective 3rd Grade Carlos Maldonado (season 2), a detective formerly under Brewster's command, but now works under Bell's.
Rachel Lin as Detective 1st Grade Victoria Cho (season 2), a detective formerly under Brewster's command, but now works under Bell's.
Nona Parker-Johnson as Detective 3rd Grade Carmen "Nova" Riley (season 2), an undercover Narcotics detective working under Brewster's command to infiltrate the Marcy Killers.
Ron Cephas Jones as Congressman Leon Kilbride (season 2), a politician who fosters connections and seems to have one with the Wheatleys. He is also the mentor of Preston Webb, the leader of the Marcy Killers crime organization.
Vinnie Jones as Albi Briscu (season 2), an Eastern European gangster who is the last remaining member of his organization from the old country. He serves as Jon Kosta's underboss.
Lolita Davidovich as Flutura Briscu (season 2), the wife of Albanian mobster and gang leader, Albi Briscu. She was also a madam who trafficked Albanian women to America and forced them into the sex trade.
Mykelti Williamson as Preston Webb (season 2), a dangerous crime kingpin affiliated with Congressman Kilbride.
Dash Mihok as Reggie Bogdani (season 2), Albi Briscu's nephew who serves as Stabler's boss during his time undercover infiltrating the Kosta organization.
Michael Raymond-James as Jon Kosta (season 2), the founder and leader of the Kosta Organization.
Izabela Vidovic as Rita Lasku (season 2), a waitress who was trafficked by the Kosta Organization.
Caroline Lagerfelt as Agniezjka "Agnes" Bogdani (season 2), Reggie Bogdani's mother and Albi Briscu's sister.
Robin Lord Taylor as Sebastian "Constantine" McClane (season 2), a notorious hacker and high-security convict who escaped prison.
Gregg Henry as Edmund Ross (season 2), a businessman who was involved in a sex trafficking ring with the Kosta Organization.
Jack Kilmer as Louis Chinasky (season 2), the son of Eddie Wagner, Stabler's undercover persona.
Wesam Keesh as Adam "Malachi" Mintock (season 2), a hacker who created an app for the Kosta Organization, now forced to assist the NYPD in order to avoid prosecution.
Jennifer Beals as Cassandra Webb, the wife of Preston Webb (season 2)
Denis Leary as Frank Donnelly (season 2), a member of the NYPD who has a history with Stabler.
James Cromwell as Miles Darman (season 2), Stabler's neighbor, who was hired by Wheatley to charm Bernadette.
Crossover characters
Mariska Hargitay as Olivia Benson, Captain of the Manhattan Special Victims Unit, and Stabler's former partner.
Peter Scanavino as Dominick "Sonny" Carisi Jr., Manhattan Assistant District Attorney. (season 1)
Demore Barnes as Christian Garland, Deputy Chief of all NYPD Special Victims units. (season 1)
Allison Siko as Kathleen Stabler, Stabler's second eldest daughter.
Jeffrey Scaperrotta as Richard "Dickie" Stabler, Stabler's eldest son.
Ryan Buggle as Noah Porter-Benson, Olivia's son. (season 1)
Isabel Gillies as Kathy Stabler, Stabler's deceased wife (season 1)
Ellen Burstyn as Bernadette "Bernie" Stabler, Elliot Stabler's mother (season 2).
Ice-T as Odafin "Fin" Tutuola, Sergeant of the Manhattan Special Victims Unit, and former co-worker of Stabler. (season 2)
Raúl Esparza as Defense Attorney (former ADA) Rafael Barba. (season 2)
Episodes
Season 1 (2021)
Season 2 (2021–22)
Production
Development
On March 31, 2020, NBC gave a 13-episode order to a new crime drama starring Meloni as his character from Law & Order: Special Victims Unit, Elliot Stabler. Dick Wolf, Arthur W. Forney, and Peter Jankowski serve as the executive producers, with Matt Olmstead being looked at as showrunner and writer. The series followed Wolf's five-year deal with Universal Television, which serves as the series' production company along with Wolf Entertainment.
The series was originally planned to be set up in the twenty-first season finale of Law & Order: Special Victims Unit, with Stabler's wife and son returning. The episode would also have revealed the whereabouts of the Stabler family following Meloni's departure as the character in season twelve. When asked whether or not the storyline would instead happen in the twenty-second season premiere, Law & Order: Special Victims Unit showrunner Warren Leight said that "it's pretty clear that Elliot will be in the SVU season opener". Craig Gore was set to be a writer for the series, but was fired by Wolf on June 2, 2020, for controversial Facebook posts about looters and the curfew put in place in Los Angeles due to protests about the murder of George Floyd. Gore had listed himself as co-executive producer on the series on his Facebook profile, but Meloni announced Olmstead would be the showrunner for the series, not Gore. The same day, the series title was revealed to be Law & Order: Organized Crime. The first teaser for the series was released during the 30 Rock: A One-Time Special on July 17. In July, Meloni stated he had not yet seen a script, and the writers were still working on the story. By October, Olmstead had stepped down as showrunner, and he was later replaced by Ilene Chaiken in December. In February 2022, Chaiken was replaced as showrunner by Hannah Montana co-creator Barry O'Brien. On May 14, 2021, NBC renewed the series for a second season, which premiered on September 23, 2021.
Casting
During the production of the series, in July 2020, Meloni announced Mariska Hargitay would make a guest appearance as her character from Law & Order: Special Victims Unit, Olivia Benson. On January 27, 2021, Dylan McDermott had been cast in the series, with Tamara Taylor, Danielle Moné Truitt, Ainsley Seiger, Jaylin Fletcher, Charlotte Sullivan, Nick Creegan, and Ben Chase joining the following month. At the end of March, it was reported that Nicky Torchia, Michael Rivera, and Ibrahim Renno would appear in recurring roles. In March, it was revealed that some of the actors who played members of the Stabler family as far back as 1999, in episodes of Law & Order: Special Victims Unit, would appear in the new series, including Allison Siko as daughter Kathleen and Jeffrey Scaperrotta as son Dickie, while Isabel Gillies appeared as soon to be murdered wife Kathy in the Law & Order: Special Victims Unit episode that sees the Stablers return to New York, setting the scene for the new series. In August, Ron Cephas Jones, Vinnie Jones, Lolita Davidovich, Mykelti Williamson, Guillermo Díaz and Dash Mihok joined the cast in recurring roles for the second season. In early 2022, Jennifer Beals and Denis Leary joined the cast in recurring roles.
Filming
Like Law & Order: Special Victims Unit, the series is filmed on location in New York. Production was set to begin in August 2020, but it was announced in September that the series was the only one produced by Wolf Entertainment not to have been given a start date for production. The series later began production on January 27, 2021, during the COVID-19 pandemic, with Meloni and Hargitay sharing pictures on set. In the following months, production on the series was halted twice due to two positive COVID-19 tests; despite the halts, it was announced the series would still premiere on the same date.
Release
Broadcast
On June 16, 2020, it was announced the series would air on Thursdays on NBC at 10 p.m. Eastern Time, the former timeslot of Law & Order: Special Victims Unit, with the latter moving up an hour to 9 p.m. The series was at the time the only new series on NBC's fall lineup for the 2020–21 television season. In August 2020, the series was pushed back to 2021, and on February 4, 2021, it was announced the series would premiere on April 1, 2021, as part of a two-hour crossover with Law & Order: Special Victims Unit. The first season consists of eight episodes.
Streaming
The series is available on the streaming service, Peacock, with episodes being released on the service a week after they air on NBC for the service's free tier, and the next day for the paid tier.
International
In Canada, Organized Crime airs on Citytv in simulcast with NBC, unlike past U.S.-set Law & Order series which have all aired on CTV. Because of commitments to other Thursday night programming like Grey's Anatomy, CTV aired the direct lead-in episode of SVU out of simulcast in the 10:00 p.m. ET/PT timeslot, airing directly against the premiere of its spin-off on Citytv.
In Australia, Organized Crime airs on the Nine Network on Monday night timeslot starting April 12, 2021.
Ratings
Overall
Season 1
Season 2
Notes
References
External links
on Wolf Entertainment
on NBC
2021 American television series debuts
2020s American crime drama television series
2020s American police procedural television series
American television spin-offs
Fictional portrayals of the New York City Police Department
Law & Order (franchise)
Law & Order: Special Victims Unit
NBC original programming
Television productions suspended due to the COVID-19 pandemic
Television series about organized crime
Television series about widowhood
Television series created by Dick Wolf
Television series by Universal Television
Television shows filmed in New York City
Television shows set in Manhattan
Works about the American Mafia |
31279414 | https://en.wikipedia.org/wiki/Point%20Cloud%20Library | Point Cloud Library | The Point Cloud Library (PCL) is an open-source library of algorithms for point cloud processing tasks and 3D geometry processing, such as occur in three-dimensional computer vision. The library contains algorithms for filtering, feature estimation, surface reconstruction, 3D registration, model fitting, object recognition, and segmentation. Each module is implemented as a smaller library that can be compiled separately (for example, libpcl_filters, libpcl_features, libpcl_surface, ...). PCL has its own data format for storing point clouds - PCD (Point Cloud Data), but also allows datasets to be loaded and saved in many other formats. It is written in C++ and released under the BSD license.
These algorithms have been used, for example, for perception in robotics to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects in the world based on their geometric appearance, and create surfaces from point clouds and visualize them.
PCL requires several third-party libraries to function, which must be installed. Most mathematical operations are implemented using the Eigen library. The visualization module for 3D point clouds is based on VTK. Boost is used for shared pointers and the FLANN library for quick k-nearest neighbor search. Additional libraries such as Qhull, OpenNI, or Qt are optional and extend PCL with additional features.
PCL is cross-platform software that runs on the most commonly used operating systems: Linux, Windows, macOS and Android. The library is fully integrated with the Robot Operating System (ROS) and provides support for OpenMP and Intel Threading Building Blocks (TBB) libraries for multi-core parallelism.
The library is constantly updated and expanded, and its use in various industries is constantly growing. For example, PCL participated in the Google Summer of Code 2020 initiative with three projects. One was the extension of PCL for use with Python using Pybind11.
A large number of examples and tutorials are available on the PCL website, either as C++ source files or as tutorials with a detailed description and explanation of the individual steps.
Applications
Point cloud library is widely used in many different fields, here are some examples:
stitching 3D point clouds together
recognize 3D objects on their geometric appearance
filtering and smoothing out noisy data
create surfaces from point clouds
aligning a previously captured model of an object to some newly captured data
cluster recognition and 6DOF pose estimation
point cloud streaming to mobile devices with real-time visualization
3rd party libraries
PCL requires several third-party libraries for its installation, which are listed below. Some libraries are optional and extend PCL with additional features. The PCL library is built with the CMake build system (http://www.cmake.org/), version 3.5.0 or higher.
Mandatory libraries:
Boost (http://www.boost.org/) at least version 1.46.1. This set of C++ libraries is used for threading and mainly for shared pointers, so there is no need to re-copy data that is already in the system.
Eigen (http://eigen.tuxfamily.org/) is required at least in version 3.0.0. It is an open-source template library for linear algebra (matrices, vectors). Most mathematical operations (SSE optimized) in PCL are implemented with Eigen.
FLANN (http://www.cs.ubc.ca/research/flann/) in version 1.6.8 or higher. It is a library that performs a fast approximate nearest neighbor search in high dimensional spaces. In PCL, it is especially important in the kdtree module for fast k-nearest neighbor search operations.
VTK - Visualization ToolKit (http://www.vtk.org/) at least version 5.6.1. A multi-platform software system for 3D rendering of point clouds, modeling, image processing, and volume rendering. It is used in the visualization module for point cloud rendering and visualization.
Optional libraries that enable some additional features:
QHULL in version >= 2011.1 (http://www.qhull.org/) implements computation of the convex hull, Delaunay triangulation, Voronoi diagram, and so on. In PCL it is used for convex/concave hull decomposition on the surface.
OpenNI in version >= 1.1.0.25 (http://www.openni.org/) provides a single unified interface to depth sensors. It is used to retrieve point clouds from devices.
Qt version >= 4.6 (https://www.qt.io/) is a cross-platform C++ framework used for developing applications with a graphical user interface (GUI).
Googletest in version >= 1.6.0 (http://code.google.com/p/googletest/) is a C++ testing framework. In PCL, it is used to build test units.
PCD File Format
The PCD (Point Cloud Data) is a file format for storing 3D point cloud data. It was created because existing formats did not support some of the features provided by the PCL library. PCD is the primary data format in PCL, but the library also offers the ability to save and load data in other formats (such as PLY, IFS, VTK, STL, OBJ, X3D). However, these other formats do not have the flexibility and speed of PCD files. One of the advantages of PCD is the ability to store and process organized point cloud datasets. Another is the very fast saving and loading of points stored in binary form.
Versions
The PCD version is specified with the numbers 0.x (e.g., 0.5, 0.6, etc.) in the header of each file. The official version in 2020 is PCD 0.7 (PCD_V7). The main difference compared to version 0.6 is that a new header entry, VIEWPOINT, has been added. It specifies information about the orientation of the sensor relative to the dataset.
File structure
The PCD file is divided into two parts: a header and the data. The header has a precisely defined format and contains the necessary information about the point cloud data stored in the file. The header must be encoded in ASCII; the data, however, can be stored in ASCII or binary format. Because the ASCII format is human-readable, such files can be opened in standard software tools and easily edited.
In version 0.7 the version of the PCD file is at the beginning of the header, followed by the name, size, and type of each dimension of the stored data. It also shows the number of points (height*width) in the whole cloud and information about whether the point cloud dataset is organized or unorganized. The data type specifies the format in which the point cloud data are stored (ASCII or binary). The header is followed by a set of points. Each point can be stored on a separate line (unorganized point cloud) or the points can be stored in an image-like organized structure (organized point cloud). More detailed information about the header entries can be found in the documentation. Below is an example of a PCD file; the order of the header entries is important.
# .PCD v.7 - Point Cloud Data file format
VERSION .7
FIELDS x y z rgb
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 213
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 213
DATA ascii
0.93773 0.33763 0 4.2108e+06
0.90805 0.35641 0 4.2108e+06
0.81915 0.32 0 4.2108e+06
0.97192 0.278 0 4.2108e+06
...
...
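A file like the example above can be read and written with PCL's documented loadPCDFile and savePCDFileASCII helpers. The following is a minimal sketch; the file names "example.pcd" and "example_copy.pcd" are placeholders, not files shipped with the library.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <iostream>
int main ()
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGB>);
  // Load a PCD file (ASCII or binary); "example.pcd" is a placeholder name.
  if (pcl::io::loadPCDFile<pcl::PointXYZRGB> ("example.pcd", *cloud) == -1)
  {
    std::cerr << "Could not read example.pcd" << std::endl;
    return -1;
  }
  std::cout << "Loaded " << cloud->width * cloud->height << " points" << std::endl;
  // Write the same data back out in ASCII form.
  pcl::io::savePCDFileASCII ("example_copy.pcd", *cloud);
  return 0;
}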
History
The development of the Point Cloud Library started in March 2010 at Willow Garage. The project initially resided on a subdomain of Willow Garage and then moved to a new website, www.pointclouds.org, in March 2011. PCL's first official release (version 1.0) followed two months later, in May 2011.
Modules
PCL is divided into several smaller code libraries that can be compiled separately. Some of the most important modules and their functions are described below.
Filters
When scanning a 3D point cloud, errors and various deviations can occur, which cause noise in the data. This complicates the estimation of some local point cloud characteristics, such as surface normals. These inaccuracies can lead to significant errors in further processing, so it is advisable to remove them with a suitable filter. The pcl_filters library provides several useful filters for removing outliers and noise, and also for downsampling the data; a minimal sketch combining several of these filters follows the list below. Some of the filters use simple criteria to trim points, others use statistical analysis.
PassThrough filter - is used to filter points in one selected dimension. This means that it can cut off points that are not within the range specified by the user.
VoxelGrid filter - creates a grid of voxels in a point cloud. The points inside each voxel are then approximated by their centroid. This leads to downsampling (reduction of the number of points) in the point cloud data.
StatisticalOutlierRemoval filter - removes noise from a point cloud dataset using statistical analysis techniques applied to each point's neighborhood, trimming all points whose mean distances are outside a defined interval.
RadiusOutlierRemoval filter - removes those points that have less than the selected number of neighbors in the defined neighborhood.
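The sketch below chains a PassThrough, a VoxelGrid, and a StatisticalOutlierRemoval filter in the style of the PCL filtering tutorials; the helper function name and all numeric parameter values (field name, limits, leaf size, neighbor count) are illustrative assumptions rather than recommended settings.
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>
pcl::PointCloud<pcl::PointXYZ>::Ptr
filterCloud (const pcl::PointCloud<pcl::PointXYZ>::Ptr &input)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr passed (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr cleaned (new pcl::PointCloud<pcl::PointXYZ>);
  // Keep only points whose z coordinate lies between 0.0 and 1.0 m (illustrative limits).
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud (input);
  pass.setFilterFieldName ("z");
  pass.setFilterLimits (0.0, 1.0);
  pass.filter (*passed);
  // Downsample with a 1 cm voxel grid; each voxel is replaced by its centroid.
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud (passed);
  voxel.setLeafSize (0.01f, 0.01f, 0.01f);
  voxel.filter (*downsampled);
  // Remove statistical outliers based on the mean distance to 50 neighbors.
  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud (downsampled);
  sor.setMeanK (50);
  sor.setStddevMulThresh (1.0);
  sor.filter (*cleaned);
  return cleaned;
}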
Features
The pcl_features library contains algorithms and data structures for 3D feature estimation. The most commonly used local geometric features are the point normal and the underlying surface's estimated curvature. The features describe geometric patterns at a certain point based on a selected k-neighborhood (the data space selected around the point). The neighborhood can be selected by determining a fixed number of points in the closest area or by defining a radius of a sphere around the point.
One of the simplest methods for estimating the surface normal is an analysis of the eigenvectors and eigenvalues of a covariance matrix created from the neighborhood of the point. Point Feature Histogram (PFH) descriptors, or the faster FPFH, are an advanced feature representation and depend on normal estimates at each point. They generalize the mean curvature around the point using a multidimensional histogram of values. Some of the other descriptors in the library are the Viewpoint Feature Histogram (VFH) descriptor, NARF descriptors, moment of inertia and eccentricity based descriptors, Globally Aligned Spatial Distribution (GASD) descriptors, and more.
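A minimal sketch of radius-based normal estimation, following the pattern of the PCL normal-estimation tutorial, is shown below; the helper function name and the 3 cm search radius are illustrative assumptions.
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
pcl::PointCloud<pcl::Normal>::Ptr
estimateNormals (const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud)
{
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (cloud);
  // Use a kd-tree to collect the neighborhood of each point.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
  ne.setSearchMethod (tree);
  // Estimate each normal from all neighbors within a 3 cm radius (illustrative value).
  ne.setRadiusSearch (0.03);
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  ne.compute (*normals);
  return normals;
}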
Segmentation
The pcl_segmentation library contains algorithms for segmenting a point cloud into different clusters. Clustering is often used to divide the cloud into individual parts that can be processed further. Several classes are implemented that support various segmentation methods (a minimal plane-segmentation sketch follows the list):
Plane model segmentation - simple algorithm that finds all the points that support a plane model in the point cloud
Euclidean clustering - creates clusters of points based on Euclidean distance
Conditional Euclidean clustering - clustering points based on Euclidean distance and a user-defined condition
Region growing segmentation - merges the points that are close enough in terms of the smoothness constraint
Color-based region growing segmentation - same concept as the Region growing, but uses color instead of normals
Min-Cut based binary segmentation - divides the cloud into foreground and background sets of points
Difference of Normals Based Segmentation - scale based segmentation, finding points that belong within the scale parameters given
Supervoxel clustering - generates volumetric over-segmentations of 3D point cloud data
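As an example of the first method in the list, the sketch below fits a single plane model with RANSAC via pcl::SACSegmentation, following the PCL planar-segmentation tutorial; the 1 cm distance threshold and the function name are assumed values for illustration.
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/sample_consensus/method_types.h>
void
segmentPlane (const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud)
{
  pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
  pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients (true);      // refine the model from all inliers
  seg.setModelType (pcl::SACMODEL_PLANE);  // look for a plane
  seg.setMethodType (pcl::SAC_RANSAC);     // using the RANSAC estimator
  seg.setDistanceThreshold (0.01);         // 1 cm inlier threshold (assumed)
  seg.setInputCloud (cloud);
  seg.segment (*inliers, *coefficients);   // indices of plane points + plane equation
}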
Visualization
The pcl_visualization library is used to quickly and easily visualize 3D point cloud data. The package makes use of the VTK library for 3D rendering of clouds and range images. The library offers the following classes (a minimal PCLVisualizer sketch follows the list):
The CloudViewer class is for a simple point cloud visualization.
RangeImageVisualizer can be used to visualize a range image as a 3D point cloud or as a picture where the colors correspond to range values.
PCLVisualizer is a visualization class with several applications. It can display both simple point clouds and point clouds that contain color data. Unlike CloudViewer, it can also draw interesting point cloud information such as normals, principal curvatures and geometries. It can display multiple point clouds side-by-side so they can be easily compared, or draw various primitive shapes (e.g., cylinders, spheres, lines, polygons, etc.) either from sets of points or from parametric equations.
The PCLPlotter class is used for easily plotting graphs, from polynomial functions to histograms. It can process different types of plot input (coordinates, functions) and does auto-coloring.
PCLHistogramVisualizer is a histogram visualization module for 2D plots.
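A minimal PCLVisualizer sketch in the style of the PCL visualization tutorials; the window title, the cloud identifier string, and the function name are arbitrary choices, not values required by the library.
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
void
showCloud (const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud)
{
  // Create a visualizer window and add the cloud under an arbitrary id.
  pcl::visualization::PCLVisualizer viewer ("cloud viewer");
  viewer.setBackgroundColor (0.0, 0.0, 0.0);
  viewer.addPointCloud<pcl::PointXYZ> (cloud, "sample cloud");
  viewer.addCoordinateSystem (1.0);
  // Keep rendering until the user closes the window.
  while (!viewer.wasStopped ())
    viewer.spinOnce (100);
}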
Registration
Registration is the problem of aligning various point cloud datasets acquired from different views into a single point cloud model. The pcl_registration library implements a number of point cloud registration algorithms for both organized and unorganized datasets. The task is to identify corresponding points between the datasets and find a transformation that minimizes their distance.
The Iterative Closest Point (ICP) algorithm minimizes the distances between the points of two point clouds. It can be used to determine whether one point cloud is just a rigid transformation of another. The Normal Distributions Transform (NDT) is a registration algorithm that can be used to determine a rigid transformation between two point clouds that have over 100,000 points.
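A minimal ICP sketch following the PCL ICP tutorial; it assumes two overlapping clouds of the same scene, uses the library's default convergence criteria, and the function name is illustrative.
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <iostream>
void
alignClouds (const pcl::PointCloud<pcl::PointXYZ>::Ptr &source,
             const pcl::PointCloud<pcl::PointXYZ>::Ptr &target)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource (source);
  icp.setInputTarget (target);
  // The aligned copy of the source cloud is written here.
  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align (aligned);
  std::cout << "Converged: " << icp.hasConverged ()
            << ", fitness score: " << icp.getFitnessScore () << std::endl;
  std::cout << icp.getFinalTransformation () << std::endl;  // 4x4 rigid transform
}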
Sample Consensus
The sample_consensus library holds SAmple Consensus (SAC) methods like RANSAC and models to detect specific objects in point clouds. Some of the models implemented in this library include plane models, which are often used to detect interior surfaces such as walls and floors. Other models include lines, 2D and 3D circles, spheres, cylinders, cones, a model for determining a line parallel with a given axis, a model for determining a plane perpendicular to a user-specified axis, a plane parallel to a user-specified axis, etc. These can be used to detect objects with common geometric structures (e.g., fitting a cylinder model to a mug).
Robust sample consensus estimators available in the library (a minimal RANSAC sketch follows the list):
SAC_RANSAC - RANdom SAmple Consensus
SAC_LMEDS - Least Median of Squares
SAC_MSAC - M-Estimator SAmple Consensus
SAC_RRANSAC - Randomized RANSAC
SAC_RMSAC - Randomized MSAC
SAC_MLESAC - Maximum Likelihood Estimation SAmple Consensus
SAC_PROSAC - PROgressive SAmple Consensus
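The sketch below uses the first estimator in the list, RandomSampleConsensus, together with a plane model, as in the PCL random-sample-consensus tutorial; the distance threshold and the function name are illustrative assumptions.
#include <pcl/point_types.h>
#include <pcl/sample_consensus/ransac.h>
#include <pcl/sample_consensus/sac_model_plane.h>
#include <vector>
std::vector<int>
planeInliers (const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud)
{
  // Wrap the cloud in a plane model and hand it to the RANSAC estimator.
  pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr
    model (new pcl::SampleConsensusModelPlane<pcl::PointXYZ> (cloud));
  pcl::RandomSampleConsensus<pcl::PointXYZ> ransac (model);
  ransac.setDistanceThreshold (0.01);  // 1 cm inlier threshold (assumed)
  ransac.computeModel ();
  std::vector<int> inliers;
  ransac.getInliers (inliers);         // indices of points supporting the plane
  return inliers;
}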
Surface
Several algorithms for surface reconstruction of 3D point clouds are implemented in the pcl_surface library. There are several ways to reconstruct a surface. One of the most commonly used is meshing, and the PCL library has two algorithms: a very fast triangulation of the original points and a slower meshing approach that also smooths and fills holes. If the cloud is noisy, it is advisable to smooth the surface using one of the implemented algorithms.
The Moving Least Squares (MLS) surface reconstruction method is a resampling algorithm that can reconstruct missing parts of a surface. Thanks to higher order polynomial interpolations between surrounding data points, MLS can correct and smooth out small errors caused by scanning.
Greedy Projection Triangulation implements an algorithm for fast surface triangulation on an unordered PointCloud with normals. The result is a triangle mesh that is created by projecting the local neighborhood of a point along the normal of the point. It works best if the surface is locally smooth and there are smooth transitions between areas with different point densities. Many parameters can be set that are taken into account when connecting points (how many neighbors are searched, the maximum distance for a point, minimum and maximum angle of a triangle).
The library also implements functions for creating a concave or convex hull polygon for a plane model, Grid projection surface reconstruction algorithm, marching cubes, ear clipping triangulation algorithm, Poisson surface reconstruction algorithm, etc.
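As a sketch of the smoothing step (the 3 cm search radius is an assumed, data-dependent parameter), Moving Least Squares can be applied like this:

```cpp
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>

// Smooths a noisy cloud with Moving Least Squares and also estimates normals,
// producing input suitable for a later triangulation step.
pcl::PointCloud<pcl::PointNormal> smoothWithMLS(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
  mls.setInputCloud(cloud);
  mls.setComputeNormals(true);  // also output per-point surface normals
  mls.setPolynomialOrder(2);    // order of the local polynomial fit
  mls.setSearchMethod(tree);
  mls.setSearchRadius(0.03);    // neighbourhood radius (scene dependent)

  pcl::PointCloud<pcl::PointNormal> smoothed;
  mls.process(smoothed);        // resampled, smoothed output cloud
  return smoothed;
}
```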
I/O
The pcl_io library allows point clouds to be loaded from and saved to files, as well as captured from various devices. It includes functions for concatenating the points of two different point clouds with the same type and number of fields. The library can also concatenate the fields (e.g., dimensions) of two different point clouds with the same number of points.
Starting with PCL 1.0 the library offers a new generic grabber interface that provides easy access to different devices and file formats. The first devices supported for data collection were OpenNI compatible cameras (tested with Primesense Reference Design, Microsoft Kinect and Asus Xtion Pro cameras). As of PCL 1.7, point cloud data can be also obtained from the Velodyne High Definition LiDAR (HDL) system, which produces 360 degree point clouds. PCL supports both the original HDL-64e and HDL-32e. There is also a new driver for Dinast Cameras (tested with IPA-1110, Cyclopes II and IPA-1002 ng T-Less NG). PCL 1.8 brings support for IDS-Imaging Ensenso cameras, DepthSense cameras (e.g. Creative Senz3D, DepthSense DS325), and davidSDK scanners.
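A minimal load/save round trip with the PCD format (file names are placeholders) looks like this:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);

  // "input.pcd" is a placeholder; loadPCDFile returns -1 if the file cannot be read.
  if (pcl::io::loadPCDFile<pcl::PointXYZ>("input.pcd", *cloud) == -1)
    return -1;

  // ... filter, register or otherwise process the cloud here ...

  // Write the (possibly modified) cloud back out as an ASCII PCD file.
  pcl::io::savePCDFileASCII("output.pcd", *cloud);
  return 0;
}
```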
KdTree
The pcl_kdtree library provides the kd-tree data structure for organizing a set of points in a space with k dimensions. It is used to find the K nearest neighbors (via FLANN) of a specific point or location.
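A short sketch of a K-nearest-neighbour query with the FLANN-backed kd-tree:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>

// Returns the indices of the k points in "cloud" closest to "query".
std::vector<int> kNearest(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                          const pcl::PointXYZ& query, int k)
{
  pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
  kdtree.setInputCloud(cloud);           // build the kd-tree over the cloud

  std::vector<int> indices(k);
  std::vector<float> sqr_distances(k);   // squared distances to the neighbours
  kdtree.nearestKSearch(query, k, indices, sqr_distances);
  return indices;
}
```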
Octree
The pcl_octree library implements the octree hierarchical tree data structure for point cloud data. The library provides nearest neighbor search algorithms such as “Neighbors within Voxel Search”, “K Nearest Neighbor Search” and “Neighbors within Radius Search”. There are also several octree types that differ in the properties of their leaf nodes: a leaf node can hold a single point, a list of point indices, or no point information at all. The library can also be used to detect spatial changes between multiple unorganized point clouds by recursively comparing their octree structures.
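A radius search with the octree can be sketched as follows; the leaf resolution (voxel edge length) is an assumed parameter chosen per dataset:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/octree/octree_search.h>

// Finds all points within "radius" of "query" using an octree with the given
// leaf resolution (the voxel edge length, in the cloud's units).
std::vector<int> radiusNeighbors(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                                 const pcl::PointXYZ& query,
                                 float radius, float resolution)
{
  pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(resolution);
  octree.setInputCloud(cloud);
  octree.addPointsFromInputCloud();   // build the octree from the input cloud

  std::vector<int> indices;
  std::vector<float> sqr_distances;
  octree.radiusSearch(query, radius, indices, sqr_distances);
  return indices;
}
```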
Search
The pcl_search library implements methods for searching for nearest neighbors using different data structures found in other modules, such as the kd-tree, the octree, or specialized searches for organized datasets.
Range Image
The range_image library contains two classes for representing and working with range images whose pixel values represent a distance from the sensor. The range image can be converted to a point cloud if the sensor position is specified or the borders can be extracted from it.
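Converting a cloud to a range image can be sketched as below; the half-degree angular resolution and the sensor pose at the origin are illustrative assumptions.

```cpp
#include <pcl/point_types.h>
#include <pcl/common/angles.h>
#include <pcl/range_image/range_image.h>

// Builds a spherical range image from a point cloud, assuming the sensor sits
// at the origin of the cloud's coordinate frame.
pcl::RangeImage toRangeImage(const pcl::PointCloud<pcl::PointXYZ>& cloud)
{
  const float angular_resolution = pcl::deg2rad(0.5f);             // illustrative value
  const Eigen::Affine3f sensor_pose = Eigen::Affine3f::Identity(); // sensor at origin

  pcl::RangeImage range_image;
  range_image.createFromPointCloud(cloud, angular_resolution,
                                   pcl::deg2rad(360.0f),      // field of view (width)
                                   pcl::deg2rad(180.0f),      // field of view (height)
                                   sensor_pose,
                                   pcl::RangeImage::CAMERA_FRAME,
                                   0.0f,                      // noise level
                                   0.0f,                      // minimum range
                                   1);                        // border size
  return range_image;
}
```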
Keypoints
The pcl_keypoints library contains implementations of point cloud keypoint detection algorithms (AGAST corner point detector, Harris detector, BRISK detector, etc.).
Common
The pcl_common library contains the core point cloud data structures and the types for point representation, surface normals, RGB color values, etc. It also implements useful methods for computing distances, means and covariances, geometric transformations, and more. The common library is mainly used by other PCL modules.
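Two of the library's commonly used helpers, centroid computation and rigid transformation, can be combined in a short sketch (the one-unit shift is arbitrary):

```cpp
#include <pcl/point_types.h>
#include <pcl/common/centroid.h>
#include <pcl/common/transforms.h>

// Computes the centroid of a cloud and then applies a rigid transformation.
void centroidAndShift(const pcl::PointCloud<pcl::PointXYZ>& cloud)
{
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(cloud, centroid);      // mean x, y, z (4th component is 1)

  Eigen::Affine3f transform = Eigen::Affine3f::Identity();
  transform.translation() << 1.0f, 0.0f, 0.0f;  // shift the cloud one unit along x

  pcl::PointCloud<pcl::PointXYZ> shifted;
  pcl::transformPointCloud(cloud, shifted, transform);
}
```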
References
External links
Point Cloud Library
Point Cloud Library (PCL) Users
Point Cloud Library (PCL) Developers
GitHub Repository
Tutorials
3D graphics software
Free 3D graphics software
Free software programmed in C++
Software using the BSD license
Point (geometry) |
34153572 | https://en.wikipedia.org/wiki/ThinkCentre%20Edge | ThinkCentre Edge | The ThinkCentre Edge is a series of desktop computers from Lenovo, designed primarily for home offices and small businesses. The product series features desktops in both tower and All-in-One form factors, designed to save up to 70% desk space as compared to traditional tower desktop PCs.
The ThinkCentre Edge desktop series represents the first time the 'Edge' brand has been used for any Lenovo product outside of the ThinkPad product line. The first desktop in the series was the Edge 91z AIO, announced on May 16, 2011.
Design
According to Paul Scaini, the Segment Manager for the ThinkCentre product line, the ThinkCentre Edge desktops were the result of a large amount of time spent refining the overall product appearance. Scaini described the Edge 91z, with its Infinity Glass design, as the epitome of that effort.
Scaini wrote that the Edge AIO desktops had the same serviceability and mounting features as the ThinkCentre M Series AIOs. They used second-generation Intel Core i desktop CPUs and Lenovo Enhanced Experience 2.0.
2014
ThinkCentre Edge 73
Up to 4th generation Intel Core i7 processor
Up to Windows 8 64-bit
Up to 2TB HDD / Up to 128GB SSD
Up to 16GB memory
2013
ThinkCentre Edge 72
2011
Four desktops in the ThinkCentre Edge series were launched in 2011. These were:
ThinkCentre Edge 91z (AIO)
ThinkCentre Edge 91 (tower)
ThinkCentre Edge 71z (AIO)
ThinkCentre Edge 71 (tower)
ThinkCentre Edge 91z
The ThinkCentre Edge 91z AIO was summarized by PCMag as being a reasonably priced, powerful desktop with the capacity to "give the iMac a run for its place in a design studio." The Edge 91z was 2.5 inches thick, and was described as being "less flashy" than the IdeaCentre desktops and AIOs.
The Edge 91z was described as being simple with a seamless front and two removable feet that could be detached from the AIO. The space between the two feet was open and meant to store a keyboard. This was described as being different from the IdeaCentre B520 desktop, in which the keyboard storage was blocked by the speaker bar.
The display on offer with the Edge 91z AIO was a true 1080p HD panel. A drawback of the screen was its reflective glass front.
An optional DVD writer was available on the Edge 91z AIO. The AIO was reported by PCMag to be lacking in "would be nice features" such as eSATA ports, USB 3.0 ports, and HDMI-in.
The software on the Edge 91z AIO was reported to be a useful set, with the system free of unnecessary software. Software preinstalled on the AIO included Lenovo Rescue and Recovery and utilities for the DVD burner and the web camera.
In comparison with the Apple iMac 21.5 inch (Thunderbolt) the ThinkCentre Edge 91z was reported to be similar in terms of performance and specifications. The iMac was described as being slightly faster on 3D-related and everyday tasks, while the ThinkCentre Edge 91z was slightly faster on multimedia benchmarks. Both desktops contained similarly sized widescreens, AMD graphics, Intel Core processors, 1TB storage space, and could be used as external monitors for laptops.
Detailed specifications of the Edge 91z AIO are:
Processor: up to Intel Core i7-2600S (2.8 GHz, 8MB L3 cache)
Chipset: Intel B65
Display: 21.5-inch LED 16:9 widescreen
Operating system: Windows 7 Professional/Ultimate
Storage:
Up to 1TB 7200 RPM SATA
Up to 80GB mSATA (solid state drive)
Graphics:
Intel HD Graphics
AMD Radeon HD6650A
RAM: Up to 8GB PC3-10600 1333 MHz
Dimensions (W x D x H): 21.46in x 16.31in x 3.18in
Weight: 8.5 kg
Additional features: VGA, Microphone, Gigabit Ethernet, Bluetooth, 6 USB ports, 2 SATA connectors, Kensington lock
The ThinkCentre Edge 91z had preloaded ThinkVantage Technologies software, including Rescue and Recovery, Power Manager, and System Update.
ThinkCentre Edge 91
The ThinkCentre Edge 91 desktop was announced on October 20, 2011, by Lenovo. Unlike the Edge 91z, this desktop was not an AIO, but a traditional tower desktop in a small form factor. The Edge 91 desktop was described as being a desktop designed for a "premium computing experience".
Detailed specifications of the Edge 91 desktop are as follows:
Processor: Up to Intel Core i7-2400 (3.4 GHz, 8MB cache)
Chipset: Intel B65
RAM: Up to 16GB PC3-10600 1333 MHz DDR3
Audio: Integrated
Operating System:
Windows 7 Home Basic/Home Premium/Professional (in 32bit and 64bit variants)
DOS
Linux - Red Hat, Novell, SUSE, Ubuntu
Storage:
Up to 1TB 7200RPM/10000RPM SATA
Up to 160GB SSD
Graphics:
Integrated
ATI Radeon HD5450
ATI Radeon HD6450
ThinkCentre Edge 71z
The ThinkCentre Edge 71z AIO was announced on October 20, 2011, by Lenovo. Technology News described the Edge 71z as having a "glossy black shell" and an "impressive appearance". The AIO offered a 20 inch display, up to Intel Core i5 processors, up to 1TB hard disk drives or a 160GB solid state drive, an optional Display Port and support for dual independent display.
As with the Edge 91z, the ThinkCentre Edge 71z AIO offered a suite of ThinkVantage Technologies including Rescue and Recovery, Power Manager, and System Update.
Detailed specifications of the Edge 71z are as follows:
Processor: Up to Intel Core i5-2500S (2.7 GHz)
Chipset: Intel H61
RAM: Up to 8GB PC3-10600 1333 MHz DDR3
Audio: Integrated
Operating System:
Windows 7 Home Basic/Home Premium/Professional
DOS
Linux - Red Hat, Novell SUSE, Ubuntu
Storage: Up to 1TB 7200 RPM SATA
Graphics:
ATI Onega integrated graphics
Optional 1GB discrete graphics
ThinkCentre Edge 71
The ThinkCentre Edge 71 desktop, like the Edge 91, was a tower desktop available in a small form factor. It was announced with the Edge 91 and Edge 71z on October 20, 2011.
Detailed specifications of the Edge 71 desktop are as follows:
Processor: Up to Intel Core i5-2390T (2.7 GHz, 3MB cache)
Chipset: Intel H61
RAM: Up to 8GB PC3-10600 1333 MHz DDR3
Audio: Integrated
Operating System: up to Windows 7 Professional
Graphics:
Intel HD Graphics 2000
ATI Radeon HD5450
ATI Radeon HD 6450
References
Lenovo |
2524790 | https://en.wikipedia.org/wiki/Bloody%20Christmas%20%281951%29 | Bloody Christmas (1951) | Bloody Christmas was the name given to the severe beating of seven civilians by members of the Los Angeles Police Department (LAPD) on December 25, 1951. The attacks, which left five Mexican American and two white young men with broken bones and ruptured organs, were properly investigated only after lobbying from the Mexican American community. The internal inquiry by Los Angeles Chief of Police William H. Parker resulted in eight police officers being indicted for the assaults, 54 being transferred, and 39 suspended.
The event was fictionalized in the 1990 novel L.A. Confidential by James Ellroy, which was made into a film of the same name in 1997.
Background
In 1938, following the corruption scandal that led to the recall of Mayor Frank Shaw, reforms of the LAPD were begun under his successor, Fletcher Bowron. Throughout the 1940s, this led to the firing of corrupt officers, the raising of entrance standards, the creation of rigorous training programs, and better pay for officers.
Police autonomy was already guaranteed in Section 202 of the Los Angeles city charter since 1934. It stated that officers had a vested right to their jobs and could not be removed or disciplined without due process, which meant that authority regarding departmental discipline belonged to a board of review made up of police officers.
Despite the reforms, the LAPD had faced continually deteriorating relations with the Mexican American community since the 1943 Zoot Suit Riots during the Second World War. After William H. Parker was appointed chief of police in 1950, reforms continued to improve policing in Los Angeles by placing emphasis on police professionalism. Parker believed better personnel would lead to more "police autonomy," allowing the LAPD to focus on its "war-on-crime approach" to policing and to handle its own internal discipline. Proponents believed a professional police department should be free from political influence and control.
Despite previous police chiefs trying to improve relationships by quelling public fears of Mexican American crime, community leaders hoped Parker's appointment would really lead to an improvement in the situation. Problems occurred because of anti-Mexican sentiment among LAPD officers, many of whom believed Mexican Americans were generally delinquent and violent. That racial profiling led to numerous violent encounters between the police and Mexican Americans because each side expected the other to use force.
Officers assaulted
On Christmas Eve 1951, LAPD officers Julius Trojanowski and Nelson Brownson responded to a report that minors were drinking alcohol at the Showboat Bar on Riverside Drive. On arrival, they found seven men inside: Daniel Rodela, Elias Rodela, Jack Wilson, William Wilson, Raymond Marquez, Manuel Hernandez, and Eddie Nora. Even though the men had identification proving they were legally old enough to drink alcohol, the officers told them to leave. When they refused to go, the officers used force, and a fight broke out in the parking lot. Both police officers were injured; one received a black eye, the other a cut that required stitches.
Seven hours after the fight, LAPD officers arrested all the men at their own homes. Six were taken straight to the Los Angeles Central City Jail. However, the seventh, Daniel Rodela, was dragged to a squad car by his hair and driven to the city's Elysian Park, where he was savagely beaten by several police officers. Rodela suffered multiple facial fractures; he required two blood transfusions because of the extent of his injuries.
Prisoners beaten
On Christmas morning, a large number of police officers attending a departmental Christmas party were getting drunk, in violation of the LAPD's policy on alcohol. When they became aware of a rumor that Trojanowski had lost an eye in the fight, the drunken officers decided to avenge their fellow policeman.
The six prisoners were taken from their cells in the Central City Jail and lined up. As many as 50 officers then participated in a beating that lasted for 95 minutes. All the prisoners received major injuries including punctured organs and broken facial bones. At least 100 people knew of or witnessed the beatings.
Cover-up
Senior LAPD management kept the attack on the prisoners out of the mainstream news for almost three months. Media coverage ignored the beatings on Christmas Day and focused on the brawl the night before. The initial headline of the Los Angeles Times on the incident was "Officers Beaten in Bar Brawl; Seven Men Jailed". However, as Mexican Americans pushed for a focus on police brutality and more reports of violence flooded in, the media began to turn against the LAPD, running stories condemning police tactics and even suggesting the amendment of Section 202 of the Los Angeles city charter.
In March 1952, six of the seven men were charged with battery and disturbing the peace. The prosecution argued that the fight started when the officers asked Jack Wilson to leave the bar peacefully. The defendants testified that the fight began when Officer Trojanowski began hitting Wilson on the head with a blackjack. Judge Joseph L. Call also allowed them to describe how they were beaten after being arrested. The jury found the defendants guilty of two counts of battery and one of disturbing the peace. However, after the verdict was delivered, Judge Call reprimanded the police force for its brutality, calling for an independent investigation of the assault.
Internal investigation
Chief Parker's response to this criticism was defensive. The police department's "war-on-crime" policy had given it an "us versus them" mentality. Parker used the argument that the public had to support the police force to prevent anarchy and lawlessness, saying that any criticism against the LAPD damaged the police’s ability to enforce the law. He even suggested that criminals were alleging police brutality to get him fired so the L.A. underworld could re-establish its illegal activities.
However, as the internal investigation into the beatings progressed, more complaints from other incidents were reported by the media, forcing Parker to act. Eventually a 204-page internal report was compiled by the LAPD. Although it included interviews with more than 400 witnesses, many members of LAPD had tried to impede the investigation through perjury or vague testimony. The report was also contradictory because it revealed that several police officers witnessed the beatings but concluded that "none of the prisoners was physically abused in the manner alleged."
Criminal indictments
The report led to grand jury hearings against the LAPD. Throughout the proceedings, the victims gave vivid accounts of their beatings, but the officers' testimonies were vague and contradictory as none could remember seeing the prisoners being beaten or remember who was taking part. Officers who had previously given detailed information to internal affairs investigators could remember very little in court.
The hearings resulted in eight officers being indicted for assault. The grand jury also issued a report that criticized the LAPD's senior officers for allowing the situation to get out of control and reminded the police department that it functioned "for the benefit of the public and not as a fraternal organization for the benefit of fellow officers."
The eight indicted officers were tried between July and November 1952. Five of them were convicted, but only one received a sentence of more than a year in prison. A further 54 officers were transferred, and 39 were temporarily suspended without pay.
See also
Rodney King
References
1951 in Los Angeles
Hispanic and Latino American history
History of Los Angeles
Los Angeles Police Department
1951 crimes in the United States
Police brutality in the United States
December 1951 events
Racially motivated violence against Hispanic and Latino Americans |
224672 | https://en.wikipedia.org/wiki/Dianne%20Feinstein | Dianne Feinstein | Dianne Goldman Berman Feinstein ( ; born Dianne Emiel Goldman; June 22, 1933) is an American politician who serves as the senior United States senator from California, a seat she has held since 1992. A member of the Democratic Party, she was mayor of San Francisco from 1978 to 1988.
Born in San Francisco, Feinstein graduated from Stanford University in 1955. In the 1960s, she worked in local government in San Francisco. Feinstein was elected to the San Francisco Board of Supervisors in 1969. She served as the board's first female president in 1978, during which time the assassinations of Mayor George Moscone and City Supervisor Harvey Milk by Dan White drew national attention. Feinstein succeeded Moscone as mayor and became the first woman to serve in that position. During her tenure, she led the renovation of the city's cable car system and oversaw the 1984 Democratic National Convention. Despite a failed recall attempt in 1983, Feinstein was a very popular mayor and was named the most effective mayor in the country by City & State in 1987.
After losing a race for governor in 1990, Feinstein won a 1992 special election to the U.S. Senate. The special election was triggered by the resignation of Pete Wilson, who defeated her in the 1990 gubernatorial election. Despite being elected on the same ballot as her peer Barbara Boxer, Feinstein became California's first female U.S. senator, as she was elected in a special election and sworn in before Boxer. She became California's senior senator a few weeks later in 1993 when Alan Cranston retired. Feinstein has been reelected five times and in the 2012 election received 7.86 million votes – the most popular votes in any U.S. Senate election in history.
Feinstein authored the 1994 Federal Assault Weapons Ban, which expired in 2004. She introduced a new assault weapons bill in 2013 that failed to pass. Feinstein is the first woman to have chaired the Senate Rules Committee and the Senate Intelligence Committee, and the first woman to have presided over a U.S. presidential inauguration. She was the ranking member of the Senate Judiciary Committee from 2017 to 2021 and had chaired the International Narcotics Control Caucus from 2009 to 2015.
Feinstein is the oldest sitting U.S. senator. In March 2021, Feinstein became the longest-serving U.S. senator from California, surpassing Hiram Johnson. Upon Barbara Mikulski's retirement in January 2017, Feinstein became the longest-tenured female senator currently serving; should she serve through November 5, 2022, Feinstein will surpass Mikulski's record as the longest-tenured female senator. In January 2021, Feinstein filed the initial Federal Election Commission paperwork needed to seek reelection in 2024, when she will be 91. Feinstein's staff later clarified that this was due to election law technicalities, and did not indicate her intentions in 2024.
Early life and education
Feinstein was born Dianne Emiel Goldman in San Francisco to Leon Goldman, a surgeon, and his wife Betty (née Rosenburg), a former model. Her paternal grandparents were Jewish immigrants from Poland. Her maternal grandparents, the Rosenburgs, were from Saint Petersburg, Russia. While they were of German-Jewish ancestry, they practiced the Russian Orthodox (Christian) faith, as was required for Jews in Saint Petersburg. Christianity was passed down to Feinstein's mother, who insisted on her transferral from a Jewish day school to a prestigious local Catholic school, but Feinstein lists her religion as Judaism. She graduated from Convent of the Sacred Heart High School in 1951 and from Stanford University in 1955 with a Bachelor of Arts in history.
Early political career
Feinstein was a fellow at the Coro Foundation in San Francisco from 1955 to 1956. Governor Pat Brown appointed her to the California Women's Parole Board in 1960. She served on the board until 1966.
San Francisco Board of Supervisors
Feinstein was elected to the San Francisco Board of Supervisors in 1969. She remained on the board for nine years.
During her tenure on the Board of Supervisors, she unsuccessfully ran for mayor of San Francisco twice, in 1971 against Mayor Joseph Alioto, and in 1975, when she lost the contest for a runoff slot (against George Moscone) by one percentage point to Supervisor John Barbagelata.
Because of her position, Feinstein became a target of the New World Liberation Front, an anti-capitalist terrorist group that carried out bombings in California in the 1970s. In 1976 the NWLF placed a bomb on the windowsill of her home that failed to explode. The group later shot out the windows of a beach house she owned.
Feinstein was elected president of the San Francisco Board of Supervisors in 1978 with initial opposition from Quentin L. Kopp.
Mayor of San Francisco
On November 27, 1978, Mayor George Moscone and Supervisor Harvey Milk were assassinated by former supervisor Dan White. Feinstein became acting mayor as she was president of the Board of Supervisors. Supervisors John Molinari, Ella Hill Hutch, Ron Pelosi, Robert Gonzales, and Gordon Lau endorsed Feinstein for an appointment as mayor by the Board of Supervisors. Gonzales initially ran to be appointed by the Board of Supervisors as mayor, but dropped out. The Board of Supervisors voted six to two to appoint Feinstein as mayor. She was inaugurated by Chief Justice Rose Bird of the Supreme Court of California on December 4, 1978, becoming San Francisco's first female mayor. Molinari was selected to replace Feinstein as president of the Board of Supervisors by a vote of eight to two.
One of Feinstein's first challenges as mayor was the state of the San Francisco cable car system, which was shut down for emergency repairs in 1979; an engineering study concluded that it needed comprehensive rebuilding at a cost of $60 million. Feinstein helped win federal funding for the bulk of the work. The system closed for rebuilding in 1982 and it was completed just in time for the 1984 Democratic National Convention. Feinstein also oversaw policies to increase the number of high-rise buildings in San Francisco.
Feinstein was seen as a relatively moderate Democrat in one of the country's most liberal cities. As a supervisor, she was considered part of the centrist bloc that included White and generally opposed Moscone. As mayor, Feinstein angered the city's large gay community by vetoing domestic partner legislation in 1982. In the 1980 presidential election, while a majority of Bay Area Democrats continued to support Senator Ted Kennedy's primary challenge to President Jimmy Carter even after it was clear Kennedy could not win, Feinstein strongly supported the Carter–Mondale ticket. She was given a high-profile speaking role on the opening night of the August Democratic National Convention, urging delegates to reject the Kennedy delegates' proposal to "open" the convention, thereby allowing delegates to ignore their states' popular vote, a proposal that was soundly defeated.
In the run-up to the 1984 Democratic National Convention, there was considerable media and public speculation that Mondale might pick Feinstein as his running mate. He chose Geraldine Ferraro instead. Also in 1984, Feinstein proposed banning handguns in San Francisco, and became subject to a recall attempt organized by the White Panther Party. She won the recall election and finished her second term as mayor on January 8, 1988.
Feinstein revealed sensitive details about the hunt for serial killer Richard Ramirez at a 1985 press conference, antagonizing detectives by publicizing details of his crimes known only to law enforcement, and thus jeopardizing their investigation.
City and State magazine named Feinstein the nation's "Most Effective Mayor" in 1987. She served on the Trilateral Commission during the 1980s.
Gubernatorial election
Feinstein made an unsuccessful bid for governor of California in 1990. She won the Democratic Party's nomination, but lost the general election to Republican Senator Pete Wilson, who resigned from the Senate to assume the governorship. In 1992, Feinstein was fined $190,000 for failure to properly report campaign contributions and expenditures in that campaign.
U.S. Senate career
Elections
Feinstein won the November 3, 1992, special election to fill the Senate seat vacated a year earlier when Wilson resigned to take office as governor. In the primary, she had defeated California State Controller Gray Davis.
The special election was held at the same time as the general election for U.S. president and other offices. Barbara Boxer was elected at the same time to the Senate seat being vacated by Alan Cranston. Because Feinstein was elected to an unexpired term, she became a senator as soon as the election was certified in November, while Boxer did not take office until the expiration of Cranston's term in January; thus Feinstein became California's senior senator, even though she was elected at the same time as Boxer and Boxer had previous congressional service. Feinstein also became the first female Jewish senator in the United States, though Boxer is also Jewish. Feinstein and Boxer were also the first female pair of U.S. senators to represent any state at the same time. Feinstein was reelected in 1994, 2000, 2006, 2012, and 2018. In 2012, she set the record for the most popular votes in any U.S. Senate election in history, with 7.75 million, making her the first Senate candidate to get 7 million votes in an election. The record was previously held by Boxer, who received 6.96 million votes in her 2004 reelection; and before that by Feinstein in 2000 and 1992, when she became the first Democrat to get more than 5 million votes in a Senate race.
In October 2017, Feinstein declared her intention to run for reelection in 2018. She lost the endorsement of the California Democratic Party's executive board, which opted to support State Senator Kevin de León, but finished first in the state's "jungle primary" and was reelected in the November 6 general election.
Feinstein is the oldest sitting U.S. senator. On March 28, 2021, Feinstein became the longest-serving U.S. senator from California, surpassing Hiram Johnson. Upon Barbara Mikulski's retirement in January 2017, Feinstein became the longest-tenured female U.S. senator currently serving. Should she serve through November 5, 2022, Feinstein will become the longest-serving woman in U.S. Senate history.
In January 2021, Feinstein filed the initial Federal Election Commission paperwork needed to seek reelection in 2024, when she will be 91.
Committee assignments
Feinstein is the first and only woman to have chaired the Senate Rules Committee (2007–09) and the Select Committee on Intelligence (2009–15).
Committee on Appropriations
Subcommittee on Agriculture, Rural Development, Food and Drug Administration, and Related Agencies
Subcommittee on Commerce, Justice, Science, and Related Agencies
Subcommittee on Defense
Subcommittee on Energy and Water Development (Ranking Member, 116th Congress; chair, 117th Congress)
Subcommittee on Interior, Environment, and Related Agencies
Subcommittee on Transportation, Housing and Urban Development, and Related Agencies
Committee on the Judiciary (Ranking Member, 115th and 116th Congresses)
Subcommittee on Crime and Terrorism
Subcommittee on Immigration, Border Security, and Refugees
Subcommittee on Privacy, Technology and the Law
Subcommittee on Human Rights and the Law (Chair, 117th Congress)
Committee on Rules and Administration (Chair, 110th Congress)
Select Committee on Intelligence (Chair, 111th, 112th, 113th Congresses)
Caucus memberships
Afterschool Caucuses
Congressional NextGen 9-1-1 Caucus
Senate New Democrat Coalition (defunct)
Political positions
According to the Los Angeles Times, Feinstein emphasized her centrism when she first ran for statewide offices in the 1990s, at a time when California was more conservative. Over time, she has moved left of center as California became one of the most Democratic states, although she has never joined the ranks of progressives, and was once a member of the Senate's moderate, now-defunct Senate New Democrat Coalition.
Military
While delivering the commencement address at Stanford Stadium on June 13, 1994, Feinstein said:
In 2017, she criticized the banning of transgender enlistments in the military under the Trump administration.
Feinstein voted for Trump's $675 billion defense budget bill for FY 2019.
National security
Feinstein voted for the extension of the Patriot Act and the FISA provisions in 2012.
Health care
Feinstein has supported the Affordable Care Act, repeatedly voting to defeat initiatives aimed against it. She has voted to regulate tobacco as a drug; expand the Children's Health Insurance Program; override the president's veto of adding 2 to 4 million children to SCHIP eligibility; increase Medicaid rebate for producing generic drugs; negotiate bulk purchases for Medicare prescription drugs; allow re-importation of prescription drugs from Canada; allow patients to sue HMOs and collect punitive damages; cover prescription drugs under Medicare, and means-test Medicare. She has voted against the Paul Ryan Budget's Medicare choice, tax and spending cuts; and allowing tribal Indians to opt out of federal healthcare. Feinstein's Congressional voting record was rated as 88% by the American Public Health Association (APHA), the figure ostensibly reflecting the percentage of time the representative voted the organization's preferred position.
At an April 2017 town hall meeting in San Francisco, Feinstein said, "[i]f single-payer health care is going to mean the complete takeover by the government of all health care, I am not there." During a news conference at the University of California, San Diego in July 2017, she estimated that Democratic opposition would prove sufficient to defeat Republican attempts to repeal the ACA. Feinstein wrote in an August 2017 op-ed that Trump could secure health care reform if he compromised with Democrats: "We now know that such a closed process on a major issue like health care doesn't work. The only path forward is a transparent process that allows every senator to bring their ideas to the table."
Capital punishment
When Feinstein first ran for statewide office in 1990, she favored capital punishment. In 2004, she called for the death penalty in the case of San Francisco police officer Isaac Espinoza, who was killed while on duty. By 2018, she opposed capital punishment.
Energy and environment
Feinstein achieved a score of 100% from the League of Conservation Voters in 2017. Her lifetime average score is 90%.
Feinstein co-sponsored (with Oklahoma Republican Tom Coburn) an amendment through the Senate to the Economic Development Revitalization Act of 2011 that eliminated the Volumetric Ethanol Excise Tax Credit. The Senate passed the amendment on June 16, 2011. Introduced in 2004, the subsidy provided a 45-cent-per-gallon credit on pure ethanol, and a 54-cent-per-gallon tariff on imported ethanol. These subsidies had resulted in an annual expenditure of $6 billion.
In February 2019, when youth associated with the Sunrise Movement confronted Feinstein about why she does not support the Green New Deal, she told them "there’s no way to pay for it" and that it could not pass a Republican-controlled Senate. In a tweet following the confrontation, Feinstein said that she remains committed "to enact real, meaningful climate change legislation."
Supreme Court nominations
In September 2005, Feinstein was one of five Democratic senators on the Senate Judiciary Committee to vote against Supreme Court nominee John Roberts, saying that Roberts had "failed to state his positions on such social controversies as abortion and the right to die".
Feinstein stated that she would vote against Supreme Court nominee Samuel Alito in January 2006, though she expressed disapproval of a filibuster: "When it comes to filibustering a Supreme Court appointment, you really have to have something out there, whether it's gross moral turpitude or something that comes to the surface. This is a man I might disagree with, [but] that doesn't mean he shouldn't be on the court."
On July 12, 2009, Feinstein stated her belief that the Senate would confirm Supreme Court nominee Sonia Sotomayor, praising her for her experience and for overcoming "adversity and disadvantage".
After President Obama nominated Merrick Garland to the Supreme Court in March 2016, Feinstein met with Garland on April 6 and later called on Republicans to do "this institution the credit of sitting down and meeting with him".
In February 2017, Feinstein requested that Supreme Court nominee Neil Gorsuch provide information on cases in which he had assisted with decision-making regarding either litigation or craft strategy. In mid-March, she sent Gorsuch a letter stating her request had not been met. Feinstein formally announced her opposition to his nomination on April 3, citing Gorsuch's "record at the Department of Justice, his tenure on the bench, his appearance before the Senate and his written questions for the record".
Following the nomination of Brett Kavanaugh to the Supreme Court of the United States, Feinstein received a July 30, 2018, letter from Christine Blasey Ford in which Ford accused Kavanaugh of having sexually assaulted her in the 1980s. Ford requested that her allegation be kept confidential. Feinstein did not refer the allegation to the FBI until September 14, 2018, after the Senate Judiciary Committee had completed its hearings on Kavanaugh's nomination and "after leaks to the media about [the Ford allegation] had reached a 'fever pitch'". Feinstein faced "sharp scrutiny" for her decision to keep quiet about the Ford allegation for several weeks; she responded that she kept the letter and Ford's identity confidential because Ford had requested it. After an additional hearing and a supplemental FBI investigation, Kavanaugh was confirmed to the Supreme Court on October 6, 2018.
Feinstein announced she would step down from her position on the Judiciary Committee after pressure from progressives due to her performance at the Supreme Court nomination hearings of Justice Amy Coney Barrett in October 2020. Articles in The New Yorker and The New York Times cited unnamed Democratic senators and aides expressing concern over her advancing age and ability to lead the committee.
Weapons sales
In September 2016, Feinstein backed the Obama administration's plan to sell more than $1.15 billion worth of weapons to Saudi Arabia.
Mass surveillance; citizens' privacy
Feinstein co-sponsored PIPA on May 12, 2011. She met with representatives of technology companies, including Google and Facebook, in January 2012. A Feinstein spokesperson said she "is doing all she can to ensure that the bill is balanced and protects the intellectual property concerns of the content community without unfairly burdening legitimate businesses such as Internet search engines".
Following her 2012 vote to extend the Patriot Act and the FISA provisions, and after the 2013 mass surveillance disclosures involving the National Security Agency (NSA), Feinstein promoted and supported measures to continue the information collection programs. Feinstein and Saxby Chambliss also defended the NSA's request to Verizon for all the metadata about phone calls made within the U.S. and from the U.S. to other countries. They said the information gathered by intelligence on the phone communications is used to connect phone lines to terrorists and that it did not contain the content of the phone calls or messages. Foreign Policy wrote that she had a "reputation as a staunch defender of NSA practices and [of] the White House's refusal to stand by collection activities targeting foreign leaders".
In October 2013, Feinstein criticized the NSA for monitoring telephone calls of foreign leaders friendly to the U.S. In November 2013, she promoted the FISA Improvements Act bill, which included a "backdoor search provision" that allows intelligence agencies to continue certain warrantless searches as long as they are logged and "available for review" to various agencies.
In June 2013, Feinstein called Edward Snowden a "traitor" after his leaks went public. In October 2013, she said she stood by that.
While praising the NSA, Feinstein had accused the CIA of snooping and removing files through Congress members' computers, saying, "[t]he CIA did not ask the committee or its staff if the committee had access to the internal review or how we obtained it. Instead, the CIA just went and searched the committee's computer." She claimed the "CIA's search may well have violated the separation of powers principles embodied in the United States Constitution".
After the 2016 FBI–Apple encryption dispute, Feinstein and Richard Burr sponsored a bill that would be likely to criminalize all forms of strong encryption in electronic communication between citizens. The bill would require technology companies to design their encryption so that they can provide law enforcement with user data in an "intelligible format" when required to do so by court order.
In 2020, Feinstein cosponsored the EARN IT Act, which seeks to create a 19-member committee to decide a list of best practices websites must follow to be protected by Section 230 of the Communications Decency Act. Critics argued that the bill would effectively outlaw end-to-end encryption and deprive users of secure, private communications tools.
Assault weapons ban
Feinstein introduced the Federal Assault Weapons Ban, which became law in 1994 and expired in 2004. In January 2013, about a month after the Sandy Hook Elementary School shooting, she and Representative Carolyn McCarthy proposed a bill that would "ban the sale, transfer, manufacturing or importation of 150 specific firearms including semiautomatic rifles or pistols that can be used with a detachable or fixed ammunition magazines that hold more than 10 rounds and have specific military-style features, including pistol grips, grenade launchers or rocket launchers". The bill would have exempted 900 models of guns used for sport and hunting. Feinstein said of the bill, "The common thread in each of these shootings is the gunman used a semi-automatic assault weapon or large-capacity ammunition magazines. Military assault weapons only have one purpose, and in my opinion, it's for the military." The bill failed on a Senate vote of 60 to 40.
Marijuana legalization
Feinstein has opposed a number of reforms to cannabis laws at the state and federal level. In 2016 she opposed Proposition 64, the Adult Use of Marijuana Act, to legalize recreational cannabis in California. In 1996 she opposed Proposition 215 to legalize the medical use of cannabis in California. In 2015 she was the only Democrat at a Senate hearing to vote against the Rohrabacher–Farr amendment, legislation that limits the enforcement of federal law in states that have legalized medical cannabis. Feinstein cited her belief that cannabis is a gateway drug in voting against the amendment.
In 2018, Feinstein softened her views on marijuana and cosponsored the STATES Act, legislation that would protect states from federal interference regarding both medical and recreational use. She also supported legislation in 2015 to allow medical cannabis to be recommended to veterans in states where its use is legal.
Immigration
In September 2017, after Attorney General Jeff Sessions announced the rescinding of the Deferred Action for Childhood Arrivals program, Feinstein admitted the legality of the program was questionable while citing this as a reason for why a law should be passed. In her opening remarks at a January 2018 Senate Judiciary Committee hearing, she said she was concerned the Trump administration's decision to terminate temporary protected status might be racially motivated, based on comments Trump made denigrating African countries, Haiti, and El Salvador.
Iran
Feinstein announced her support for the Iran nuclear deal framework in July 2015, tweeting that the deal would usher in "unprecedented & intrusive inspections to verify cooperation" on the part of Iran.
On June 7, 2017, Feinstein and Senator Bernie Sanders issued dual statements urging the Senate to forgo a vote for sanctions on Iran in response to the Tehran attacks that occurred earlier in the day.
In July 2017, Feinstein voted for the Countering America's Adversaries Through Sanctions Act that grouped together sanctions against Iran, Russia and North Korea.
Israel
In September 2016, in advance of UN Security Council resolution 2334 condemning Israeli settlements in the occupied Palestinian territories, Feinstein signed an AIPAC-sponsored letter urging Obama to veto "one-sided" resolutions against Israel.
Feinstein opposed Trump's decision to recognize Jerusalem as Israel's capital, saying, "Recognizing Jerusalem as Israel's capital – or relocating our embassy to Jerusalem – will spark violence and embolden extremists on both sides of the debate."
North Korea
During a July 2017 appearance on Face the Nation after North Korea conducted a second test of an intercontinental ballistic missile, Feinstein said the country had proven itself a danger to the U.S. She also expressed her disappointment with China's lack of response.
Responding to reports that North Korea had achieved successful miniaturization of nuclear warheads, Feinstein issued an August 8, 2017, statement insisting isolation of North Korea had proven ineffective and Trump's rhetoric was not helping resolve potential conflict. She also called for the U.S. to "quickly engage North Korea in a high-level dialogue without any preconditions".
In September 2017, after Trump's first speech to the United Nations General Assembly in which he threatened North Korea, Feinstein released a statement disagreeing with his remarks: "Trump's bombastic threat to destroy North Korea and his refusal to present any positive pathways forward on the many global challenges we face are severe disappointments."
China
Feinstein supports a conciliatory approach between China and Taiwan and fostered increased dialogue between high-level Chinese representatives and U.S. senators during her first term as senator. When asked about her relation with Beijing, Feinstein said, "I sometimes say that in my last life maybe I was Chinese."
Feinstein has criticized Beijing's missile tests near Taiwan and has called for dismantlement of missiles pointed at the island. She promoted stronger business ties between China and Taiwan over confrontation, and suggested that the U.S. patiently "use two-way trade across Taiwan Strait as a platform for more political dialogue and closer ties".
She believes that deeper cross-strait economic integration "will one day lead to political integration and will ultimately provide the solution" to the Taiwan issue.
On July 27, 2018, reports surfaced that a Chinese staff member who worked as Feinstein's personal driver, gofer and liaison to the Asian-American community for 20 years, was caught reporting to China's Ministry of State Security. According to the reports, the FBI contacted Feinstein five years earlier warning her about the employee. The employee was later interviewed by authorities and forced to retire by Feinstein. No criminal charges were filed against them.
Torture
Feinstein has served on the Senate's Select Committee on Intelligence since before 9/11 and her time on the committee has coincided with the Senate Report on Pre-war Intelligence on Iraq and the debates on the torture/"enhanced interrogation" of terrorists and alleged terrorists. On the Senate floor on December 9, 2014, the day parts of the Senate Intelligence Committee report on CIA torture were released to the public, Feinstein called the government's detention and interrogation program a "stain on our values and on our history".
Fusion GPS interview transcript release
On January 9, 2018, Feinstein caused a stir when, as ranking member of the Senate Judiciary Committee, she released a transcript of its August 2017 interview with Fusion GPS co-founder Glenn Simpson about the dossier regarding connections between Trump's campaign and the Russian government. She did this unilaterally after the committee's chairman, Chuck Grassley, refused to release the transcript.
Presidential politics
During the 1980 presidential election, Feinstein served on President Jimmy Carter's steering committee in California and as a Carter delegate to the Democratic National Convention. She was selected to serve as one of the four chairs of the 1980 Democratic National Convention.
Feinstein endorsed former Vice President Walter Mondale during the 1984 presidential election. She and Democratic National Committee chairman Charles Manatt signed a contract in 1983, making San Francisco the host of the 1984 Democratic National Convention.
As a superdelegate in the 2008 Democratic presidential primaries, Feinstein said she would support Clinton for the nomination. But after Barack Obama became the presumptive nominee, she fully backed his candidacy. Days after Obama amassed enough delegates to win the nomination, Feinstein lent her Washington, D.C., home to Clinton and Obama for a private one-on-one meeting. She did not attend the 2008 Democratic National Convention in Denver because she had fallen and broken her ankle earlier in the month.
Feinstein chaired the United States Congress Joint Committee on Inaugural Ceremonies and acted as mistress of ceremonies, introducing each participant at the 2009 presidential inauguration. She is the first woman to have presided over a U.S. presidential inauguration.
Ahead of the 2016 presidential election, Feinstein was one of 16 female Democratic senators to sign an October 20, 2013, letter endorsing Hillary Clinton for president.
As the 2020 presidential election approached, Feinstein indicated her support for former Vice President Joe Biden. This came as a surprise to many pundits, due to the potential candidacy of fellow California senator Kamala Harris, of whom Feinstein said "I'm a big fan of Sen. Harris, and I work with her. But she's brand-new here, so it takes a little bit of time to get to know somebody."
Awards and honors
Feinstein was awarded the honorary degree of Doctor of Laws from Golden Gate University in San Francisco on June 4, 1977. She was awarded the Legion of Honour by France in 1984. Feinstein received the Woodrow Wilson Award for public service from the Woodrow Wilson Center of the Smithsonian Institution on November 3, 2001, in Los Angeles. In 2002, Feinstein won the American Medical Association's Nathan Davis Award for "the Betterment of the Public Health". She was named one of The Forward 50 in 2015.
Offices held
Personal life
Feinstein has been married three times. She married Jack Berman (died 2002), who was then working in the San Francisco District Attorney's Office, in 1956. She and Berman divorced three years later. Their daughter, Katherine Feinstein Mariano (born 1957), was the presiding judge of the San Francisco Superior Court for 12 years, through 2012. In 1962, shortly after beginning her career in politics, Feinstein married her second husband, neurosurgeon Bertram Feinstein, who died of colon cancer in 1978. Feinstein was then married to investment banker Richard C. Blum from 1980 until his death from cancer in 2022.
In 2003, Feinstein was ranked the fifth-wealthiest senator, with an estimated net worth of $26 million. Her net worth increased to between $43 and $99 million by 2005. Her 347-page financial-disclosure statement, characterized by the San Francisco Chronicle as "nearly the size of a phone book", claims to draw clear lines between her assets and her husband's, with many of her assets in blind trusts.
Feinstein had an artificial cardiac pacemaker inserted at George Washington University Hospital in January 2017. In the fall of 2020, following Ruth Bader Ginsburg's death and the confirmation hearings for Supreme Court Justice Amy Coney Barrett, there was concern about Feinstein's ability to continue performing her job. She said there was no cause for concern and that she had no plans to leave the Senate.
In mass media
The 2019 film The Report, about the Senate Intelligence Committee investigation into the CIA's use of torture, extensively features Feinstein, portrayed by Annette Bening.
Electoral history
See also
Rosalind Wiener Wyman, co-chair of Feinstein political campaigns.
Women in the United States Senate
2020 congressional insider trading scandal
References
Additional sources
Roberts, Jerry (1994). Dianne Feinstein: Never Let Them See You Cry, Harpercollins.
Talbot, David (2012). Season of the Witch: Enchantment, Terror and Deliverance in the City of Love, New York: Simon and Schuster. 480 p. .
Weiss, Mike (2010). Double Play: The Hidden Passions Behind the Double Assassination of George Moscone and Harvey Milk, Vince Emery Productions.
External links
Senator Dianne Feinstein official U.S. Senate website
Campaign website
Membership at the Council on Foreign Relations
Statements
Op-ed archives at Project Syndicate
Dianne Feinstein's Opening Remarks at the 2009 Presidential Inauguration at AmericanRhetoric.com, video, audio and text
1933 births
20th-century American politicians
20th-century American women politicians
21st-century American politicians
21st-century American women politicians
Activists from California
Schools of the Sacred Heart alumni
American gun control activists
American people of German-Jewish descent
American people of Polish-Jewish descent
American people of Russian-Jewish descent
American women activists
California Democrats
Democratic Party United States senators from California
Female United States senators
Women in California politics
Jewish activists
Jewish mayors of places in the United States
Jewish United States senators
Jewish women politicians
Living people
Mayors of San Francisco
Members of the Council on Foreign Relations
San Francisco Board of Supervisors members
Women city councillors in California
Stanford University alumni
United States senators from California
Women mayors of places in California
Jewish American people in California politics
21st-century American Jews |
156259 | https://en.wikipedia.org/wiki/Price%20discrimination | Price discrimination | Price discrimination is a microeconomic pricing strategy where identical or largely similar goods or services are sold at different prices by the same provider in different markets. Price discrimination is distinguished from product differentiation by the more substantial difference in production cost for the differently priced products involved in the latter strategy. Price differentiation essentially relies on the variation in the customers' willingness to pay and in the elasticity of their demand. For price discrimination to succeed, a firm must have market power, such as a dominant market share, product uniqueness, sole pricing power, etc. All prices under price discrimination are higher than the equilibrium price in a perfectly-competitive market. However, some prices under price discrimination may be lower than the price charged by a single-price monopolist.
The term "differential pricing" is also used to describe the practice of charging different prices to different buyers for the same quality and quantity of a product, but it can also refer to a combination of price differentiation and product differentiation. Other terms used to refer to price discrimination include "equity pricing", "preferential pricing", "dual pricing" and "tiered pricing". Within the broader domain of price differentiation, a commonly accepted classification dating to the 1920s is:
"Personalized pricing" (or first-degree price differentiation) — selling to each customer at a different price; this is also called one-to-one marketing. The optimal incarnation of this is called "perfect price discrimination" and maximizes the price that each customer is willing to pay.
"Product versioning" or simply "versioning" (or second-degree price differentiation) — offering a product line by creating slightly different products for the purpose of price differentiation, i.e. a vertical product line. Another name given to versioning is "menu pricing".
"Group pricing" (or third-degree price differentiation) — dividing the market into segments and charging a different price to each segment (but the same price to each member of that segment). This is essentially a heuristic approximation that simplifies the problem in face of the difficulties with personalized pricing. Typical examples include student discounts and seniors' discounts.
Theoretical basis
In a theoretical market with perfect information, perfect substitutes, and no transaction costs or prohibition on secondary exchange (or re-selling) to prevent arbitrage, price discrimination can only be a feature of monopolistic and oligopolistic markets, where market power can be exercised. Otherwise, the moment the seller tries to sell the same good at different prices, the buyer at the lower price can arbitrage by selling to the consumer buying at the higher price but with a tiny discount. However, product heterogeneity, market frictions or high fixed costs (which make marginal-cost pricing unsustainable in the long run) can allow for some degree of differential pricing to different consumers, even in fully competitive retail or industrial markets.
The effects of price discrimination on social efficiency are unclear. Output can be expanded when price discrimination is very efficient. Even if output remains constant, price discrimination can reduce efficiency by misallocating output among consumers.
Price discrimination requires market segmentation and some means to discourage discount customers from becoming resellers and, by extension, competitors. This usually entails using one or more means of preventing any resale: keeping the different price groups separate, making price comparisons difficult, or restricting pricing information. The boundary set up by the marketer to keep segments separate is referred to as a rate fence. Price discrimination is thus very common in services where resale is not possible; an example is student discounts at museums: In theory, students, for their condition as students, may get lower prices than the rest of the population for a certain product or service, and later will not become resellers, since what they received, may only be used or consumed by them. Another example of price discrimination is intellectual property, enforced by law and by technology. In the market for DVDs, laws require DVD players to be designed and produced with hardware or software that prevents inexpensive copying or playing of content purchased legally elsewhere in the world at a lower price. In the US the Digital Millennium Copyright Act has provisions to outlaw circumventing of such devices to protect the enhanced monopoly profits that copyright holders can obtain from price discrimination against higher price market segments.
Price discrimination can also be seen where the requirement that goods be identical is relaxed. For example, so-called "premium products" (including relatively simple products, such as cappuccino compared to regular coffee with cream) have a price differential that is not explained by the cost of production. Some economists have argued that this is a form of price discrimination exercised by providing a means for consumers to reveal their willingness to pay.
Price discrimination differentiates customers by their willingness to pay, in order to capture as much consumer surplus as possible. By understanding the elasticity of customers' demand, a business with market power can identify their willingness to pay. Different people pay different prices for the same product when price discrimination exists in the market. When a company recognizes that a consumer has a lower willingness to pay, it can use a price discrimination strategy to maximize its profit.
First degree
Exercising first degree (or perfect or primary) price discrimination requires the monopoly seller of a good or service to know the absolute maximum price (or reservation price) that every consumer is willing to pay. By knowing each reservation price, the seller can sell the good or service to each consumer at the maximum price they are willing to pay, transforming the consumer surplus into revenue; this makes it the most profitable form of price discrimination, with profit equal to the sum of consumer surplus and producer surplus. The marginal consumer is the one whose reservation price equals the marginal cost of the product, so the social surplus consists entirely of producer surplus, which benefits the firm. The seller produces more output than it would under single-price monopoly pricing, which means that there is no deadweight loss. Under first-degree price discrimination, the firm produces the quantity at which marginal benefit equals marginal cost and fully maximizes producer surplus. Examples of this might be observed in markets where consumers bid in tenders, though in this case the practice of collusive tendering could reduce market efficiency.
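As a hedged illustration in generic notation (these symbols are not taken from the article), the profit under perfect price discrimination equals the whole area between the inverse demand curve P(q) and marginal cost:
\[
\pi_{\text{perfect}} \;=\; \int_{0}^{Q^{*}} \bigl( P(q) - MC(q) \bigr)\, dq ,
\qquad \text{where } P(Q^{*}) = MC(Q^{*}).
\]
Every consumer whose reservation price is at least marginal cost is served, so no mutually beneficial trade is left unexploited, which is why no deadweight loss arises.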
Second degree
In second-degree price discrimination, price varies according to quantity demanded. Larger quantities are available at a lower unit price. This is particularly widespread in sales to industrial customers, where bulk buyers enjoy discounts.
In addition, under second-degree price discrimination sellers are not able to differentiate between different types of consumers. Instead, suppliers provide incentives for consumers to sort themselves according to their preferences through quantity "discounts", or non-linear pricing. This allows the supplier to set different prices for the different groups and capture a larger portion of the total market surplus.
In reality, differential pricing may apply to differences in product quality as well as quantity. For example, airlines often offer multiple classes of seats, such as first class and economy class, with first-class passengers receiving wine, beer and spirits with their ticket while economy passengers are offered only juice, pop, and water. This differentiates consumers based on preference and therefore allows the airline to capture more consumer surplus.
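A minimal sketch of the quantity-discount mechanism described above (the tier boundaries and unit prices are hypothetical, chosen only for illustration):

# Illustrative second-degree price discrimination: a quantity-discount
# (block pricing) schedule. The tiers and unit prices are invented.
TIERS = [
    (10, 5.00),           # first 10 units at 5.00 each
    (40, 4.00),           # next 40 units at 4.00 each
    (float("inf"), 3.00), # any further units at 3.00 each
]

def total_price(quantity: int) -> float:
    """Total bill under the block-pricing schedule above."""
    remaining, bill = quantity, 0.0
    for block_size, unit_price in TIERS:
        units = min(remaining, block_size)
        bill += units * unit_price
        remaining -= units
        if remaining <= 0:
            break
    return bill

if __name__ == "__main__":
    for q in (5, 30, 100):
        print(q, "units cost", total_price(q))  # average unit price falls with quantity

Because the average unit price falls with quantity, high-volume buyers self-select into the discounted blocks without the seller having to identify them directly.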
Third degree
Third-degree price discrimination means charging a different price to different consumers in a given number of groups and being able to distinguish between the groups to charge a separate monopoly price. For example, rail and tube (subway) travelers can be subdivided into commuters and casual travelers, and cinema goers can be subdivided into adults and children, with some theatres also offering discounts to full-time students and seniors. Splitting the market into peak and off-peak use of service is very common and occurs with gas, electricity, and telephone supply, as well as gym membership and parking charges. Some parking lots charge less for "early bird" customers who arrive at the parking lot before a certain time.
(Some of these examples are not pure "price discrimination", in that the differential price is related to production costs: the marginal cost of providing electricity or car parking spaces is very low outside peak hours. Incentivizing consumers to switch to off-peak usage is done as much to minimize costs as to maximize revenue.)
There are limits for price discrimination as well. When price discrimination exists in a market, the consumer surplus and producer surplus will be affected by its existence. In order to offer different prices for different groups of people in the aggregate market, the business has to use additional information to identify its consumers. Consequently, they will be involved in third-degree price discrimination. With third-degree price discrimination, the firms try to generate sales by identifying different market segments, such as domestic and industrial users, with different price elasticities. Markets must be kept separate by time, physical distance, and nature of use. For example, Microsoft Office Schools edition is available for a lower price to educational institutions than to other users. The markets cannot overlap so that consumers who purchase at a lower price in the elastic sub-market could resell at a higher price in the inelastic sub-market. The company must also have monopoly power to make price discrimination more effective.
Two part tariff
The two-part tariff is another form of price discrimination in which the producer charges an initial fee and then a secondary fee for the use of the product. This pricing strategy yields a result similar to second-degree price discrimination. An example of two-part tariff pricing is the market for shaving razors: the customer pays an initial cost for the razor and then pays again for the replacement blades. The strategy works because it effectively shifts demand for the blades to the right: having already paid for the blade holder, the customer continues to buy the blades, which are cheaper than buying disposable razors.
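For identical consumers, the standard textbook treatment (generic notation assumed here, not taken from the article) is that the per-unit price is set at marginal cost and the entry fee captures the remaining surplus:
\[
\pi \;=\; A + (p - c)\,q(p), \qquad p^{*} = c, \qquad A^{*} = CS(c),
\]
where A is the up-front fee (the razor handle in the example above), p the per-unit price (the blades), c the marginal cost, and CS(c) the consumer surplus when units are priced at cost.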
Combination
These types are not mutually exclusive. Thus a company may vary pricing by location, but then offer bulk discounts as well. Airlines use several different types of price discrimination, including:
Bulk discounts to wholesalers, consolidators, and tour operators
Incentive discounts for higher sales volumes to travel agents and corporate buyers
Seasonal discounts, incentive discounts, and even general prices that vary by location. The price of a flight from say, Singapore to Beijing can vary widely if one buys the ticket in Singapore compared to Beijing (or New York or Tokyo or elsewhere).
Discounted tickets requiring advance purchase and/or Saturday stays. Both restrictions have the effect of excluding business travelers, who typically travel during the workweek and arrange trips on shorter notice.
First degree price discrimination based on customer. Hotel or car rental firms may quote higher prices to their loyalty program's top tier members than to the general public.
Modern taxonomy
The first/second/third degree taxonomy of price discrimination is due to Pigou (Economics of Welfare, 3rd edition, 1929). However, these categories are not mutually exclusive or exhaustive. Ivan Png (Managerial Economics, 1998: 301-315) suggests an alternative taxonomy:
Complete discrimination where the seller prices each unit at a different price, so that each user purchases up to the point where the user's marginal benefit equals the marginal cost of the item;
Direct segmentation where the seller can condition price on some attribute (like age or gender) that directly segments the buyers;
Indirect segmentation where the seller relies on some proxy (e.g., package size, usage quantity, coupon) to structure a choice that indirectly segments the buyers;
Uniform pricing where the seller sets the same price for each unit of the product.
The hierarchy—complete/direct/indirect/uniform pricing—is in decreasing order of profitability and information requirement. Complete price discrimination is the most profitable and requires the seller to have the most information about buyers. Direct segmentation is next in profitability and information requirement, followed by indirect segmentation. Finally, uniform pricing is the least profitable and requires the seller to have the least information about buyers.
Explanation
The purpose of price discrimination is generally to capture the market's consumer surplus. This surplus arises because, in a market with a single clearing price, some customers (the very low price elasticity segment) would have been prepared to pay more than the single market price. Price discrimination transfers some of this surplus from the consumer to the producer/marketer. It is a way of increasing monopoly profit. In a perfectly-competitive market, manufacturers make normal profit, but not monopoly profit, so they cannot engage in price discrimination.
It can be argued that strictly, a consumer surplus need not exist, for example where fixed costs or economies of scale mean that the marginal cost of adding more consumers is less than the marginal profit from selling more product. This means that charging some consumers less than an even share of costs can be beneficial. An example is a high-speed internet connection shared by two consumers in a single building; if one is willing to pay less than half the cost of connecting the building, and the other willing to make up the rest but not to pay the entire cost, then price discrimination can allow the purchase to take place. However, this will cost the consumers as much or more than if they pooled their money to pay a non-discriminating price. If the consumer is considered to be the building, then a consumer surplus goes to the inhabitants.
It can be proved mathematically that a firm facing a downward sloping demand curve that is convex to the origin will always obtain higher revenues under price discrimination than under a single price strategy. This can also be shown geometrically.
In the top diagram, a single price is available to all customers. The revenue is represented by the rectangle formed by that price and the quantity sold, and the consumer surplus is the area above this rectangle but below the demand curve.
With price discrimination (the bottom diagram), the demand curve is divided into two segments. A higher price is charged to the low-elasticity segment and a lower price to the high-elasticity segment, so total revenue is the sum of the two rectangles formed by each price and the quantity sold at that price. This sum will always be greater than the revenue without discrimination, assuming the demand curve resembles a rectangular hyperbola with unitary elasticity. The more prices that are introduced, the greater the sum of the revenue areas, and the more of the consumer surplus is captured by the producer.
The above requires both first and second degree price discrimination: the right segment corresponds partly to different people than the left segment, and partly to the same people, who are willing to buy more if the product is cheaper.
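The revenue claim can be checked numerically. The sketch below assumes a hypothetical linear demand curve P = 10 − Q (an illustrative assumption, not a curve taken from the article) and compares the best single price with the best pair of prices, treating buyers with reservation prices between the two prices as purchasing at the lower one:

# Compare maximum revenue from one uniform price versus two prices on a
# hypothetical linear demand curve P = 10 - Q (quantity demanded at price
# p is q = 10 - p). Purely illustrative.
def demand(p: float) -> float:
    return max(0.0, 10.0 - p)

def best_single_price(prices) -> float:
    return max(p * demand(p) for p in prices)

def best_two_prices(prices) -> float:
    best = 0.0
    for high in prices:
        for low in prices:
            if low >= high:
                continue
            # Units bought at the high price, plus the extra units sold
            # once the price drops to the low price.
            revenue = high * demand(high) + low * (demand(low) - demand(high))
            best = max(best, revenue)
    return best

if __name__ == "__main__":
    grid = [i / 10 for i in range(1, 100)]
    print("one price :", round(best_single_price(grid), 2))  # ~25.0 at p = 5
    print("two prices:", round(best_two_prices(grid), 2))    # ~33.3 near p = 6.7 and 3.3

The two-price scheme raises revenue from about 25 to about 33.3, consistent with the geometric argument above.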
It is very useful for the price discriminator to determine the optimal prices in each market segment. This is done in the next diagram, in which each segment is treated as a separate market with its own demand curve. As usual, the profit-maximizing total output (Qt) is determined by the intersection of the marginal cost curve (MC) with the marginal revenue curve for the total market (MRt).
The firm decides what amount of the total output to sell in each market by looking at the intersection of marginal cost with marginal revenue (profit maximization). This output is then divided between the two markets at the equilibrium marginal revenue level, giving optimal outputs Q1 and Q2. From the demand curve in each market, the profit-maximizing prices P1 and P2 can be determined.
The marginal revenue in both markets at the optimal output levels must be equal, otherwise the firm could profit from transferring output over to whichever market is offering higher marginal revenue.
Given that Market 1 has a price elasticity of demand of E1 and Market 2 of E2, the optimal ratio of the price in Market 1 to the price in Market 2 follows from equating marginal revenue in the two markets, as shown below.
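In standard notation (assumed here, since the article's original symbols are not reproduced, with E1 and E2 the own-price elasticities, which are negative), equating marginal revenues across the two markets gives
\[
MR_1 = MR_2 \;\Longrightarrow\;
P_1\!\left(1 + \frac{1}{E_1}\right) = P_2\!\left(1 + \frac{1}{E_2}\right)
\;\Longrightarrow\;
\frac{P_1}{P_2} = \frac{1 + 1/E_2}{\,1 + 1/E_1\,},
\]
so the market with the less elastic demand is charged the higher price.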
The price in a perfectly-competitive market will always be lower than any price under price discrimination (including in special cases like the internet connection example above, assuming that the perfectly competitive market allows consumers to pool their resources). In a market with perfect competition, no price discrimination is possible, and the average total cost (ATC) curve will be identical to the marginal cost curve (MC). The price will be the intersection of this ATC/MC curve and the demand line (Dt). The consumer thus buys the product at the cheapest price at which any manufacturer can produce any quantity.
Price discrimination is a sign that the market is imperfect, the seller has some monopoly power, and that prices and seller profits are higher than they would be in a perfectly competitive market.
Price discrimination in oligopoly
An oligopoly forms when a small group of businesses dominates an industry. When the dominant firms in an oligopoly compete on price, a motive for intertemporal price discrimination appears in the market. Price discrimination in an oligopoly can also be facilitated by inventory controls.
Advantages of price discrimination
Firms that hold some monopolistic or oligopolistic power are able to increase their revenue. In theory, they might also use the additional money for investment that benefits consumers, such as research and development, though this is more common in a competitive market where innovation brings temporary market power.
Lower prices (for some) than in a one-price monopoly. Even the lowest "discounted" prices will be higher than the price in a competitive market, which is equal to the cost of production. For example, trains tend to be near-monopolies (see natural monopoly), so old people may get lower train fares than they would if everyone paid the same price, because the train company knows that old people are more likely to be poor. Also, customers willing to spend time researching "special offers" get lower prices; their effort acts as an honest signal of their price sensitivity, reducing their consumer surplus by the value of the time spent hunting.
True price discrimination occurs when exactly the same product is sold at multiple prices. It benefits only the seller, compared to a competitive market. It benefits some buyers at a (greater) cost to others, causing a net loss to consumers, compared to a single-price monopoly. For congestion pricing, which can benefit the buyer and is not price discrimination, see counterexamples below.
Disadvantages of price discrimination
Higher prices. Under price discrimination, all consumers will pay higher prices than they would in a competitive market, and some consumers will end up paying higher prices than they would in a single-price monopoly. These higher prices are likely to be allocatively inefficient because P > MC.
Decline in consumer surplus. Price discrimination enables a transfer of money from consumers to firms – increasing wealth inequality.
Potentially unfair. Those who pay higher prices may not be the poorest: adults paying full price could be unemployed, while senior citizens receiving discounts can be very well off.
Administration costs. There will be administration costs in separating the markets, which could lead to higher prices.
Predatory pricing. Profits from price discrimination could be used to finance predatory pricing. Predatory pricing can be used to maintain the monopolistic power needed to price-discriminate.
Examples
Retail price discrimination
Manufacturers may sell their products to similarly situated retailers at different prices based solely on the volume of products purchased. Sometimes, firms examine consumers' purchase histories, which reveal a customer's unobserved willingness to pay. Each customer has a purchasing score which indicates his or her preferences; consequently, the firm is able to set the price for the individual customer at the point that minimizes the consumer surplus. Often, consumers are not aware of ways to manipulate that score. A consumer who wanted to do so could reduce demand in order to lower the average equilibrium price, which would undermine the firm's price-discrimination strategy.
Travel industry
Airlines and other travel companies use differentiated pricing regularly, as they sell travel products and services simultaneously to different market segments. This is often done by assigning capacity to various booking classes, which sell for different prices and which may be linked to fare restrictions. The restrictions or "fences" help ensure that market segments buy in the booking class range that has been established for them. For example, schedule-sensitive business passengers who are willing to pay $300 for a seat from city A to city B cannot purchase a $150 ticket because the $150 booking class contains a requirement for a Saturday-night stay, or a 15-day advance purchase, or another fare rule that discourages, minimizes, or effectively prevents a sale to business passengers.
Notice, however, that in this example "the seat" is not always the same product. That is, the business person who purchases the $300 ticket may be willing to do so in return for a seat on a high-demand morning flight, for full refundability if the ticket is not used, and for the ability to upgrade to first class for a nominal fee if space is available. On the same flight are price-sensitive passengers who are not willing to pay $300, but who are willing to fly on a lower-demand flight (e.g., one leaving an hour earlier), or via a connection city (not a non-stop flight), and who are willing to forgo refundability.
On the other hand, an airline may also apply differential pricing to "the same seat" over time, e.g. by discounting the price for an early or late booking (without changing any other fare condition). This could present an arbitrage opportunity in the absence of any restriction on reselling. However, passenger name changes are typically prevented or financially penalized by contract.
Since airlines often fly multi-leg flights, and since no-show rates vary by segment, competition for the seat has to take in the spatial dynamics of the product. Someone trying to fly A-B is competing with people trying to fly A-C through city B on the same aircraft. This is one reason airlines use yield management technology to determine how many seats to allot for A-B passengers, B-C passengers, and A-B-C passengers, at their varying fares and with varying demands and no-show rates.
With the rise of the Internet and the growth of low fare airlines, airfare pricing transparency has become far more pronounced. Passengers discovered it is quite easy to compare fares across different flights or different airlines. This helped put pressure on airlines to lower fares. Meanwhile, in the recession following the September 11, 2001, attacks on the U.S., business travelers and corporate buyers made it clear to airlines that they were not going to be buying air travel at rates high enough to subsidize lower fares for non-business travelers. This prediction has come true, as vast numbers of business travelers are buying airfares only in economy class for business travel.
There are sometimes group discounts on rail tickets and passes. This may be in view of the alternative of going by car together.
Coupons
The use of coupons in retail is an attempt to distinguish customers by their reserve price. The assumption is that people who go through the trouble of collecting coupons have greater price sensitivity than those who do not. Thus, making coupons available enables, for instance, breakfast cereal makers to charge higher prices to price-insensitive customers, while still making some profit off customers who are more price-sensitive.
Premium pricing
For certain products, premium products are priced at a level (compared to "regular" or "economy" products) that is well beyond their marginal cost of production. For example, a coffee chain may price regular coffee at $1, but "premium" coffee at $2.50 (where the respective costs of production may be $0.90 and $1.25). Economists such as Tim Harford in the Undercover Economist have argued that this is a form of price discrimination: by providing a choice between a regular and premium product, consumers are being asked to reveal their degree of price sensitivity (or willingness to pay) for comparable products. Similar techniques are used in pricing business class airline tickets and premium alcoholic drinks, for example.
This effect can lead to (seemingly) perverse incentives for the producer. If, for example, potential business class customers will pay a large price differential only if economy class seats are uncomfortable while economy class customers are more sensitive to price than comfort, airlines may have substantial incentives to purposely make economy seating uncomfortable. In the example of coffee, a restaurant may gain more economic profit by making poor quality regular coffee—more profit is gained from up-selling to premium customers than is lost from customers who refuse to purchase inexpensive but poor quality coffee. In such cases, the net social utility should also account for the "lost" utility to consumers of the regular product, although determining the magnitude of this foregone utility may not be feasible.
Segmentation by age group, student status, ethnicity and citizenship
Many movie theaters, amusement parks, tourist attractions, and other places have different admission prices per market segment: typical groupings are Youth/Child, Student, Adult, Senior Citizen, Local and Foreigner. Each of these groups typically have a much different demand curve. Children, people living on student wages, and people living on retirement generally have much less disposable income. Foreigners may be perceived as being more wealthy than locals and therefore being capable of paying more for goods and services - sometimes this can be even 35 times as much. Market stall-holders and individual public transport providers may also insist on higher prices for their goods and services when dealing with foreigners (sometimes called the "White Man Tax"). Some goods - such as housing - may be offered at cheaper prices for certain ethnic groups.
Discounts for members of certain occupations
Some businesses may offer reduced prices to members of certain occupations, such as school teachers (see below), police and military personnel. In addition to increased sales to the target group, businesses benefit from the resulting positive publicity, which leads to increased sales to the general public.
Retail incentives
A variety of incentive techniques may be used to increase market share or revenues at the retail level. These include discount coupons, rebates, bulk and quantity pricing, seasonal discounts, and frequent buyer discounts.
Incentives for industrial buyers
Many methods exist to incentivize wholesale or industrial buyers. These may be quite targeted, as they are designed to generate specific activity, such as buying more frequently, buying more regularly, buying in bigger quantities, buying new products with established ones, and so on. They may also be designed to reduce the administrative and finance costs of processing each transaction. Thus, there are bulk discounts, special pricing for long-term commitments, non-peak discounts, discounts on high-demand goods to incentivize buying lower-demand goods, rebates, and many others. This can help the relations between the firms involved.
Gender-based examples
Gender-based price discrimination is the practice of offering identical or similar services and products to men and women at different prices when the cost of producing the products and services is the same. In the United States, gender-based price discrimination has been a source of debate. In 1992, the New York City Department of Consumer Affairs (“DCA”) conducted an investigation of “price bias against women in the marketplace”. The DCA's investigation concluded that women paid more than men at used car dealers, dry cleaners, and hair salons. The DCA's research on gender pricing in New York City brought national attention to gender-based price discrimination and the financial impact it has on women.
With consumer products, differential pricing is usually not based explicitly on the actual gender of the purchaser, but is achieved implicitly by the use of differential packaging, labelling, or colour schemes designed to appeal to male or female consumers. In many cases, where the product is marketed to make an attractive gift, the gender of the purchaser may be different from that of the end user.
In 1995, California Assembly's Office of Research studied the issue of gender-based price discrimination of services and estimated that women effectively paid an annual “gender tax” of approximately $1,351.00 for the same services as men. It was also estimated that women, over the course of their lives, spend thousands of dollars more than men to purchase similar products. For example, prior to the enactment of the Patient Protection and Affordable Care Act (“Affordable Care Act”), health insurance companies charged women higher premiums for individual health insurance policies than men. Under the Affordable Care Act, health insurance companies are now required to offer the same premium price to all applicants of the same age and geographical locale without regard to gender. However, there is no federal law banning gender-based price discrimination in the sale of products. Instead, several cities and states have passed legislation prohibiting gender-based price discrimination on products and services.
In Europe, motor insurance premiums have historically been higher for men than for women, a practice that the insurance industry attempts to justify on the basis of different levels of risk. The EU has banned this practice; however, there is evidence that it is being replaced by "proxy discrimination", that is, discrimination on the basis of factors that are strongly correlated with gender: for example, charging construction workers more than midwives.
International price discrimination
Pharmaceutical companies may charge customers living in wealthier countries a much higher price than for identical drugs in poorer nations, as is the case with the sale of antiretroviral drugs in Africa. Since the purchasing power of African consumers is much lower, sales would be extremely limited without price discrimination. The ability of pharmaceutical companies to maintain price differences between countries is often either reinforced or hindered by national drugs laws and regulations, or the lack thereof.
Even online sales for non material goods, which do not have to be shipped, may change according to the geographic location of the buyer.
Academic pricing
Companies will often offer discounted goods and software to students and faculty at school and university levels. These may be labeled as academic versions, but perform the same as the full price retail software. Academic versions of the most expensive software suites may be free or significantly cheaper than the retail price of standard versions. Some academic software may have differing licenses than retail versions, usually disallowing their use in activities for profit or expiring the license after a given number of months. This also has the characteristics of an "initial offer" - that is, the profits from an academic customer may come partly in the form of future non-academic sales due to vendor lock-in.
Sliding scale fees
Sliding scale fees are when different customers are charged different prices based on their income, which is used as a proxy for their willingness or ability to pay. For example, some nonprofit law firms charge on a sliding scale based on income and family size. Thus the clients paying a higher price at the top of the fee scale help subsidize the clients at the bottom of the scale. This differential pricing enables the nonprofit to serve a broader segment of the market than they could if they only set one price.
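A minimal sketch of how such a scale might be computed (the income brackets and fees below are invented placeholders, not an actual fee schedule):

def sliding_scale_fee(annual_income: float) -> float:
    """Return an hourly fee based on income bracket (illustrative values)."""
    brackets = [
        (20_000, 25.0),   # low-income clients pay a nominal fee
        (50_000, 75.0),
        (100_000, 150.0),
    ]
    for ceiling, fee in brackets:
        if annual_income <= ceiling:
            return fee
    return 250.0          # clients above the top bracket pay the full rate

if __name__ == "__main__":
    for income in (15_000, 60_000, 200_000):
        print(income, "->", sliding_scale_fee(income))

Clients at the top of the scale effectively subsidize those at the bottom, as described above.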
Weddings
Goods and services for weddings are sometimes priced at a higher rate than identical goods for normal customers.
Obstetric service
The welfare consequences of price discrimination have been assessed by testing the differences in mean prices paid by patients from three income groups: low, middle and high. The results suggest that two different forms of price discrimination for obstetric services occurred in the hospitals studied. First, there was price discrimination according to income, with poorer users benefiting from a higher discount rate than richer ones. Second, there was price discrimination according to social status, with three high-status occupational groups (doctors, senior government officials, and large businessmen) having the highest probability of receiving some level of discount.
Pharmaceutical industry
Price discrimination is common in the pharmaceutical industry. Drug-makers charge more for drugs in wealthier countries. For example, drug prices in the United States are some of the highest in the world. Europeans, on average, pay only 56% of what Americans pay for the same prescription drugs.
Textbooks
Price discrimination is also prevalent within the publishing industry, particularly for physical textbooks (as opposed to free digital alternatives such as Boundless). Textbook prices are much higher in the United States despite the fact that they are produced in the country. Copyright protection laws increase the price of textbooks. Also, textbooks are often mandatory in the United States, while schools in other countries treat them as study aids.
Two necessary conditions for price discrimination
There are two conditions that must be met if a price discrimination scheme is to work. First, the firm must be able to identify market segments by their price elasticity of demand, and second, the firm must be able to enforce the scheme. For example, airlines routinely engage in price discrimination by charging high prices for customers with relatively inelastic demand—business travelers—and discount prices for tourists, who have relatively elastic demand. The airlines enforce the scheme by imposing a no-resale policy on the tickets, preventing a tourist from buying a ticket at a discounted price and selling it to a business traveler (arbitrage). Airlines must also prevent business travelers from directly buying discount tickets; they accomplish this by imposing advance ticketing requirements or minimum stay requirements—conditions that would be difficult for the average business traveler to meet.
User-controlled price discrimination
While the conventional theory of price discrimination generally assumes that prices are set by the seller, there is a variant form in which prices are set by the buyer, such as in the form of pay what you want pricing. Such user-controlled price discrimination exploits similar ability to adapt to varying demand curves or individual price sensitivities, and may avoid the negative perceptions of price discrimination as imposed by a seller.
In matching markets, platforms internalize the revenue impacts to create cross-side effects. In turn, these cross-side effects differentiate price discrimination in matching intermediation from price discrimination in standard markets.
Counterexamples
Some pricing patterns appear to be price discrimination but are not.
Congestion pricing
Price discrimination only happens when the same product is sold at more than one price. Congestion pricing is not price discrimination. Peak and off-peak fares on a train are not the same product; some people have to travel during rush hour, and travelling off-peak is not equivalent to them.
Some companies have high fixed costs (like a train company, which owns a railway and rolling stock, or a restaurant, which has to pay for premises and equipment). If these fixed costs permit the company to additionally provide less-preferred products (like mid-morning meals or off-peak rail travel) at little additional cost, it can profit both seller and buyer to offer them at lower prices. Providing more product from the same fixed costs increases both producer and consumer surplus. This is not technically price discrimination (unlike, say, giving menus with higher prices to richer-looking customers while the poorer-looking ones get an ordinary menu).
If different prices are charged for products that only some consumers will see as equivalent, the differential pricing can be used to manage demand. For instance, airlines can use price discrimination to encourage people to travel at unpopular times (early in the morning). This helps avoid over-crowding and helps to spread out demand. The airline gets better use out of planes and airports, and can thus charge less (or profit more) than if it only flew peak hours.
See also
Geo (marketing)
Interstate Commerce Act of 1887
Marketing
Microeconomics
Outline of industrial organization
Value-based pricing
Pay what you want
Ramsey problem
Redlining
Resale price maintenance
Robinson–Patman Act
Sliding scale fees
Ticket resale
Yield management
References
External links
Price Discrimination and Imperfect Competition Lars Stole
Pricing Information Hal Varian.
Price Discrimination for Digital Goods Arun Sundararajan.
Price Discrimination Discussion piece from The Filter
Joelonsoftware's blog entry on Price Discrimination
Taken to the Cleaners? Steven Landsburg's explanation of Dry Cleaner pricing.
Pricing
Ethically disputed business practices
Monopoly (economics)
Imperfect competition |
469578 | https://en.wikipedia.org/wiki/Decision%20support%20system | Decision support system | A decision support system (DSS) is an information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.
While academics have perceived DSS as a tool to support decision making processes, DSS users see DSS as a tool to facilitate organizational processes. Some authors have extended the definition of DSS to include any system that might support decision making and some DSS include a decision-making software component; Sprague (1980) defines a properly termed DSS as follows:
DSS tends to be aimed at the less well structured, underspecified problem that upper level managers typically face;
DSS attempts to combine the use of models or analytic techniques with traditional data access and retrieval functions;
DSS specifically focuses on features which make them easy to use by non-computer-proficient people in an interactive mode; and
DSS emphasizes flexibility and adaptability to accommodate changes in the environment and the decision making approach of the user.
DSSs include knowledge-based systems. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, and personal knowledge, or business models to identify and solve problems and make decisions.
Typical information that a decision support application might gather and present includes:
inventories of information assets (including legacy and relational data sources, cubes, data warehouses, and data marts),
comparative sales figures between one period and the next,
projected revenue figures based on product sales assumptions.
History
The concept of decision support has evolved mainly from the theoretical studies of organizational decision making done at the Carnegie Institute of Technology during the late 1950s and early 1960s, and the implementation work done in the 1960s. DSS became an area of research of its own in the middle of the 1970s, before gaining in intensity during the 1980s.
In the middle and late 1980s, executive information systems (EIS), group decision support systems (GDSS), and organizational decision support systems (ODSS) evolved from the single user and model-oriented DSS. According to Sol (1987), the definition and scope of DSS have been migrating over the years: in the 1970s DSS was described as "a computer-based system to aid decision making"; in the late 1970s the DSS movement started focusing on "interactive computer-based systems which help decision-makers utilize data bases and models to solve ill-structured problems"; in the 1980s DSS should provide systems "using suitable and available technology to improve effectiveness of managerial and professional activities", and towards the end of 1980s DSS faced a new challenge towards the design of intelligent workstations.
In 1987, Texas Instruments completed development of the Gate Assignment Display System (GADS) for United Airlines. This decision support system is credited with significantly reducing travel delays by aiding the management of ground operations at various airports, beginning with O'Hare International Airport in Chicago and Stapleton Airport in Denver, Colorado. Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began broadening the realm of DSS. As the turn of the millennium approached, new Web-based analytical applications were introduced.
DSS also have a weak connection to the user interface paradigm of hypertext. Both the University of Vermont PROMIS system (for medical decision making) and the Carnegie Mellon ZOG/KMS system (for military and business decision making) were decision support systems which also were major breakthroughs in user interface research. Furthermore, although hypertext researchers have generally been concerned with information overload, certain researchers, notably Douglas Engelbart, have been focused on decision makers in particular.
The advent of more and better reporting technologies has seen DSS start to emerge as a critical component of management design. Examples of this can be seen in the intense amount of discussion of DSS in the education environment.
Applications
DSS can theoretically be built in any knowledge domain. One example is the clinical decision support system for medical diagnosis. There are four stages in the evolution of clinical decision support system (CDSS): the primitive version is standalone and does not support integration; the second generation supports integration with other medical systems; the third is standard-based, and the fourth is service model-based.
DSS is extensively used in business and management. Executive dashboards and other business performance software allow faster decision making, identification of negative trends, and better allocation of business resources. With a DSS, information from across an organization can be presented in the form of charts and graphs, i.e. in a summarized way, which helps management take strategic decisions. For example, one of the DSS applications is the management and development of complex anti-terrorism systems. Other examples include a bank loan officer verifying the credit of a loan applicant or an engineering firm that has bids on several projects and wants to know if it can be competitive with its costs.
A growing area for the application of DSS concepts, principles, and techniques is agricultural production and marketing for sustainable development. Agricultural DSSes began to be developed and promoted in the 1990s. For example, the DSSAT4 package (the Decision Support System for Agrotechnology Transfer), developed with financial support from USAID during the 1980s and 1990s, has allowed rapid assessment of several agricultural production systems around the world to facilitate decision-making at the farm and policy levels. Precision agriculture seeks to tailor decisions to particular portions of farm fields. There are, however, many constraints to the successful adoption of DSS in agriculture.
DSS is also prevalent in forest management, where the long planning horizon and the spatial dimension of planning problems demand specific requirements. All aspects of forest management, from log transportation and harvest scheduling to sustainability and ecosystem protection, have been addressed by modern DSSs. In this context, planning involves single or multiple management objectives related to the provision of goods and services—traded or non-traded—and is often subject to resource constraints. The Community of Practice of Forest Management Decision Support Systems provides a large repository of knowledge about the construction and use of forest decision support systems.
A specific example concerns the Canadian National Railway system, which tests its equipment on a regular basis using a decision support system. A problem faced by any railroad is worn-out or defective rails, which can result in hundreds of derailments per year. Under a DSS, the Canadian National Railway system managed to decrease the incidence of derailments at the same time other companies were experiencing an increase.
DSS has been used for risk assessment to interpret monitoring data from large engineering structures such as dams, towers, cathedrals, or masonry buildings. For instance, Mistral is an expert system to monitor dam safety, developed in the 1990s by Ismes (Italy). It gets data from an automatic monitoring system and performs a diagnosis of the state of the dam. Its first copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational 24/7/365. It has been installed on several dams in Italy and abroad (e.g., Itaipu Dam in Brazil), and on monuments under the name of Kaleidos. Mistral is a registered trademark of CESI. GIS has been successfully used since the 1990s in conjunction with DSS to show real-time risk evaluations on a map, based on monitoring data gathered in the area of the Val Pola disaster (Italy).
Components
Three fundamental components of a DSS architecture are:
the database (or knowledge base),
the model (i.e., the decision context and user criteria)
the user interface.
The users themselves are also important components of the architecture.
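As a toy illustration of how these components fit together (all data, names, and thresholds below are invented for the sketch, and Python is used only as convenient notation), a minimal DSS might look like:

# Toy decision support system: a knowledge base, a simple model, and a
# minimal user interface. All names and figures are illustrative.
KNOWLEDGE_BASE = {               # the database / knowledge base component
    "widget-a": {"stock": 120, "weekly_sales": 90},
    "widget-b": {"stock": 30,  "weekly_sales": 75},
}

def weeks_of_cover(item: dict) -> float:  # the model component
    return item["stock"] / max(item["weekly_sales"], 1)

def recommend(product: str) -> str:       # decision logic built on the model
    cover = weeks_of_cover(KNOWLEDGE_BASE[product])
    return "reorder now" if cover < 1.0 else "stock is sufficient"

if __name__ == "__main__":                # the user interface component
    for product in KNOWLEDGE_BASE:
        print(f"{product}: {recommend(product)}")

Here the dictionary plays the role of the database, weeks_of_cover is the model, and the final loop stands in for the user interface; a real DSS would replace each part with far richer components.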
Taxonomies
Using the relationship with the user as the criterion, Haettenschwiler differentiates passive, active, and cooperative DSS. A passive DSS is a system that aids the process of decision making, but that cannot bring out explicit decision suggestions or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative DSS allows for an iterative process between human and system towards the achievement of a consolidated solution: the decision maker (or its advisor) can modify, complete, or refine the decision suggestions provided by the system, before sending them back to the system for validation, and likewise the system again improves, completes, and refines the suggestions of the decision maker and sends them back to them for validation.
Another taxonomy for DSS, according to the mode of assistance, has been created by D. Power: he differentiates communication-driven DSS, data-driven DSS, document-driven DSS, knowledge-driven DSS, and model-driven DSS.
A communication-driven DSS enables cooperation, supporting more than one person working on a shared task; examples include integrated tools like Google Docs or Microsoft SharePoint Workspace.
A data-driven DSS (or data-oriented DSS) emphasizes access to and manipulation of a time series of internal company data and, sometimes, external data.
A document-driven DSS manages, retrieves, and manipulates unstructured information in a variety of electronic formats.
A knowledge-driven DSS provides specialized problem-solving expertise stored as facts, rules, procedures or in similar structures like interactive decision trees and flowcharts.
A model-driven DSS emphasizes access to and manipulation of a statistical, financial, optimization, or simulation model. Model-driven DSS use data and parameters provided by users to assist decision makers in analyzing a situation; they are not necessarily data-intensive. Dicodess is an example of an open-source model-driven DSS generator.
Using scope as the criterion, Power differentiates enterprise-wide DSS and desktop DSS. An enterprise-wide DSS is linked to large data warehouses and serves many managers in the company. A desktop, single-user DSS is a small system that runs on an individual manager's PC.
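As a hedged sketch of the model-driven category described above (the break-even model and its figures are hypothetical, not drawn from any cited system), the core of such a DSS is a small analytical model whose parameters the decision maker can vary:

# Illustrative model-driven DSS core: a break-even model whose parameters
# are supplied by the user. All figures are hypothetical.
def break_even_units(fixed_costs: float, price: float, unit_cost: float) -> float:
    """Units that must be sold before profit turns positive."""
    margin = price - unit_cost
    if margin <= 0:
        raise ValueError("price must exceed unit cost")
    return fixed_costs / margin

if __name__ == "__main__":
    # "What-if" analysis: the decision maker compares candidate prices.
    for candidate_price in (8.0, 10.0, 12.0):
        units = break_even_units(fixed_costs=50_000, price=candidate_price, unit_cost=6.0)
        print(f"price {candidate_price:5.2f} -> break even at {units:,.0f} units")

The value of the system lies in letting the user rerun the model under different assumptions rather than in the data it stores.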
Development frameworks
Similarly to other systems, DSS development requires a structured approach. Such a framework includes people, technology, and the development approach.
The Early Framework of Decision Support System consists of four phases:
Intelligence – Searching for conditions that call for decision;
Design – Developing and analyzing possible alternative actions of solution;
Choice – Selecting a course of action among those;
Implementation – Adopting the selected course of action in the decision situation.
DSS technology levels (of hardware and software) may include:
The specific DSS application that will be used by the user: the part of the system that allows the decision maker to analyze and act upon a particular problem area.
A DSS generator: a hardware/software environment that allows people to easily develop specific DSS applications. This level makes use of case tools or systems such as Crystal, Analytica and iThink.
DSS tools: lower-level hardware and software used to build DSS generators, including special languages, function libraries and linking modules.
An iterative developmental approach allows for the DSS to be changed and redesigned at various intervals. Once the system is designed, it will need to be tested and revised where necessary for the desired outcome.
Classification
There are several ways to classify DSS applications. Not every DSS fits neatly into one of the categories, but may be a mix of two or more architectures.
Holsapple and Whinston classify DSS into the following six frameworks: text-oriented DSS, database-oriented DSS, spreadsheet-oriented DSS, solver-oriented DSS, rule-oriented DSS, and compound DSS. A compound DSS is the most popular classification for a DSS; it is a hybrid system that includes two or more of the five basic structures.
The support given by DSS can be separated into three distinct, interrelated categories: Personal Support, Group Support, and Organizational Support.
DSS components may be classified as:
Inputs: Factors, numbers, and characteristics to analyze
User knowledge and expertise: Inputs requiring manual analysis by the user
Outputs: Transformed data from which DSS "decisions" are generated
Decisions: Results generated by the DSS based on user criteria
DSSs which perform selected cognitive decision-making functions and are based on artificial intelligence or intelligent agent technologies are called intelligent decision support systems (IDSS).
The nascent field of decision engineering treats the decision itself as an engineered object, and applies engineering principles such as design and quality assurance to an explicit representation of the elements that make up a decision.
See also
Argument map
Cognitive assets (organizational)
Decision theory
Enterprise decision management
Expert system
Judge–advisor system
Knapsack problem
Land allocation decision support system
List of concept- and mind-mapping software
Morphological analysis (problem-solving)
Online deliberation
Participation (decision making)
Predictive analytics
Project management software
Self-service software
Spatial decision support system
Strategic planning software
References
Further reading
Marius Cioca, Florin Filip (2015). Decision Support Systems - A Bibliography 1947-2007.
Borges, J.G, Nordström, E.-M. Garcia Gonzalo, J. Hujala, T. Trasobares, A. (eds). (2014). " Computer-based tools for supporting forest management. The experience and the expertise world-wide. Dept of Forest Resource Management, Swedish University of Agricultural Sciences. Umeå. Sweden.
Delic, K.A., Douillet, L. and Dayal, U. (2001) "Towards an architecture for real-time decision support systems:challenges and solutions.
Diasio, S., Agell, N. (2009) "The evolution of expertise in decision support technologies: A challenge for organizations," cscwd, pp. 692–697, 13th International Conference on Computer Supported Cooperative Work in Design, 2009. https://web.archive.org/web/20121009235747/http://www.computer.org/portal/web/csdl/doi/10.1109/CSCWD.2009.4968139
Gadomski, A.M. et al.(2001) "An Approach to the Intelligent Decision Advisor (IDA) for Emergency Managers", Int. J. Risk Assessment and Management, Vol. 2, Nos. 3/4.
Ender, Gabriela; E-Book (2005–2011) about the OpenSpace-Online Real-Time Methodology: Knowledge-sharing, problem solving, results-oriented group dialogs about topics that matter with extensive conference documentation in real-time. Download https://web.archive.org/web/20070103022920/http://www.openspace-online.com/OpenSpace-Online_eBook_en.pdf
Matsatsinis, N.F. and Y. Siskos (2002), Intelligent support systems for marketing decisions, Kluwer Academic Publishers.
Omid A.Sianaki, O Hussain, T Dillon, AR Tabesh - ... Intelligence, Modelling and Simulation (CIMSiM), 2010, Intelligent decision support system for including consumers' preferences in residential energy consumption in smart grid
Power, D. J. (2000). Web-based and model-driven decision support systems: concepts and issues. in proceedings of the Americas Conference on Information Systems, Long Beach, California.
Sauter, V. L. (1997). Decision support systems: an applied managerial approach. New York, John Wiley.
Silver, M. (1991). Systems that support decision makers: description and analysis. Chichester ; New York, Wiley.
Information systems
Knowledge engineering
Business software |
25814967 | https://en.wikipedia.org/wiki/Record%20%28software%29 | Record (software) | Record is a music software program developed by Swedish software developers Propellerhead Software. Designed for recording, arrangement and mixing, it emulates a recording studio, with a mixing desk, a rack of virtual instruments and effects and an audio and MIDI sequencer. Record can be used either as a complete virtual recording studio in itself, or together with Propellerhead Software's Reason.
General
Record was released on September 9, 2009, after a two-month trial period, open to users who signed up at www.recordyou.com.
The program's design mimics an SSL 9000k mixing desk and a rack into which users can insert virtual devices such as instruments and effects processors. These modules can be controlled from Record's built-in MIDI and audio sequencer or from other sequencing applications via Propellerhead's ReWire protocol. Hotkeys are used for switching between these three main areas.
Recording of external audio sources is handled in Record's built-in sequencer, which includes tools for comping together multiple takes into a single recording, and automatic timestretch when the tempo is changed.
Like Propellerhead's Reason, Record's interface includes a Toggle Rack command, which flips the rack around to display the devices from the rear. Here the user can route virtual audio and CV cables from one piece of equipment to another. This cable layout enables the creation of complex effects chains and allows devices to modulate one another.
In reviews, Record has been praised for its stability, the quality of the time stretch algorithm and built-in mixer, as well as the seamless integration with Reason, while it has been criticized for its lack of support for third party plug-ins.
In 2011, Record was merged into Reason 6, and the standalone version was discontinued.
Instruments and effects
Record contains a limited set of instruments and effects that can be expanded with Propellerhead's Reason. Included with Record are:
Combinator: combines multiple modules into one, to create new instruments and effect chains
ID-8: a sample playback device featuring drums, pianos, bass, strings, etc.
MClass Equalizer: a four-band EQ
MClass Stereo Imager: a two band stereo imaging processor
MClass Compressor: single band compressor
MClass Maximizer: limiter device
Line 6 Guitar Amp: a virtual version of Line 6's guitar POD, emulating three guitar amplifiers and speaker cabinets
Line 6 Bass Amp: a virtual version of Line 6's bass POD, emulating two bass amplifiers and speaker cabinets
Neptune Pitch Correction & Voice Synth: a hybrid effect and instrument device for correcting pitch and adding synthesized harmonies
RV7000 Advanced Reverb: a reverb with nine reverb algorithms, including Small Space, Room, Hall, Arena, Plate, Spring, Echo, Multitap and Reverse
Scream 4 Distortion: a distortion module with ten different distortion models: Overdrive, Distortion, Fuzz, Tube, Tape, Feedback, Modulate, Warp, Digital and Scream
DDL-1 Digital Delay Line: a simple delay effect
CF-101 Chorus/Flanger: a chorus and flanger effect
Additional Interface
Pressing the Tab key on the computer keyboard reveals the back side of the Record rack, where the user can access additional parameters for the rack-mounted devices, including signal cables for audio and CV. This allows users to virtually route the cables connecting the devices in the rack, as in a traditional hardware-based studio. For example, a device's output can be split into two signal chains for different processing and then connected to different mixer channels. Users can choose where to draw the line between simplicity and precision, allowing Record to remain useful at various levels of knowledge and ambition on the user's part.
Record 1.5
A new version was released on 25 August 2010. New features include Blocks, a non-linear sequencer mode for arranging and writing music, and the Neptune device, a pitch-editing tool and voice synth designed for vocals. Other features include: use of multiple USB keyboards, tap-tempo, and reverse audio, as well as integration with Reason 5.
See also
Reason
Ableton Live
Cubase
FL Studio
Logic Pro
Pro Tools
REAPER
Mixcraft
List of music software
References
External links
Record web page
Electronic Musician review
Record review from musicradar.com
Digital audio workstation software |
28376120 | https://en.wikipedia.org/wiki/CodeCharge%20Studio | CodeCharge Studio | CodeCharge Studio is a rapid application development (RAD) and integrated development environment (IDE) for creating database-driven web applications. It is a code generator and templating engine that separates the presentation layer from the coding layer, with the aim of allowing designers and programmers to work cohesively in a web application (the model-view-controller design pattern).
CodeCharge is the first product released by Yes Software, Inc., after two years of development.
Software
CodeCharge utilizes point-and-click wizards for creating record and search forms, grids, and editable grids without the need for programming. The databases it supports include MySQL, MS SQL Server, MS Access, PostgreSQL, and Oracle, as well as any other database that supports web connectivity. CodeCharge can export code to all major programming languages, such as ASP.NET, ASP, Java, ColdFusion, PHP, and Perl.
CodeCharge employs an interactive user interface (UI) designed for the creation of web applications. When generating code, CodeCharge automatically structures the code, using naming conventions and comments to describe the code's purpose. Moreover, CodeCharge keeps the application separate from the code it generates, so that projects may be converted to any language at any time.
Without additional programming, a CodeCharge-generated project is not a routed web site (where everything is routed through, for example, index.asp); rather, every page is accessible by reference to its own name or URL.
Technologies
The following technologies are used once the generated application is ready and running.
OOP - The generated application is object-oriented. Every structural element, such as the database connection, grid, navigation bar, and the visible page itself, is an object.
The application uses the Microsoft .NET 2 Framework and will also install when the .NET 3.5 framework is detected on the host computer.
Templating - CodeCharge uses HTML template pages to generate the visible web site. Page templates may be previewed before being made "live." Each page has an xxxx.html template file, a corresponding xxxx.asp (xxxx.php, etc.) code file, and a separate xxxx_events.asp (xxxx_events.php, etc.) file for server-side events.
Customization - CodeCharge provides its users a standard way to set up custom code for handling events not fully addressed by the built-in features.
Application generating technologies
PHP
Perl
.NET
Java
ASP
ColdFusion
XML
Reception
In 2003, regarding the original version of CodeCharge Studio, Arbi Arzoumani of PHP Architect wrote:
Kevin Yank of SitePoint Tech Times was impressed "by the many ways in which experienced developers could draw added power out of the software, instead of being limited by it, as is the case with most RAD tools for Web development."
In his review of CodeCharge Studio 2.0, Troy Dreier wrote in Intranet Journal, "CodeCharge Studio [allows] Web application developers [to] shave literally months off their development times."
CodeCharge Studio 3.0 received a rating of 3.5 out of 5 from Peter B. MacIntyre of php|architect.
See also
Comparison of web frameworks
Web template system
formats of web applications
References
External links
List of third-party product reviews of CodeCharge
Official Documentation
Official User Forums
Community website
2006 CodeCharge Studio Awards winner (website also done in CodeCharge Studio)
Integrated development environments
Template engines
Web development software
Web frameworks
C Sharp software
PHP software |
2523232 | https://en.wikipedia.org/wiki/Bulletproof%20hosting | Bulletproof hosting | Bulletproof hosting (BPH) is a technical infrastructure service provided by a web hosting provider that is resilient to complaints of illicit activities, and which serves criminal actors as a basic building block for streamlining various cyberattacks. BPH providers allow online gambling, illegal pornography, botnet command and control servers, spam, copyrighted materials, hate speech and misinformation, despite takedown court orders and law enforcement subpoenas, permitting such material in their acceptable use policies. BPH providers usually operate in jurisdictions which have lenient laws against such conduct. Most non-BPH service providers prohibit transferring materials over their network that would violate their terms of service and the local laws of the incorporated jurisdiction, and oftentimes any abuse report would result in a takedown to avoid their autonomous system's IP block being blacklisted by other providers and by Spamhaus.
History
BPH first became the subject of research in 2006, when security researchers from VeriSign revealed the Russian Business Network (RBN), the internet service provider that hosted a phishing group responsible for about $150 million in phishing-related scams. RBN also became known for identity theft, child pornography, and botnets. In 2008, McColo, the web hosting provider responsible for more than 75% of global spam, was shut down and de-peered by Global Crossing and Hurricane Electric after a public disclosure by then-Washington Post reporter Brian Krebs on the newspaper's Security Fix blog.
Difficulties
Since abuse reports to a BPH provider are disregarded, in most cases the whole IP block ("netblock") assigned to the BPH provider's autonomous system will be blacklisted by other providers and third-party spam filters. BPH providers also have difficulty finding network peering points for establishing Border Gateway Protocol sessions, since routing a BPH provider's network can affect the reputation of upstream autonomous systems and transit providers. This makes it difficult for BPH services to provide stable network connectivity, and in extreme cases they can be completely de-peered; therefore BPH providers evade AS reputation-based detection systems such as BGP Ranking and ASwatch through unconventional methodologies.
Web hosting reseller
According to a report, due to these mounting difficulties, BPH providers establish reseller relationships with lower-end hosting providers; although these providers are not complicit in supporting the illegitimate activities, they tend to be lenient on abuse reports and do not actively engage in fraud detection. BPH operations therefore conceal themselves behind lower-end hosting providers, leveraging their better reputation while operating both bulletproof and legitimate reselling through the sub-allocated network blocks. However, if the BPH services are caught, the providers migrate their clients to newer internet infrastructure—a newer lower-end AS, or IP space—effectively making the blacklisted IP addresses of the previous AS ephemeral; the criminal conduct then continues after the resource records of the listening services' DNS servers are modified to point to the newer IP addresses belonging to the current AS's IP space. Due to privacy concerns, the customary modes of contact for BPH providers include ICQ, Skype, and XMPP (or Jabber).
Admissible abuses
Most BPH providers promise immunity against copyright infringement and court-ordered takedown notices, notably the Digital Millennium Copyright Act (DMCA), the Electronic Commerce Directive (ECD) and law enforcement subpoenas. They also allow users to operate phishing, scams (such as high-yield investment programs), botnet masters and unlicensed online pharmacy websites. In these cases, the BPH providers (known as "offshore providers") operate in jurisdictions which have no extradition treaty or mutual legal assistance treaty (MLAT) signed with the Five Eyes countries, particularly the United States. However, most BPH providers have a zero-tolerance policy towards child pornography and terrorism, although a few allow cold storage of such material provided it is not openly accessible via the public internet.
Prevalent jurisdictions for incorporation and location of the data centers for BPH providers include Russia (being more permissive), Ukraine, China, Moldova, Romania, Bulgaria, Belize, Panama and the Seychelles.
Impacts
BPH services act as vital network infrastructure providers for activities such as cybercrime and online illicit economies, and the well-established working model of the cybercrime economy revolves around tool development and skill-sharing among peers. The development of exploits, such as zero-day vulnerabilities, is done by a very small community of highly skilled actors, who encase them in convenient tools that are usually bought by low-skilled actors (known as script kiddies), who in turn make use of BPH providers to carry out cyberattacks, usually targeting low-profile, unremarkable network services and individuals. According to a report produced by Carnegie Mellon University for the United States Department of Defense, low-profile amateur actors are also capable of causing harmful consequences, especially to small businesses, inexperienced internet users, and small servers.
Criminal actors also run specialized computer programs known as port scanners on BPH providers, which scan the entire IPv4 address space for open ports, the services running on those open ports, and the versions of their service daemons, searching for vulnerable versions to exploit. One notable vulnerability scanned for in this way is Heartbleed, which affected millions of internet servers. Furthermore, BPH clients also host click fraud, adware (such as DollarRevenue), and money laundering recruitment sites, which lure inexperienced internet users into honey traps and cause them financial losses, while the illicit sites are kept online unrestricted despite court orders and takedown attempts by law enforcement.
Counterinitiatives against BPH
The Spamhaus Project is an international nonprofit organization that monitors cyber threats and provides real-time blacklist reports (known as the "Badness Index") on malicious ASs, netblocks, and registrars that are involved in spam, phishing, or cybercrime activities. The Spamhaus team works closely with law enforcement agencies such as the National Cyber-Forensics and Training Alliance (NCFTA) and the Federal Bureau of Investigation (FBI), and the data compiled by Spamhaus is used by the majority of ISPs, email service providers, corporations, educational institutions, governments and uplink gateways of military networks. Spamhaus publishes various data feeds that list the netblocks of criminal actors, designed for use by gateways, firewalls and routing equipment to filter out (or "nullroute") traffic originating from those netblocks:
Spamhaus Don't Route Or Peer List (DROP) lists netblocks allocated by an established Regional Internet Registry (RIR) or National Internet Registry (NIR) that are used by criminal actors; it does not include abused IP address space sub-allocated from the netblocks of a reputable AS.
Spamhaus Domain Block List (DBL) lists domain names with poor reputation in DNSBL format.
Spamhaus Botnet Controller List (BCL) lists single IPv4 addresses of botnet masters.
Notable closed services
The following are some of the notable defunct BPH providers:
CyberBunker, taken down in September 2019.
McColo, taken down in November 2008.
Russian Business Network (RBN), taken down in November 2007.
Atrivo, taken down in September 2008.
3FN, taken down by the FTC in June 2009.
Proxiez, taken down in May 2010.
See also
Freedom Hosting
Fast flux
Security theater
References
Bibliography
Web hosting
Spamming
Cybercrime |
22451327 | https://en.wikipedia.org/wiki/Result%20Group | Result Group | Result Group sold a specialist equipment management software application, rentalresult, to companies renting and managing assets (e.g., cranes, tools, heavy equipment, aerial equipment, modular space, computer hardware, and test & measurement equipment) from 1994. It was privately owned and incorporated in Delaware and the United Kingdom. The company was acquired by Wynne Systems Inc of Irvine, CA in December 2015, and the RentalResult product continues as a brand under the Wynne umbrella.
Markets
The company had customers in 26 countries across a number of service industries, including asset management, equipment rental, tool rental, accommodation rental, aerial rental, computer and electronics rental, crane rental, operated equipment (wet rental), and internal rental within the construction and oilfield services industries, supported from offices in Elland (UK), Phoenix, AZ, and Atlanta, GA.
Technology
Result Group provides a Java-based, Java EE-compliant application which brings in technologies such as asset tracking systems, mobile apps (iOS and Android), portals and touchscreen technology. rentalresult runs on a web services-based SOA platform, providing connectivity to other third-party applications and allowing for custom development of interactive modules.
Software
The rentalresult software manages aspects of rental and equipment management including rental order processing & fulfillment, sales management, asset lifecycle & inventory, maintenance and field-based servicing, financials and CRM. The software focuses on utilization, charging and billing methodologies, re-rental management, accessory & attachment management, pricing and discounting strategies including on-demand pricing and full financial asset management.
Specific modules also exist for operated equipment and timesheet management, used primarily by heavy equipment rental companies in the middle and far east, and by specialist rental companies; for example cranes, concrete pumps, large aerial equipment, in Europe and North America.
Business intelligence
Reporting is provided through IBM Cognos ReportNet, which is incorporated within the rentalresult licence. The system provides standard reports as well as allowing customers to add their own. Reports can be provided in listing format, or as Excel, PDF, HTML, XML or other formats. The tools also allow dashboards and scorecards to be created to monitor specific details. rentalresult also provides an Analytics package concentrating on metrics and KPI data relevant to the rental industry and equipment management.
The rentalresult application can also be referred to as a Tier 2 ERP system, with 90% of Result Group's customers using its fully integrated financial software.
rentalresult also operates a blog focusing on the rental industry.
References
Software companies of the United Kingdom
British companies established in 1994
Companies based in Elland
1994 establishments in England |
64707757 | https://en.wikipedia.org/wiki/2019%E2%80%9320%20Little%20Rock%20Trojans%20women%27s%20basketball%20team | 2019–20 Little Rock Trojans women's basketball team | The 2019–20 Little Rock Trojans women's basketball team represented the University of Arkansas at Little Rock during the 2019–20 NCAA Division I women's basketball season. The Trojans, led by seventeenth-year head coach Joe Foley, played their home games at the Jack Stephens Center and were members of the Sun Belt Conference. They finished the season 12–19, 9–9 in Sun Belt play, tying for fifth place with South Alabama. In the Sun Belt tournament, the fifth-seeded Trojans defeated No. 8 Appalachian State 48–47 before falling to No. 4 Louisiana 49–46. Shortly after the Trojans were eliminated, the conference canceled the remainder of the tournament due to the COVID-19 pandemic, which was followed by the NCAA canceling all post-season play.
Preseason
Sun Belt coaches poll
On October 30, 2019, the Sun Belt released their preseason coaches poll with the Trojans predicted to finish as champions of the conference.
Sun Belt Preseason All-Conference team
1st team
Kyra Collier – SR, Guard
2nd team
Tori Lasker – JR, Guard
Roster
Schedule
Non-conference regular season
Sun Belt Conference regular season
Sun Belt Women's Tournament
See also
2019–20 Little Rock Trojans men's basketball team
References
Little Rock Trojans women's basketball seasons
Little Rock |
2594628 | https://en.wikipedia.org/wiki/Inotify | Inotify | inotify (inode notify) is a Linux kernel subsystem created by John McCutchan, which monitors changes to the filesystem, and reports those changes to applications. It can be used to automatically update directory views, reload configuration files, log changes, backup, synchronize, and upload. The inotifywait and inotifywatch commands allow using the inotify subsystem from the command line. One major use is in desktop search utilities like Beagle, where its functionality permits reindexing of changed files without scanning the filesystem for changes every few minutes, which would be very inefficient.
inotify replaced an earlier facility, dnotify, which had similar goals. It was merged into the Linux kernel mainline in kernel version 2.6.13, released on August 29, 2005; later kernel versions included further improvements. The required library interfaces were added into the GNU C Library (glibc) in its version 2.4, released in March 2006, while the support for inotify was completed in glibc version 2.5, released in September 2006.
Limitations
Limitations imposed by inotify include the following:
Inotify does not support recursively watching directories, meaning that a separate inotify watch must be created for every subdirectory.
Inotify does report some but not all events in sysfs and procfs.
Notification via inotify requires the kernel to be aware of all relevant filesystem events, which is not always possible for networked filesystems such as NFS where changes made by one client are not immediately broadcast to other clients.
Rename events are not handled directly; i.e., inotify issues two separate events that must be examined and matched in a context of potential race conditions.
History
July 2004: the first release announcement
August 29, 2005: Linux kernel version 2.6.13 released, containing merged inotify code
March 2006: GNU C Library (glibc) version 2.4 released, bringing initial inotify support
September 2006: Glibc version 2.5 released, bringing complete inotify support
Advantages over dnotify
There are a number of advantages when using inotify when compared to the older dnotify API that it replaced. With dnotify, a program had to use one file descriptor for each directory that it was monitoring. This can become a bottleneck since the limit of file descriptors per process could be reached. Later, fanotify was created to overcome this issue. The use of file descriptors along with dnotify also proved to be a problem when using removable media. Devices could not be unmounted since file descriptors kept the resource busy.
Another drawback of dnotify is the level of granularity, since programmers can only monitor changes at the directory level. To access detailed information about the changes that occur when a notification message is sent, a stat structure must be used; this is considered a necessary evil, in that a cache of stat structures has to be maintained and, for every new stat structure generated, a comparison is run against the cached one.
The inotify API uses fewer file descriptors, allowing programmers to use the established select and poll interface, rather than the signal notification system used by dnotify. This also makes integration with existing select- or poll-based libraries (like GLib) easier.
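The following minimal C sketch (an illustrative example only, not taken verbatim from the kernel documentation; the watched path and event mask are arbitrary choices) shows this single-descriptor model: one inotify instance is created, a single directory is added to it, and events are then read back as a stream of inotify_event records, with a rename arriving as an IN_MOVED_FROM/IN_MOVED_TO pair that shares the same cookie value.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    /* One inotify instance uses a single file descriptor, however many
       watches are later added (each subdirectory still needs its own
       watch, since recursion is not supported). */
    int fd = inotify_init();
    if (fd < 0) {
        perror("inotify_init");
        return EXIT_FAILURE;
    }

    /* "/tmp/watched" is an arbitrary example path. */
    int wd = inotify_add_watch(fd, "/tmp/watched",
                               IN_CREATE | IN_DELETE | IN_MOVED_FROM | IN_MOVED_TO);
    if (wd < 0) {
        perror("inotify_add_watch");
        return EXIT_FAILURE;
    }

    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    ssize_t len = read(fd, buf, sizeof buf);      /* blocks until events arrive */
    for (char *p = buf; len > 0 && p < buf + len; ) {
        struct inotify_event *ev = (struct inotify_event *) p;
        /* A rename is reported as IN_MOVED_FROM followed by IN_MOVED_TO;
           the two events carry the same cookie value and must be matched
           by the application itself. */
        printf("mask=0x%x cookie=%u name=%s\n",
               (unsigned) ev->mask, (unsigned) ev->cookie,
               ev->len ? ev->name : "");
        p += sizeof(struct inotify_event) + ev->len;
    }

    close(fd);
    return EXIT_SUCCESS;
}

The same descriptor can be handed to select or poll alongside other descriptors, which is what makes integration with existing event loops straightforward.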
See also
File Alteration Monitor (SGI)
Gamin (Linux, FreeBSD)
DMAPI
kqueue (FreeBSD)
FSEvents (macOS)
References
External links
Kernel Korner an introduction to inotify by Robert Love (2005)
LWN Article on Inotify watching filesystem events with inotify (partly out of date)
IBM Article monitoring Linux filesystem events with inotify (6 September 2008).
Filesystem notification, part 1: An overview of dnotify and inotify an LWN.net article by Michael Kerrisk (2014)
Linux kernel features |
5309 | https://en.wikipedia.org/wiki/Software | Software | Software is a collection of instructions that tell a computer how to work. This is in contrast to hardware, from which the system is built and which actually performs the work.
At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example displaying some text on a computer screen; causing state changes which should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction, or is interrupted by the operating system. , most personal computers, smartphone devices and servers have processors with multiple execution units or multiple processors performing computation together, and computing has become a much more concurrent activity than in the past.
The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler or an interpreter or a combination of the two. Software may also be written in a low-level assembly language, which has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.
History
An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer.
The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1935 essay, On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computer and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to development of software. Prior to 1946, software was not yet the programs stored in the memory of stored-program digital computers, as we now understand it; the first electronic computing devices were instead rewired in order to "reprogram" them.
In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the OED's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum.
Types
On virtually all computer platforms, software can be grouped into a few broad categories.
Purpose, or domain of use
Based on the goal, computer software can be divided into:
Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large—see list of software.
System software manages hardware behaviour, as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed for providing a platform for running application software, and it includes the following:
Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has one operating system.
Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at minimum at least one input device and at least one output device, a computer typically needs more than one device driver.
Utilities are computer programs designed to assist users in the maintenance and care of their computers.
Malicious software, or malware, is software that is developed to harm or disrupt computers. Malware is closely associated with computer-related crimes, though some malicious programs may have been designed as practical jokes.
Nature or domain of execution
Desktop applications such as web browsers and Microsoft Office, as well as smartphone and tablet applications (called "apps").
JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin.
Server software, including:
Web applications, which usually run on the web server and output dynamically generated web pages to web browsers, using e.g. PHP, Java, ASP.NET, or even JavaScript that runs on the server. In modern times these commonly include some JavaScript to be run in the web browser as well, in which case they typically run partly on the server, partly in the web browser.
Plugins and extensions are software that extends or modifies the functionality of another piece of software, and require that software be used in order to function.
Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone). In the embedded system context there is sometimes no clear distinction between the system software and the application software. However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application which is always run).
Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it is actually a lower level than machine code. It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it.
Programming tools
Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software.
Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE.
Topics
Architecture
People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Platform software The platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software.
Application software Application software is what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
User-written software End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages, and what has been added by co-workers.
Execution
Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
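As a minimal illustration of this point (a hedged C sketch, not part of the original text; the structure and function names are invented for the example), passing a large record to a function by value copies every byte of it, whereas passing a pointer moves only an address and lets the function update the original data in place:

#include <stdio.h>

/* A deliberately large record: copying it moves every byte. */
struct record {
    char payload[1024];
    int  counter;
};

/* Pass by value: the whole structure is copied; the caller's copy is untouched. */
static void process_copy(struct record r)      { r.counter++; }

/* Pass by pointer: only an address is moved; the original is updated in place. */
static void process_in_place(struct record *r) { r->counter++; }

int main(void)
{
    struct record r = { .payload = "example", .counter = 0 };
    process_copy(r);        /* costly copy, and the increment is lost */
    process_in_place(&r);   /* cheap, r.counter becomes 1 */
    printf("%d\n", r.counter);
    return 0;
}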
Quality and reliability
Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs", which are often discovered during alpha and beta testing. Software is often also a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs.
Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function together much more easily.
License
The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.
Proprietary software can be divided into two types:
freeware, which includes the category of "free trial" software or "freemium" software (in the past, the term shareware was often used for free trial/freemium software). As the name suggests, freeware can be used for free, although in the case of free trials or freemium software, this is sometimes only true for a limited period of time or with limited functionality.
software available for a fee, which can only be legally used on purchase of a license.
Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software.
Patents
Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code.
Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents.
Design and implementation
Design and implementation of software varies depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality.
Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides like GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them.
Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software.
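As an illustration, the quicksort algorithm mentioned above can be written in a few lines of C (a minimal sketch using the Lomuto partition scheme, shown only as an example and not drawn from any particular library):

#include <stdio.h>

/* In-place quicksort of an integer array, using the last element as pivot. */
static void quicksort(int a[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {                      /* move smaller elements left */
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;       /* place the pivot */
    quicksort(a, lo, i - 1);                     /* sort the two halves */
    quicksort(a, i + 1, hi);
}

int main(void)
{
    int data[] = { 5, 2, 9, 1, 7 };
    quicksort(data, 0, 4);
    for (int k = 0; k < 5; k++)
        printf("%d ", data[k]);                  /* prints: 1 2 5 7 9 */
    return 0;
}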
Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.
A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist such as "coder" and "hacker" – although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems.
See also
Computer program
Independent software vendor
Open-source software
Outline of software
Software asset management
Software release life cycle
References
Sources
External links
Mathematical and quantitative methods (economics) |
791 | https://en.wikipedia.org/wiki/Asteroid | Asteroid | An asteroid is a minor planet of the inner Solar System. Historically, these terms have been applied to any astronomical object orbiting the Sun that did not resolve into a disc in a telescope and was not observed to have characteristics of an active comet such as a tail. As minor planets in the outer Solar System were discovered that were found to have volatile-rich surfaces similar to comets, these came to be distinguished from the objects found in the main asteroid belt. Thus the term "asteroid" now generally refers to the minor planets of the inner Solar System, including those co-orbital with Jupiter. Larger asteroids are often called planetoids.
Overview
Millions of asteroids exist: many are shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. The vast majority of known asteroids orbit within the main asteroid belt located between the orbits of Mars and Jupiter, or are co-orbital with Jupiter (the Jupiter trojans). However, other orbital families exist with significant populations, including the near-Earth objects. Individual asteroids are classified by their characteristic spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These were named after and are generally identified with carbon-rich, metallic, and silicate (stony) compositions, respectively. The sizes of asteroids vary greatly; the largest, Ceres, is almost across and massive enough to qualify as a dwarf planet.
Asteroids are somewhat arbitrarily differentiated from comets and meteoroids. In the case of comets, the difference is one of composition: while asteroids are mainly composed of mineral and rock, comets are primarily composed of dust and ice. Furthermore, asteroids formed closer to the sun, preventing the development of cometary ice. The difference between asteroids and meteoroids is mainly one of size: meteoroids have a diameter of one meter or less, whereas asteroids have a diameter of greater than one meter. Finally, meteoroids can be composed of either cometary or asteroidal materials.
Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye, and this is only in very dark skies when it is favorably positioned. Rarely, small asteroids passing close to Earth may be visible to the naked eye for a short time. , the Minor Planet Center had data on 930,000 minor planets in the inner and outer Solar System, of which about 545,000 had enough information to be given numbered designations.
The United Nations declared 30 June as International Asteroid Day to educate the public about asteroids. The date of International Asteroid Day commemorates the anniversary of the Tunguska asteroid impact over Siberia, Russian Federation, on 30 June 1908.
In April 2018, the B612 Foundation reported "It is 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
Discovery
The first asteroid to be discovered, Ceres, was originally considered to be a new planet. This was followed by the discovery of other similar bodies, which, with the equipment of the time, appeared to be points of light, like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term "asteroid", coined in Greek as ἀστεροειδής, or asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the nineteenth century, the terms "asteroid" and "planet" (not always qualified as "minor") were still used interchangeably.
Discovery timeline:
10 by 1849
1 Ceres, 1801
2 Pallas 1802
3 Juno 1804
4 Vesta 1807
5 Astraea 1845
in 1846, planet Neptune was discovered
6 Hebe July 1847
7 Iris August 1847
8 Flora October 1847
9 Metis 25 April 1848
10 Hygiea 12 April 1849 tenth asteroid discovered
100 asteroids by 1868
1,000 by 1921
10,000 by 1989
100,000 by 2005
1,000,000 by 2020
Historical methods
Asteroid discovery methods have dramatically improved over the past two centuries.
In the last years of the 18th century, Baron Franz Xaver von Zach organized a group of 24 astronomers to search the sky for the missing planet predicted at about 2.8 AU from the Sun by the Titius-Bode law, partly because of the discovery, by Sir William Herschel in 1781, of the planet Uranus at the distance predicted by the law. This task required that hand-drawn sky charts be prepared for all stars in the zodiacal band down to an agreed-upon limit of faintness. On subsequent nights, the sky would be charted again and any moving object would, hopefully, be spotted. The expected motion of the missing planet was about 30 seconds of arc per hour, readily discernible by observers.
The first object, Ceres, was not discovered by a member of the group, but rather by accident in 1801 by Giuseppe Piazzi, director of the observatory of Palermo in Sicily. He discovered a new star-like object in Taurus and followed the displacement of this object during several nights. Later that year, Carl Friedrich Gauss used these observations to calculate the orbit of this unknown object, which was found to be between the planets Mars and Jupiter. Piazzi named it after Ceres, the Roman goddess of agriculture.
Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered over the next few years, with Vesta found in 1807. After eight more years of fruitless searches, most astronomers assumed that there were no more and abandoned any further searches.
However, Karl Ludwig Hencke persisted, and began searching for more asteroids in 1830. Fifteen years later, he found 5 Astraea, the first new asteroid in 38 years. He also found 6 Hebe less than two years later. After this, other astronomers joined in the search and at least one new asteroid was discovered every year after that (except the wartime year 1945). Notable asteroid hunters of this early era were J.R. Hind, A. de Gasparis, R. Luther, H.M.S. Goldschmidt, J. Chacornac, J. Ferguson, N.R. Pogson, E.W. Tempel, J.C. Watson, C.H.F. Peters, A. Borrelly, J. Palisa, the Henry brothers and A. Charlois.
In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, some calling them "vermin of the skies", a phrase variously attributed to E. Suess and E. Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named.
Manual methods of the 1900s and modern reporting
Until 1998, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope, or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. Any body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations.
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number (example: ).
The last step of discovery is to send the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union.
Computerized methods
There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth (see Earth-crosser asteroids). The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens. Various asteroid deflection strategies have been proposed, as early as the 1960s.
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact.
Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across.
All these considerations helped spur the launch of highly efficient surveys that consist of charge-coupled device (CCD) cameras and computers directly connected to telescopes. , it was estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter had been discovered. A list of teams using such systems includes:
Lincoln Near-Earth Asteroid Research (LINEAR)
Near-Earth Asteroid Tracking (NEAT)
Spacewatch
Lowell Observatory Near-Earth-Object Search (LONEOS)
Catalina Sky Survey (CSS)
Pan-STARRS
NEOWISE
Asteroid Terrestrial-impact Last Alert System (ATLAS)
Campo Imperatore Near-Earth Object Survey (CINEOS)
Japanese Spaceguard Association
Asiago-DLR Asteroid Survey (ADAS)
, the LINEAR system alone has discovered 147,132 asteroids. Among all the surveys, 19,266 near-Earth asteroids have been discovered including almost 900 more than in diameter.
Terminology
Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. Beech and Steel's 1995 paper proposed a meteoroid definition including size limits. The term "asteroid", from the Greek word for "star-like", never had a formal definition, with the broader term minor planet being preferred by the International Astronomical Union.
However, following the discovery of asteroids below ten meters in size, Rubin and Grossman's 2010 paper revised the previous definition of meteoroid to objects between 10 µm and 1 meter in size in order to maintain the distinction between asteroids and meteoroids. The smallest asteroids discovered (based on absolute magnitude H) both have an estimated size of about 1 meter.
In 2006, the term "small Solar System body" was also introduced to cover both most minor planets and comets. Other languages prefer "planetoid" (Greek for "planet-like"), and this term is occasionally used in English especially for larger minor planets such as the dwarf planets as well as an alternative for asteroids since they are not star-like. The word "planetesimal" has a similar meaning, but refers specifically to the small building blocks of the planets that existed when the Solar System was forming. The term "planetule" was coined by the geologist William Daniel Conybeare to describe minor planets, but is not in common use. The three largest objects in the asteroid belt, Ceres, Pallas, and Vesta, grew to the stage of protoplanets. Ceres is a dwarf planet, the only one in the inner Solar System.
When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until "small Solar System body" was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near-surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; most "asteroids" with notably eccentric orbits are probably dormant or extinct comets.
For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, Chiron in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few such as Hidalgo ventured far beyond Jupiter for part of their orbit. Those located between the orbits of Mars and Jupiter were known for many years simply as The Asteroids. When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids, though there was debate over whether they should be considered asteroids or as a new type of object. Then, when the first trans-Neptunian object (other than Pluto), Albion, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. These inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids.
The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.) Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.
The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term "asteroid" to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.
When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets – those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally, the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, at about across, has been placed in the dwarf planet category.
Formation
It is thought that planetesimals in the asteroid belt evolved much like the rest of the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres.
Distribution within the Solar System
Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:
Asteroid belt
The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is now estimated to contain between 1.1 and 1.9 million asteroids larger than in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter.
Trojans
Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body.
The most significant population of trojans are the Jupiter trojans. Although fewer Jupiter trojans have been discovered (), it is thought that they are as numerous as the asteroids in the asteroid belt. Trojans have been found in the orbits of other planets, including Venus, Earth, Mars, Uranus, and Neptune.
Near-Earth asteroids
Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. , 14,464 near-Earth asteroids are known and approximately 900–1,000 have a diameter of over one kilometer.
Characteristics
Size distribution
Asteroids vary greatly in size, from almost for the largest down to rocks just 1 meter across. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies.
The dwarf planet Ceres is by far the largest asteroid, with a diameter of . The next largest are 4 Vesta and 2 Pallas, both with diameters of just over . Vesta is the only main-belt asteroid that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis.
The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated to be in the range of , about 4% of the mass of the Moon. Of this, Ceres comprises , about a third of the total. Adding in the next three most massive objects, Vesta (9%), Pallas (7%), and Hygiea (3%), brings this figure up to half, whereas the three most-massive asteroids after that, 511 Davida (1.2%), 704 Interamnia (1.0%), and 52 Europa (0.9%), constitute only another 3%. The number of asteroids increases rapidly as their individual masses decrease.
The number of asteroids decreases markedly with size. Although this generally follows a power law, there are 'bumps' at and , where more asteroids than expected from a logarithmic distribution are found.
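Expressed as a formula (a generic illustration of such a distribution rather than a fitted result), a power-law size distribution takes the form

N(>D) \propto D^{-b},

where N(>D) is the number of asteroids larger than diameter D and the exponent b varies between size ranges; the 'bumps' are deviations from this smooth trend.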
Largest asteroids
Although their location in the asteroid belt excludes them from planet status, the three largest objects, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets, and are atypical compared to the majority of irregularly shaped asteroids. The fourth-largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. Between them, the four largest asteroids constitute half the mass of the asteroid belt.
Ceres is the only asteroid whose shape appears to be plastic under its own gravity, and hence the only one that is a likely dwarf planet. It has a much brighter absolute magnitude than most other asteroids, of around 3.32, and may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle and a core. No meteorites from Ceres have been found on Earth.
Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly of basaltic rock with minerals such as olivine. Aside from the large crater at its southern pole, Rheasilvia, Vesta also has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth.
Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at high angles to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids.
Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family and recoalesced after losing a bit less than 2% of its mass. Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018, and announced in late 2019, revealed that Hygiea has a nearly spherical shape, which is consistent both with it being in hydrostatic equilibrium (and thus a dwarf planet), or formerly being in hydrostatic equilibrium, or with being disrupted and recoalescing.
Rotation
Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. Very few asteroids with a diameter larger than 100 meters have a rotation period smaller than 2.2 hours. For asteroids rotating faster than approximately this rate, the inertial force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids.
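As an order-of-magnitude check (a simplified sketch that assumes a strengthless, spherical body of uniform density ρ, rather than the properties of any particular asteroid), the critical period at which the equatorial centrifugal acceleration equals the surface gravity is

```latex
\omega_{\mathrm{crit}}^{2} R \;=\; \frac{G M}{R^{2}} \;=\; \frac{4\pi}{3} G \rho R
\qquad\Longrightarrow\qquad
P_{\mathrm{crit}} \;=\; \frac{2\pi}{\omega_{\mathrm{crit}}} \;=\; \sqrt{\frac{3\pi}{G \rho}}
\;\approx\; 2.2\ \mathrm{h} \quad \text{for } \rho \approx 2.2\ \mathrm{g\,cm^{-3}} .
```

The result does not depend on the body's radius, which is consistent with the observed cutoff appearing at roughly the same period over a wide range of asteroid sizes.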
Composition
The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle, whereas Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. 10 Hygiea, however, which appears to have a uniformly primitive composition of carbonaceous chondrite, is thought to be the largest undifferentiated asteroid, though it may be a differentiated asteroid that was globally disrupted by an impact and then reassembled. Other asteroids appear to be the remnant cores or mantles of proto-planets, high in rock and metal. Most small asteroids are thought to be piles of rubble held together loosely by gravity, though the largest are probably solid. Some asteroids have moons or are co-orbiting binaries: rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or, possibly, a planet.
In the main asteroid belt, there appear to be two primary populations of asteroid: a dark, volatile-rich population, consisting of the C-type and P-type asteroids, with albedos less than 0.10 and densities under , and a dense, volatile-poor population, consisting of the S-type and M-type asteroids, with albedos over 0.15 and densities greater than 2.7. Within these populations, larger asteroids are denser, presumably due to compression. There appears to be minimal macro-porosity (interstitial vacuum) in the score of asteroids with masses greater than .
Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth (also see panspermia). In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space.
Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km), and 87 Sylvia (384×262×232 km). Only half a dozen asteroids are larger than 87 Sylvia, though none of them have moons. The fact that such large asteroids as Sylvia may be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: Computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly.
On 7 October 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer is sublimating, it may be getting replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. The presence of ice on 24 Themis supports this theory.
In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids."
In May 2016, significant asteroid data arising from the Wide-field Infrared Survey Explorer and NEOWISE missions have been questioned. Although the early original criticism had not undergone peer review, a more recent peer-reviewed study was subsequently published.
In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some fundamentally essential bio-ingredients important to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly, as well, the notion of panspermia.
Acfer 049, a meteorite discovered in Algeria in 1990, was shown in 2019 to have ice fossils inside it – the first direct evidence of water ice in the composition of asteroids.
Findings have shown that solar winds can react with the oxygen in the upper layer of the asteroids and create water. It has been estimated that every cubic metre of irradiated rock could contain up to 20 litres.
Surface features
Most asteroids outside the "big four" (Ceres, Pallas, Vesta, and Hygiea) are likely to be broadly similar in appearance, if irregular in shape. 50 km (31 mi) 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius, and Earth-based observations of 300 km (186 mi) 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida that have been observed up close also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid. Ceres seems quite different in the glimpses Hubble has provided, with surface features that are unlikely to be due to simple craters and impact basins, but details will be expanded with the Dawn spacecraft, which entered Ceres orbit on 6 March 2015.
Color
Asteroids become darker and redder with age due to space weathering. However, evidence suggests that most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurement for determining the age of asteroids.
Classification
Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.
Orbital classification
Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor.
About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet Haumea.
Quasi-satellites and horseshoe objects
Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or some other planet. Examples are 3753 Cruithne and . The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus.
Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites.
Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with outer planets as well.
Spectral classification
In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Chapman, Morrison, and Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied.
The two most widely used taxonomies are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen, and was based on data collected from an eight-color asteroid survey performed in the 1980s. This resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists of mostly metallic asteroids, such as the M-type. There are also several smaller classes.
The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.
Problems
Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of the same (or similar) materials.
Naming
A newly discovered asteroid is given a provisional designation consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name. The formal naming convention uses parentheses around the number – e.g. (433) Eros – but dropping the parentheses is quite common. Informally, it is common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. In addition, names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union.
Symbols
The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1855 there were two dozen asteroid symbols, which often occurred in multiple variants.
In 1851, after the fifteenth asteroid (Eunomia) had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid (although he assigned ① to the fifth, Astraea, while continuing to designate the first four only with their existing iconic symbols). The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years (see chart above). 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides. That year Astraea's number was increased to ⑤, but the first four asteroids, Ceres to Vesta, were not listed by their numbers until the 1867 edition. The circle was soon abbreviated to a pair of parentheses, which were easier to typeset and sometimes omitted altogether over the next few decades, leading to the modern convention.
Exploration
Until the age of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes and their shapes and terrain remained a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can resolve a small amount of detail on the surfaces of the largest asteroids, but even these mostly remain little more than fuzzy blobs. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (their variation in brightness as they rotate) and their spectral properties, and asteroid sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. In terms of delta-v and propellant requirements, NEOs are more easily accessible than the Moon.
The first close-up photographs of asteroid-like objects were taken in 1971, when the Mariner 9 probe imaged Phobos and Deimos, the two small moons of Mars, which are probably captured asteroids. These images revealed the irregular, potato-like shapes of most asteroids, as did later images from the Voyager probes of the small moons of the gas giants.
The first true asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the Galileo probe en route to Jupiter.
The first dedicated asteroid probe was NEAR Shoemaker, which photographed 253 Mathilde in 1997, before entering into orbit around 433 Eros, finally landing on its surface in 2001.
Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by Deep Space 1 in 1999), and 5535 Annefrank (by Stardust in 2002).
From September to November 2005, the Japanese Hayabusa probe studied 25143 Itokawa in detail and was plagued with difficulties, but returned samples of its surface to Earth on 13 June 2010.
The European Rosetta probe (launched in 2004) flew by 2867 Šteins in 2008 and 21 Lutetia, the third-largest asteroid visited to date, in 2010.
In September 2007, NASA launched the Dawn spacecraft, which orbited 4 Vesta from July 2011 to September 2012, and has been orbiting the dwarf planet 1 Ceres since 2015. 4 Vesta is the second-largest asteroid visited to date.
On 13 December 2012, China's lunar orbiter Chang'e 2 flew within of the asteroid 4179 Toutatis on an extended mission.
The Japan Aerospace Exploration Agency (JAXA) launched the Hayabusa2 probe in December 2014; it returned samples from 162173 Ryugu to Earth in December 2020.
In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare.
In September 2016, NASA launched the OSIRIS-REx sample return mission to asteroid 101955 Bennu, which it reached in December 2018. On 10 May 2021, the probe departed the asteroid with a sample from its surface, and is expected to return to Earth on 24 September 2023.
Planned and future missions
In early 2013, NASA announced the planning stages of a mission to capture a near-Earth asteroid and move it into lunar orbit where it could possibly be visited by astronauts and later impacted into the Moon. On 19 June 2014, NASA reported that asteroid 2011 MD was a prime candidate for capture by a robotic mission, perhaps in the early 2020s.
It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth (asteroid mining), or materials for constructing space habitats (see Colonization of the asteroids). Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction.
In the U.S. Discovery Program, the Psyche spacecraft proposal to visit 16 Psyche and the Lucy spacecraft proposal to visit the Jupiter trojans both made it to the semi-finalist stage of mission selection.
In January 2017, the Lucy and Psyche missions were both selected as NASA's Discovery Program missions 13 and 14, respectively.
In November 2021, NASA launched its Double Asteroid Redirection Test (DART), a mission to test technology for defending Earth against potential asteroids or comets.
Location of Ceres (within asteroid belt) compared to other bodies of the Solar System
Fiction
Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
See also
ʻOumuamua
Active asteroid
Amor asteroid
Apollo asteroid
Asteroid Day
Asteroid impact avoidance
Asteroids in astrology
Aten asteroid
Atira asteroid
BOOTES (Burst Observer and Optical Transient Exploring System)
Category:Asteroid groups and families
Category:Asteroids
Category:Binary asteroids
Centaur (minor planet)
Chang'e 2 lunar orbiter
Constellation program
Dawn (spacecraft)
Dwarf planet
Impact event
List of asteroid close approaches to Earth
List of exceptional asteroids
List of impact craters on Earth
List of minor planets
List of minor planets named after people
List of minor planets named after places
List of possible impact structures on Earth
Lost minor planet
Marco Polo (spacecraft)
Meanings of minor planet names
Mesoplanet
Meteoroid
Minor planet
Near-Earth object
NEOShield
NEOSSat (Near Earth Object Surveillance Satellite) Canada's new satellite
Pioneer 10
Rosetta (spacecraft)
Explanatory notes
References
Further reading
External links
Minor planets |
21707070 | https://en.wikipedia.org/wiki/S%C3%A3o%20Paulo%20State%20Technological%20Colleges | São Paulo State Technological Colleges | The São Paulo State Technological Colleges (FATECs, Portuguese: Faculdades de Tecnologia do Estado de São Paulo) are public institutions of higher education maintained by the State Center of Technological Education (CEETEPS). The FATECs are important Brazilian institutions of higher education and pioneers in the training of technologists. They are located in several cities of São Paulo state, with four campuses in the capital (Bom Retiro, Campos Elíseos, East Zone and South Zone) and several other units in the metropolitan region of São Paulo, the countryside and the seashore.
The 46 FATECs offer higher-education programs in virtually all areas of knowledge. Most units offer higher-education technology courses focused on the training of technologists. The units of São Caetano do Sul, Ourinhos, Carapicuíba and Americana, however, also offer bachelor's and licentiate degrees in Systems Analysis and Information Technology, beginning a tradition at the FATECs of also training bachelors and licentiates.
More than 28,000 students are currently enrolled in the FATECs. More than R$1 billion (approximately US$420 million) is invested annually to maintain these places.
History
The first milestone in the trajectory of the FATECs was the founding, in 1969, of the Center of Technological Education of São Paulo State by the then Governor of the State, Abreu Sodré, with the objective of training technologists to meet the growing demand for university-level professionals. CEETEPS was installed in Coronel Fernando Prestes plaza, in the center of São Paulo, on the old campus of the USP Polytechnic School.
The courses run by FATEC Bom Retiro (or FATEC-SP) are the oldest, having been taught since 1969. That year, the Sorocaba Technological College was founded in the city of Sorocaba with the same goals. In 1970, higher-education technological courses in Construction were created, in the forms Buildings, Hydraulic Works, and Earthmoving and Paving. Higher-education technological courses in Mechanics, in the forms Workshops and Designer, were created subsequently.
In 1973, by state law, the Center came to be called Centro Estadual de Educação Tecnológica Paula Souza (State Center of Technological Education), and its classes came to form the São Paulo State Technological College. In this way, CEETEPS became the maintainer of two FATECs: one in São Paulo and one in Sorocaba. In 1974, the higher-education technological career of Data Processing was created, which is still a reference in the area of Information Technology today.
In 1976, the state government merged, by law, all of its isolated higher-education establishments into UNESP (São Paulo State University). As CEETEPS was not itself an educational institution, but the maintainer of two academic units, the law that created UNESP established that CEETEPS would become part of the new university as a special institution, linked to and associated with it.
Admission
As in other Brazilian government academic institutions, teaching at FATEC is funded by taxation and not paid for directly by the students. Admission is through a public entrance examination (vestibular), open to anyone who has completed or is about to finish high school. CEETEPS itself is responsible for the examination, which is held every six months. The FATEC vestibular, unlike that of most Brazilian public institutions of higher education, consists of only one phase, comprising multiple-choice questions and one essay.
In the 2008 vestibular, 38,220 applicants competed for the 6,175 places in dispute that year. The highest competition recorded was for the career of Systems Analysis and Development (night course) at FATEC São Paulo (Bom Retiro), with 22.8 applicants per vacancy. In 2009, the average demand across the three campuses of the capital was 7.54 candidates per seat. The data confirm the FATEC tradition of holding one of the most competitive selection processes in Brazil.
Structure and Teachers
The São Paulo State Technological Colleges are divided into 46 units, present in 44 cities of São Paulo state. Unlike other Brazilian state and federal educational institutions, the FATECs do not have a rectory; each unit is coordinated by a director, subordinate to the CEETEPS superintendent and bound to the UNESP rectory.
Holding the title of master or doctor from a program recognized by Brazilian law is a mandatory requirement for admission as a teacher at a FATEC. The wages of FATEC teachers are among the best in the Brazilian public school system, with income of R$4,932.00 (US$2,103.20) for an Assistant Professor, R$5,523.84 (US$2,355.58) for an Associate Professor and R$6,954.12 (US$2,965.51) for a Full Professor. The maximum teacher salary is R$7,743.24 (US$3,302.02).
Graduation
The FATECs currently offer 59 undergraduate courses, spanning all three areas of knowledge.
The average length of a FATEC course is 2,800 hours over three years. Considering that the Brazilian Ministry of Education (MEC) sets a minimum of 2,400 hours for various bachelor's careers, the degrees offered at the FATECs, even though shorter in duration than traditional degree courses, train professionals with course loads equal to or even greater than several bachelor's programs. This is possible because students remain in school for many hours each day, more than 5 hours on some campuses, and because Saturday is considered a normal day of study at the FATECs.
As a strong expansion of existing units and the creation of new faculties of technology are under way, some careers are still being analyzed by the State Council of Education. In these cases, the start of classes is conditional on final approval by the council.
Points of Excellence
The FATECs are recognized for the quality of their professional training in the areas of Information Technology, Logistics and Transport, and Precision Mechanics (Mechatronics). The employability rate of former FATEC students is high by Brazilian standards, with more than 93% in the labor market and an average starting pay of R$2,500.00 (US$1,066.10). Currently, many FATEC students compete on equal terms with students of the best universities in the country, such as the Polytechnic School (Poli-USP) and the Technological Institute of Aeronautics (ITA).
The Student's Guide, an annual publication of Editora Abril (a popular Brazilian publisher) that evaluates undergraduate courses on the market, rated the Bachelorship in Systems Analysis and Information Technology at FATEC Ourinhos with 3 stars (good) in its 2008 edition. The evaluation, considered very satisfactory for a recently created course, reflects the level of quality associated with the São Paulo State Technological Colleges.
See also
São Paulo Federal Institute of Education, Science and Technology
References
External links
Centro Paula Souza
Educational institutions established in 1969
Education in São Paulo
Universities and colleges in São Paulo (state)
1969 establishments in Brazil
Technical universities and colleges in Brazil
State universities and colleges in Brazil |
55647336 | https://en.wikipedia.org/wiki/Micro-80 | Micro-80 | The Micro-80 (Микро-80) was the first do-it-yourself home computer in the Soviet Union.
Overview
Schematics and information were published in the Soviet DIY electronics magazine Radio in 1983. The design was complex, based on the KR580VM80A (a clone of the Intel 8080) and containing about 200 ICs. The system gained little popularity, but it set a precedent in drawing hobbyists' attention to DIY computers, and other DIY computer designs were later published by Radio and other magazines.
History of creation
The creation of the Micro-80 prototype began in 1978, when a package from the Kiev NPO Kristall arrived at MIEM by mistake. The package contained microcircuits. MIEM specialists soon determined that these were domestic analogues of the i8080 microprocessor and its peripheral controllers, and decided to create their own PC.
In 1979, the first sample of the microcomputer was up and running. As in the first Western microcomputers, a terminal connected via a serial interface, in this case a Videoton-340, served as the display device and keyboard. There was also an FS-1500 punched-tape reader. The 4 KB of RAM was built from K565RU2 chips with a 1K×1 organization (later the RAM was increased by another 8 KB). Initially there was no ROM at all, and on a cold start (as on the Altair 8800 of 1975, one of the first American microcomputers) the program for loading a block from punched tape had to be entered manually with toggle switches. When i2708 chips (1K×8 UV-EPROM) became available some time later, they were used to store the ROM BIOS and the monitor, eliminating the need to load them from punched tape each time.
Popov developed a text video adapter that worked with an ordinary household TV, as well as a keyboard read through the KR580VV55 programmable peripheral adapter (PPA), which together eliminated the bulky industrial terminal. After a data-storage system based on a cassette recorder was developed, a prototype of a full-fledged home computer was obtained in 1980. After being brought into presentable form, it was shown to the Deputy Minister of the Radio Industry, N. V. Gorshkov, but the proposal to put the design into production found no support with him.
Resonance for journal publication
The idea of building a computer on their own interested many radio amateurs. Letters began to arrive at the editors of Radio magazine asking that the design of the Micro-80 be simplified and that printed circuit boards be developed for it to make assembly easier. As a result, in 1986 the same authors published the much simpler Radio-86RK computer, containing only 29 microcircuits.
References
Soviet computer systems |
22073769 | https://en.wikipedia.org/wiki/Packet%20aggregation | Packet aggregation | In a packet-based communications network, packet aggregation is the process of joining multiple packets together into a single transmission unit, in order to reduce the overhead associated with each transmission.
Packet aggregation is useful in situations where each transmission unit may have significant overhead (preambles, headers, cyclic redundancy check, etc.) or where the expected packet size is small compared to the maximum amount of information that can be transmitted.
In a communication system based on a layered OSI model, packet aggregation may be responsible for joining multiple MSDUs into a single MPDU that can be delivered to the physical layer as a single unit for transmission.
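As a rough illustration (a minimal sketch with an invented two-byte length-prefix framing, not any particular standard's frame format), the following shows how several small packets can be joined into one transmission unit and split apart again:

```python
import struct

def aggregate(payloads):
    """Join small packets into a single transmission unit."""
    unit = bytearray()
    for p in payloads:
        unit += struct.pack("!H", len(p))  # 2-byte big-endian length prefix
        unit += p
    return bytes(unit)

def deaggregate(unit):
    """Recover the original packets from an aggregated unit."""
    packets, offset = [], 0
    while offset < len(unit):
        (length,) = struct.unpack_from("!H", unit, offset)
        offset += 2
        packets.append(unit[offset:offset + length])
        offset += length
    return packets

small_packets = [b"sensor:21.5C", b"sensor:21.7C", b"sensor:21.6C"]
frame = aggregate(small_packets)            # one transmission unit, one set of per-frame overhead
assert deaggregate(frame) == small_packets  # the receiver recovers the original packets
```

The saving comes from paying the per-transmission overhead (preamble, headers, checksums) once for the whole frame rather than once per small packet.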
The ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gigabit/s) Local area network using existing home wiring (power lines, phone lines and coaxial cables), is an example of a protocol that employs packet aggregation to increase efficiency.
See also
Packet segmentation
Frame aggregation
Packets (information technology) |
8563309 | https://en.wikipedia.org/wiki/Anne%20Isabella%20Thackeray%20Ritchie | Anne Isabella Thackeray Ritchie | Anne Isabella, Lady Ritchie (9 June 1837 – 26 February 1919), eldest daughter of William Makepeace Thackeray, was an English writer, whose several novels were appreciated in their time and made her a central figure on the late Victorian literary scene. She is noted especially as the custodian of her father's literary legacy, and for short fiction that places fairy tale narratives in a Victorian milieu. Her 1885 novel Mrs. Dymond introduced into English the proverb, "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for life."
Life
Anne Isabella Thackeray was born in London, the eldest daughter of William Makepeace Thackeray and his wife Isabella Gethin Shawe (1816–1893). She had two younger sisters: Jane, born in 1839, who died at eight months, and Harriet Marian "Minny" (1840–1875), who married Leslie Stephen in 1869. Anne, whose father called her Anny, spent her childhood in France and England, where she and her sister were accompanied by the future poet Anne Evans.
In 1877, she married her cousin, Richmond Ritchie, who was 17 years her junior. They had two children, Hester and Billy. She was a step-aunt of Virginia Woolf, who penned an obituary of her in the Times Literary Supplement. She is also thought to have inspired the character of Mrs Hilbery in Woolf's Night and Day.
Literary career
In 1863, Anne Isabella published The Story of Elizabeth with immediate success. Several other works followed:
The Village on the Cliff (1867)
To Esther, and Other Sketches (1869)
Old Kensington (1873)
Toilers and Spinsters, and Other Essays (1874)
Bluebeard's Keys, and Other Stories (1874)
Five Old Friends (1875)
Madame de Sévigné (1881), a biography with literary excerpts
In other writings, she made unusual use of old folk stories to depict modern situations and occurrences, such as Sleeping Beauty, Cinderella and Little Red Riding Hood.
She also wrote the five novels:
Miss Angel (1875)
From An Island (1877), a semi-autobiographical novella
Miss Williamson's Divagations (1881)
A Book of Sibyls: Mrs. Barbauld, Mrs. Opie, Miss Edgeworth, Miss Austen (1883)
Mrs. Dymond (1885; reprinted in 1890)
References
Bibliography
"Introduction" by Anne Thackeray Ritchie in Our Village, fully and openly available online in the Baldwin Library of Historical Children's Literature Digital Collection
Aplin, John. The Inheritance of Genius – A Thackeray Family Biography, 1798–1875, Lutterworth Press (2010).
Aplin, John. Memory and Legacy – A Thackeray Family Biography, 1876–1919, Lutterworth Press (2011).
Aplin, John (editor). The Correspondence and Journals of the Thackeray Family, 5 vols., Pickering & Chatto (2011).
External links
Genealogy of Anne Thackeray Ritchie
Anne Isabella Thackeray at Victorian Web
19th-century English novelists
English short story writers
1837 births
1919 deaths
English children's writers
Victorian women writers
British women short story writers
English women novelists
19th-century English women writers
Writers from London
19th-century British short story writers
Victorian novelists |
27697009 | https://en.wikipedia.org/wiki/API | API | An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software. A document or standard that describes how to build or use such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.
In contrast to a user interface, which connects a computer to a person, an application programming interface connects computers or pieces of software to each other. It is not intended to be used directly by a person (the end user) other than a computer programmer who is incorporating it into software. An API is often made up of different parts which act as tools or services that are available to the programmer. A program or a programmer that uses one of these parts is said to call that portion of the API. The calls that make up the API are also known as subroutines, methods, requests, or endpoints. An API specification defines these calls, meaning that it explains how to use or implement them.
One purpose of APIs is to hide the internal details of how a system works, exposing only those parts a programmer will find useful and keeping them consistent even if the internal details later change. An API may be custom-built for a particular pair of systems, or it may be a shared standard allowing interoperability among many systems.
The term API is often used to refer to web APIs, which allow communication between computers that are joined by the internet. There are also APIs for programming languages, software libraries, computer operating systems, and computer hardware. APIs originated in the 1940s, though the term did not emerge until the 1960s and 1970s.
Purpose
In building applications, an API (application programming interface) simplifies programming by abstracting the underlying implementation and only exposing objects or actions the developer needs. While a graphical interface for an email client might provide a user with a button that performs all the steps for fetching and highlighting new emails, an API for file input/output might give the developer a function that copies a file from one location to another without requiring that the developer understand the file system operations occurring behind the scenes.
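For instance (a minimal sketch using Python's standard library, with placeholder file contents), a single high-level call copies a file while the open/read/write/close details stay hidden behind the API:

```python
import os
import shutil
import tempfile

# Create a small source file to copy (placeholder content).
src = os.path.join(tempfile.gettempdir(), "api_demo_source.txt")
with open(src, "w") as f:
    f.write("example data")

dst = src + ".copy"
shutil.copy(src, dst)   # one API call hides the underlying file-system operations

assert os.path.exists(dst)
```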
History of the term
The term API initially described an interface only for end-user-facing programs, known as application programs. This origin is still reflected in the name "application programming interface." Today, the term is broader, including also utility software and even hardware interfaces.
1940s and 1950s
The idea of the API is much older than the term itself. British computer scientists Maurice Wilkes and David Wheeler worked on a modular software library in the 1940s for EDSAC, an early computer. The subroutines in this library were stored on punched paper tape organized in a filing cabinet. This cabinet also contained what Wilkes and Wheeler called a "library catalog" of notes about each subroutine and how to incorporate it into a program. Today, such a catalog would be called an API (or an API specification or API documentation) because it instructs a programmer on how to use (or "call") each subroutine that the programmer needs.
Wilkes and Wheeler's 1951 book The Preparation of Programs for an Electronic Digital Computer contains the first published API specification. Joshua Bloch considers that Wilkes and Wheeler "latently invented" the API, because it is more of a concept that is discovered than invented.
1960s and 1970s
The term "application program interface" (without an -ing suffix) is first recorded in a paper called Data structures and techniques for remote computer graphics presented at an AFIPS conference in 1968. The authors of this paper use the term to describe the interaction of an application—a graphics program in this case—with the rest of the computer system. A consistent application interface (consisting of Fortran subroutine calls) was intended to free the programmer from dealing with idiosyncrasies of the graphics display device, and to provide hardware independence if the computer or the display were replaced.
The term was introduced to the field of databases by C. J. Date in a 1974 paper called The Relational and Network Approaches: Comparison of the Application Programming Interface. An API became a part of the ANSI/SPARC framework for database management systems. This framework treated the application programming interface separately from other interfaces, such as the query interface. Database professionals in the 1970s observed these different interfaces could be combined; a sufficiently rich application interface could support the other interfaces as well.
This observation led to APIs that supported all types of programming, not just application programming.
1990s
By 1990, the API was defined simply as "a set of services available to a programmer for performing certain tasks" by technologist Carl Malamud.
The idea of the API was expanded again with the dawn of remote procedure calls and web APIs. As computer networks became common in the 1970s and 1980s, programmers wanted to call libraries located not only on their local computers, but on computers located elsewhere. These remote procedure calls were well supported by the Java language in particular. In the 1990s, with the spread of the internet, standards like CORBA, COM, and DCOM competed to become the most common way to expose API services.
2000s
Roy Fielding's dissertation Architectural Styles and the Design of Network-based Software Architectures at UC Irvine in 2000 outlined Representational state transfer (REST) and described the idea of a "network-based Application Programming Interface" that Fielding contrasted with traditional "library-based" APIs. XML and JSON web APIs saw widespread commercial adoption beginning in 2000 and continuing as of 2022. The web API is now the most common meaning of the term API.
The Semantic Web proposed by Tim Berners-Lee in 2001 included "semantic APIs" that recast the API as an open, distributed data interface rather than a software behavior interface. Proprietary interfaces and agents became more widespread than open ones, but the idea of the API as a data interface took hold. Because web APIs are widely used to exchange data of all kinds online, API has become a broad term describing much of the communication on the internet. When used in this way, the term API has overlap in meaning with the term communication protocol.
Usage
Libraries and frameworks
The interface to a software library is one type of API. The API describes and prescribes the "expected behavior" (a specification) while the library is an "actual implementation" of this set of rules.
A single API can have multiple implementations (or none, being abstract) in the form of different libraries that share the same programming interface.
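A small sketch of this idea (with hypothetical names, in Python): the abstract class plays the role of the API specification, and two separate classes are interchangeable implementations of it.

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """The API: a specification of expected behavior, with no implementation."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class InMemoryStore(KeyValueStore):
    """One implementation of the API, backed by a dict."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class UpperCaseStore(KeyValueStore):
    """Another implementation of the same API, with different behavior inside."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value.upper()
    def get(self, key):
        return self._data[key]

def client(store: KeyValueStore) -> str:
    """Client code written against the API, not against any implementation."""
    store.put("greeting", "hello")
    return store.get("greeting")

assert client(InMemoryStore()) == "hello"
assert client(UpperCaseStore()) == "HELLO"
```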
The separation of the API from its implementation can allow programs written in one language to use a library written in another. For example, because Scala and Java compile to compatible bytecode, Scala developers can take advantage of any Java API.
API use can vary depending on the type of programming language involved.
An API for a procedural language such as Lua could consist primarily of basic routines to execute code, manipulate data or handle errors, while an API for an object-oriented language, such as Java, would provide a specification of classes and their class methods. Hyrum's law states that "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody." Meanwhile, several studies show that most applications that use an API tend to use a small part of the API. API use actually varies depending on the number of users, as well as on the popularity of the API.
Language bindings are also APIs. By mapping the features and capabilities of one language to an interface implemented in another language, a language binding allows a library or service written in one language to be used when developing in another language.
Tools such as SWIG and F2PY, a Fortran-to-Python interface generator, facilitate the creation of such interfaces.
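A minimal sketch of a binding using Python's ctypes module (assuming a Unix-like system where the C standard library can be located by name; on other platforms the library name differs):

```python
import ctypes
import ctypes.util

# Locate and load the C standard library, then describe strlen's signature.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# The C function is now callable as if it were an ordinary Python function.
assert libc.strlen(b"binding") == 7
```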
An API can also be related to a software framework: a framework can be based on several libraries implementing several APIs, but unlike the normal use of an API, the access to the behavior built into the framework is mediated by extending its content with new classes plugged into the framework itself.
Moreover, the overall program flow of control can be out of the control of the caller and in the framework's hands by inversion of control or a similar mechanism.
Operating systems
An API can specify the interface between an application and the operating system. POSIX, for example, specifies a set of common APIs that aim to enable an application written for a POSIX conformant operating system to be compiled for another POSIX conformant operating system.
Linux and Berkeley Software Distribution are examples of operating systems that implement the POSIX APIs.
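As an illustration (a small sketch; Python's os module wraps the corresponding system calls on conformant systems), the same open/write/read/close sequence works unchanged across POSIX platforms:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "posix_api_demo.txt")

# open(2), write(2), close(2)
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, POSIX\n")
os.close(fd)

# open(2), read(2), close(2)
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)

assert data == b"hello, POSIX\n"
```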
Microsoft has shown a strong commitment to a backward-compatible API, particularly within its Windows API (Win32) library, so older applications may run on newer versions of Windows using an executable-specific setting called "Compatibility Mode".
An API differs from an application binary interface (ABI) in that an API is source code based while an ABI is binary based. For instance, POSIX provides APIs while the Linux Standard Base provides an ABI.
Remote APIs
Remote APIs allow developers to manipulate remote resources through protocols, specific standards for communication that allow different technologies to work together, regardless of language or platform.
For example, the Java Database Connectivity API allows developers to query many different types of databases with the same set of functions, while the Java remote method invocation API uses the Java Remote Method Protocol to allow invocation of functions that operate remotely, but appear local to the developer.
Therefore, remote APIs are useful in maintaining the object abstraction in object-oriented programming; a method call, executed locally on a proxy object, invokes the corresponding method on the remote object, using the remoting protocol, and acquires the result to be used locally as a return value.
A modification of the proxy object will also result in a corresponding modification of the remote object.
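A short sketch of this pattern using Python's standard XML-RPC client (the server URL and the add method are hypothetical, so the call is shown commented out):

```python
import xmlrpc.client

# The proxy object stands in for the remote service; attribute access builds
# remote method calls that are marshalled over the network by the protocol.
proxy = xmlrpc.client.ServerProxy("http://calc.example.com:8000/")

# result = proxy.add(2, 3)   # would invoke add(2, 3) on the remote server
```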
Web APIs
Web APIs are the defined interfaces through which interactions happen between an enterprise and applications that use its assets; a web API can also serve as a service-level agreement (SLA) specifying the functional provider and exposing the service path or URL for its API users. An API approach is an architectural approach that revolves around providing a program interface to a set of services to different applications serving different types of consumers.
When used in the context of web development, an API is typically defined as a set of specifications, such as Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, usually in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. An example might be a shipping company API that can be added to an eCommerce-focused website to facilitate ordering shipping services and automatically include current shipping rates, without the site developer having to enter the shipper's rate table into a web database. While "web API" historically has been virtually synonymous with web service, the recent trend (so-called Web 2.0) has been moving away from Simple Object Access Protocol (SOAP) based web services and service-oriented architecture (SOA) towards more direct representational state transfer (REST) style web resources and resource-oriented architecture (ROA). Part of this trend is related to the Semantic Web movement toward Resource Description Framework (RDF), a concept to promote web-based ontology engineering technologies. Web APIs allow the combination of multiple APIs into new applications known as mashups.
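A minimal sketch of such an interaction in Python (the endpoint URL and response fields are hypothetical, standing in for a shipping company's API):

```python
import json
import urllib.request

# HTTP GET request to a (hypothetical) REST endpoint.
url = "https://api.example-shipper.com/v1/rates?weight_kg=2&destination=GB"
with urllib.request.urlopen(url) as response:
    rates = json.loads(response.read().decode("utf-8"))  # parse the JSON body

# The response is now ordinary data the calling site's code can display.
print(rates)
```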
In the social media space, web APIs have allowed web communities to facilitate sharing content and data between communities and applications. In this way, content that is created in one place dynamically can be posted and updated to multiple locations on the web. For example, Twitter's REST API allows developers to access core Twitter data and the Search API provides methods for developers to interact with Twitter Search and trends data.
Design
The design of an API has significant impact on its usage. The principle of information hiding describes the role of programming interfaces as enabling modular programming by hiding the implementation details of the modules so that users of modules need not understand the complexities inside the modules. Thus, the design of an API attempts to provide only the tools a user would expect. The design of programming interfaces represents an important part of software architecture, the organization of a complex piece of software.
Release policies
APIs are one of the more common ways technology companies integrate. Those that provide and use APIs are considered as being members of a business ecosystem.
The main policies for releasing an API are:
Private: The API is for internal company use only.
Partner: Only specific business partners can use the API. For example, vehicle for hire companies such as Uber and Lyft allow approved third-party developers to directly order rides from within their apps. This allows the companies to exercise quality control by curating which apps have access to the API, and provides them with an additional revenue stream.
Public: The API is available for use by the public. For example, Microsoft makes the Windows API public, and Apple releases its API Cocoa, so that software can be written for their platforms. Not all public APIs are generally accessible by everybody. For example, Internet service providers like Cloudflare or Voxility use RESTful APIs to allow customers and resellers access to their infrastructure information, DDoS stats, network performance or dashboard controls. Access to such APIs is granted either by "API tokens" or by customer status validations, as illustrated in the sketch below.
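A minimal sketch of such token-based access (the URL, path and token are hypothetical placeholders):

```python
import urllib.request

request = urllib.request.Request(
    "https://api.example-provider.com/v1/dashboard/ddos-stats",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},  # the "API token"
)
# with urllib.request.urlopen(request) as response:   # would require a real token
#     print(response.read())
```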
Public API implications
An important factor when an API becomes public is its "interface stability". Changes to the API—for example adding new parameters to a function call—could break compatibility with the clients that depend on that API.
When parts of a publicly presented API are subject to change and thus not stable, such parts of a particular API should be documented explicitly as "unstable". For example, in the Google Guava library, the parts that are considered unstable, and that might change soon, are marked with the Java annotation @Beta.
A public API can sometimes declare parts of itself as deprecated or rescinded. This usually means that part of the API should be considered a candidate for being removed, or modified in a backward incompatible way. Therefore, these changes allow developers to transition away from parts of the API that will be removed or not supported in the future.
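One common way to signal this in code (a sketch with hypothetical function names; Python's warnings module is used here, analogous in spirit to Java's @Deprecated annotation):

```python
import warnings

def new_endpoint(data):
    """The replacement API."""
    return {"payload": data}

def old_endpoint(data):
    """Deprecated API kept temporarily so callers can migrate."""
    warnings.warn(
        "old_endpoint() is deprecated; use new_endpoint() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_endpoint(data)
```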
Client code may contain innovative or opportunistic usages that were not intended by the API designers. In other words, for a library with a significant user base, when an element becomes part of the public API, it may be used in diverse ways.
On February 19, 2020, Akamai published their annual "State of the Internet" report, showcasing the growing trend of cybercriminals targeting public API platforms at financial services worldwide. From December 2017 through November 2019, Akamai witnessed 85.42 billion credential violation attacks. About 20%, or 16.55 billion, were against hostnames defined as API endpoints. Of these, 473.5 million have targeted financial services sector organizations.
Documentation
API documentation describes the services an API offers and how to use those services, aiming to cover everything a client would need to know for practical purposes.
Documentation is crucial for the development and maintenance of applications using the API.
API documentation is traditionally found in documentation files but can also be found in social media such as blogs, forums, and Q&A websites.
Traditional documentation files are often presented via a documentation system, such as Javadoc or Pydoc, that has a consistent appearance and structure.
However, the types of content included in the documentation differs from API to API.
In the interest of clarity, API documentation may include a description of classes and methods in the API as well as "typical usage scenarios, code snippets, design rationales, performance discussions, and contracts", but implementation details of the API services themselves are usually omitted.
Restrictions and limitations on how the API can be used are also covered by the documentation. For instance, documentation for an API function could note that its parameters cannot be null, or that the function itself is not thread safe. Because API documentation tends to be comprehensive, it is a challenge for writers to keep the documentation updated and for users to read it carefully, potentially yielding bugs.
API documentation can be enriched with metadata information like Java annotations. This metadata can be used by the compiler, tools, and by the run-time environment to implement custom behaviors or custom handling.
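A brief sketch (with a hypothetical function) of documentation kept alongside the code, of the kind rendered by tools such as Pydoc, combining a description, a typical usage, and a documented restriction:

```python
def reserve_seat(flight_id: str, seat: str) -> bool:
    """Reserve a seat on a flight.

    Typical usage:
        reserve_seat("AF123", "12A")

    Args:
        flight_id: Identifier of the flight; must not be empty.
        seat: Seat label such as "12A".

    Returns:
        True if the seat was reserved, False if it was already taken.

    Note:
        This function is not thread safe; callers must serialize access.
    """
    raise NotImplementedError  # implementation omitted in this sketch
```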
It is possible to generate API documentation in a data-driven manner. By observing many programs that use a given API, it is possible to infer the typical usages, as well the required contracts and directives. Then, templates can be used to generate natural language from the mined data.
Dispute over copyright protection for APIs
In 2010, Oracle Corporation sued Google for having distributed a new implementation of Java embedded in the Android operating system. Google had not acquired any permission to reproduce the Java API, although permission had been given to the similar OpenJDK project. Judge William Alsup ruled in the Oracle v. Google case that APIs cannot be copyrighted in the U.S., and that a victory for Oracle would have widely expanded copyright protection to a "functional set of symbols" and allowed the copyrighting of simple software commands.
Alsup's ruling was overturned in 2014 on appeal to the Court of Appeals for the Federal Circuit, though the question of whether such use of APIs constitutes fair use was left unresolved.
In 2016, following a two-week trial, a jury determined that Google's reimplementation of the Java API constituted fair use, but Oracle vowed to appeal the decision. Oracle won on its appeal, with the Court of Appeals for the Federal Circuit ruling that Google's use of the APIs did not qualify for fair use. In 2019, Google appealed to the Supreme Court of the United States over both the copyrightability and fair use rulings, and the Supreme Court granted review. Due to the COVID-19 pandemic, the oral hearings in the case were delayed until October 2020.
The case was decided by the Supreme Court in Google's favor.
Examples
ASPI for SCSI device interfacing
Cocoa and Carbon for the Macintosh
DirectX for Microsoft Windows
EHLLAPI
Java APIs
ODBC for Microsoft Windows
OpenAL cross-platform sound API
OpenCL cross-platform API for general-purpose computing for CPUs & GPUs
OpenGL cross-platform graphics API
OpenMP API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran on many architectures, including Unix and Microsoft Windows platforms.
Server Application Programming Interface (SAPI)
Simple DirectMedia Layer (SDL)
See also
API testing
API writer
Augmented web
Calling convention
Common Object Request Broker Architecture (CORBA)
Comparison of application virtual machines
Document Object Model (DOM)
Double-chance function
Foreign function interface
Front and back ends
Interface (computing)
Interface control document
List of 3D graphics APIs
Microservices
Name mangling
Open API
Open Service Interface Definitions
Parsing
Plugin
RAML (software)
Software development kit (SDK)
Web API
Web content vendor
XPCOM
References
Further reading
Argues that "APIs are far from neutral tools" and form a key part of contemporary programming, understood as a fundamental part of culture.
What is an API? – in the U.S. Supreme Court opinion, Google v. Oracle 2021, pp. 3–7 – "For each task, there is computer code; API (also known as Application Program Interface) is the method for calling that 'computer code' (instruction – like a recipe – rather than cooking instruction, this is machine instruction) to be carry out"
Maury, Innovation and Change – Cory Ondrejka \ February 28, 2014 \ " ...proposed a public API to let computers talk to each other". (Textise URL)
External links
Forrester : IT industry : API Case : Google v. Oracle – May 20, 2021 – content format: Audio with text – length 26:41
Technical communication |
35244938 | https://en.wikipedia.org/wiki/Valerie%20Aurora | Valerie Aurora | Valerie Anita Aurora is a software engineer and feminist activist. She was the co-founder of the Ada Initiative, a non-profit organization that sought to increase women's participation in the free culture movement, open source technology, and open source culture. Aurora is also known within the Linux community for advocating new developments in filesystems in Linux, including ChunkFS and the Union file system. Her birth name was Val Henson, but she changed it shortly before 2009, choosing her middle name after the computer scientist Anita Borg. In 2012, Aurora, and Ada Initiative co-founder Mary Gardiner, were named two of the most influential people in computer security by SC Magazine. In 2013, she won the O'Reilly Open Source Award.
Early life and education
The daughter of Carolyn Meinel, Aurora was raised in New Mexico and was home-schooled. She became involved in computer programming when she attended DEF CON in 1995. She studied computer science and mathematics at the New Mexico Institute of Mining and Technology.
Programming
She first became involved with file systems when she worked with ZFS in 2002 at Sun Microsystems. She later moved to IBM, where she worked in Theodore Ts'o's group on extensions to the ext2 and ext3 Linux file systems. While working at Intel, she implemented the ext2 dirty bit and relative atime. Along with Arjan van de Ven, she came up with the idea for ChunkFS, which simplifies file system checks by dividing the file system into independent pieces. She also co-organized the first Linux File Systems Workshop in order to figure out how to spread awareness of and raise funding for file system development. As of 2009, she worked for Red Hat as a file systems developer as well as a part-time science writer and Linux consultant.
Ada Initiative
Already an activist for women in open source, she joined Mary Gardiner and members of the Geek Feminism blog to develop anti-harassment policies for conferences after Noirin Shirley was sexually assaulted at ApacheCon 2010. Aurora quit her job as a Linux kernel developer at Red Hat and, with Gardiner, founded the Ada Initiative in February 2011. The organization was named after Ada Lovelace, who worked with Charles Babbage and is considered to be the world's first computer programmer. Two years later, Aurora founded Double Union, a hackerspace for women, with Amelia Greenhall and Liz Henry; she was banned from the space in 2018. The Ada Initiative was shut down in October 2015.
Writing
Maintaining a blog since 2007, Aurora has written extensively about coding and the experiences of women in open source. This has included descriptions of DEF CON and the harassment that took place there. In 2013, Aurora provided a comment to The Verge about the Electronic Frontier Foundation's involvement in the legal defense of Andrew Auernheimer, who was in prison for hacking and had previously harassed Kathy Sierra. Aurora said "This is another case where they're saying, 'The cases we care about are the ones white men are interested in. We’re less interested in protecting women on the web.'" This comment was received negatively by the EFF's Director of International Freedom of Expression, Jillian York.
Another 2013 controversy that received commentary from Aurora was Donglegate, in which PyCon attendee Adria Richards faced backlash for reporting a conversation overheard between two men sitting near her. Aurora condemned the threats sent to Richards and stated that Anonymous, by using large numbers of computers, was "distorting social pressure". When asked whether firing one of the men was an appropriate response, she said "I don't have enough information to know that." Two years later, Aurora praised the gender ratio at PyCon and called Guido van Rossum and the Python community "the biggest success story for women in open source." In the same interview, she approved of the culture of the website Tumblr and stated that Linus Torvalds' daughter Patricia was a positive role model.
See also
AdaCamp
References
Further reading
External links
Valerie Aurora on File Systems and the Ada Initiative: An Interview
American feminists
Free software programmers
IBM employees
Red Hat employees
Intel people
Linux people
Living people
New Mexico Institute of Mining and Technology alumni
People from New Mexico
Place of birth missing (living people)
Sex-positive feminists
Sun Microsystems people
Women founders
21st-century American women scientists
21st-century American scientists
American women's rights activists
Year of birth missing (living people) |
35682554 | https://en.wikipedia.org/wiki/Anti-tamper%20software | Anti-tamper software | Anti-tamper software is software that makes it harder for an attacker to modify it. The measures involved can be passive, such as obfuscation to make reverse engineering difficult, or active tamper-detection techniques, which aim to make a program malfunction or not operate at all if modified. It is essentially tamper resistance implemented in the software domain. It shares certain aspects with, but also differs from, related technologies like copy protection and trusted hardware, though it is often used in combination with them. Anti-tampering technology typically makes the software somewhat larger and also has a performance impact. There are no provably secure software anti-tampering methods; thus, the field is an arms race between attackers and software anti-tampering technologies.
Tampering can be malicious, to gain control over some aspect of the software with an unauthorized modification that alters the computer program code and behaviour. Examples include installing rootkits and backdoors, disabling security monitoring, subverting authentication, injecting malicious code for the purposes of data theft or to achieve higher user privileges, altering control flow and communication, bypassing license code for the purpose of software piracy, interfering with code to extract data or algorithms, and counterfeiting. Software applications are vulnerable to the effects of tampering and code changes throughout their lifecycle from development and deployment to operation and maintenance.
Anti-tamper protection can be applied either internally or externally to the application being protected. External anti-tampering is normally accomplished by monitoring the software to detect tampering. This type of defense commonly takes the form of malware scanners and anti-virus applications. Internal anti-tampering is used to turn an application into its own security system and is generally done with specific code within the software that will detect tampering as it happens. This type of tamper-proofing defense may take the form of runtime integrity checks such as cyclic redundancy checksums, anti-debugging measures, encryption or obfuscation. Execution inside a virtual machine has become a common anti-tamper method used in recent years for commercial software; it is used for example in StarForce and SecuROM. Some anti-tamper software uses white-box cryptography, so cryptographic keys are not revealed even when cryptographic computations are being observed in complete detail in a debugger. A more recent research trend is tamper-tolerant software, which aims to correct the effects of tampering and allow the program to continue as if unmodified. A simple (and easily defeated) scheme of this kind was used in the Diablo II video game, which stored its critical player data in two copies at different memory locations; if one was modified externally, the game used the lower value.
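As a rough illustration of the runtime integrity checks mentioned above, the sketch below shows a program comparing a checksum of its own file against a value assumed to have been recorded at build time. It is a minimal, generic example in Python, not the scheme of any named product, and the expected value shown is a placeholder.

 # Minimal sketch of a runtime self-integrity check (illustrative only).
 import sys
 import zlib

 EXPECTED_CRC = 0x1C291CA3  # placeholder; assumed to be embedded at build time

 def current_crc(path):
     """Checksum the program's own file as it exists on disk."""
     with open(path, "rb") as f:
         return zlib.crc32(f.read()) & 0xFFFFFFFF

 def verify_integrity():
     # Real schemes exclude the stored constant from the checksummed region,
     # hide or duplicate the check, and obfuscate it; a single obvious
     # comparison like this one is easy for an attacker to patch out.
     if current_crc(sys.argv[0]) != EXPECTED_CRC:
         raise SystemExit("tamper detected: refusing to run")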
Anti-tamper software is used in many types of software products including: embedded systems, financial applications, software for mobile devices, network-appliance systems, anti-cheating in games, military, license management software, and digital rights management (DRM) systems. Some general-purpose packages have been developed which can wrap existing code with minimal programming effort; for example the SecuROM and similar kits used in the gaming industry, though they have the downside that semi-generic attacking tools also exist to counter them. Malicious software itself can and has been observed using anti-tampering techniques, for example the Mariposa botnet.
See also
Hardening (computing)
Fault tolerance
Denuvo
Digital rights management
References
Computer security software |
9535081 | https://en.wikipedia.org/wiki/Telengard | Telengard | Telengard is a 1982 role-playing dungeon crawler video game developed by Daniel Lawrence and published by Avalon Hill. The player explores a dungeon, fights monsters with magic, and avoids traps in real-time without any set mission other than surviving. Lawrence first wrote the game as DND, a 1976 version of Dungeons & Dragons for the DECsystem-10 mainframe computer. He continued to develop DND at Purdue University as a hobby, rewrote the game for the Commodore PET 2001 after 1978, and ported it to Apple II+, TRS-80, and Atari 800 platforms before Avalon Hill found the game at a convention and licensed it for distribution. Its Commodore 64 release was the most popular. Reviewers noted Telengard's similarity to Dungeons and Dragons. RPG historian Shannon Appelcline noted the game as one of the first professionally produced computer role-playing games, and Gamasutra's Barton considered Telengard consequential in what he deemed "The Silver Age" of computer role-playing games preceding the golden age of the late 1980s. Some of the game's dungeon features, such as altars, fountains, teleportation cubes, and thrones, were adopted by later games such as Tunnels of Doom (1982).
Gameplay
In Telengard, the player travels alone through a dungeon fraught with monsters, traps, and treasures in a manner similar to the original Dungeons & Dragons. The game has 50 levels with two million rooms, 20 monster types, and 36 spells. It has no missions or quests, and its only objective is to survive and improve the player character. The game is set in real-time and cannot be paused, so the player must visit an inn to save game progress. In the early releases such as Apple II, the game world has no sound and is represented by ASCII characters, such as slashes for stairs and dollar signs for treasure. Unless the player enters a special cheat code, progress is lost upon dying.
The single-player adventure begins by personalizing a player character. Each character has randomly generated values for their statistical character attributes: charisma, constitution, dexterity, intelligence, strength, and wisdom. The algorithm never changes, but the player can randomize repeatedly for new character attribute distributions until satisfied. The player begins with a sword, armor, shield, and no money, and can only see the immediate surroundings, rather than the whole level. Monsters spawn randomly, and players have three options in battle: fight, use magic, or evade. Magic includes combative missiles, fireballs, lightning bolts, turning the undead, health regeneration, and trap navigation. The effects of the game's most complex spells are not outlined in the instruction manual and must be learned by trial and error. Like the game, the battle events are carried out in real-time instead of in turns. Enemies increase in difficulty as the player progresses through the dungeon. They include both living and undead monsters such as elves, dragons, mummies, and wraiths. Defeating enemies awards experience points, which accrete to raise the player's experience level and increase player stats. The player is rewarded with treasures that include magical weapons, armor items, and potions. Players can code their own features into the game.
Development
While a computer science student at Purdue University, Daniel Lawrence wrote several hobbyist computer games for the university's PDP-11 RSTS/E mainframe computer, and one grew into Telengard. In his 1976 and 1977 college summer breaks at home, he worked at BOCES in Spencerport, New York, where he wrote a dungeon crawl game called DND (postdating the similar but unrelated dnd) in the BASIC programming language for the DECsystem-10's TOPS-10 operating system. He had been influenced by the pen and paper role-playing game Dungeons & Dragons. At college, he ported the game to Purdue's PDP-11 RSTS/E. The game's mechanics grew from conversations at the Purdue engineering building. Part of its real-time nature descended from the need to not have players monopolize the few shared computer terminals.
In 1978, Lawrence purchased the Commodore PET 2001 and no longer needed the university's computer, though the microcomputer's lack of memory was his primary design obstacle. He rewrote DND as Telengard within eight kilobytes of memory and designed the dungeon to be procedurally generated based on the player-character's position so the maps would not have to be stored in memory. The final version used almost all of its 32 kilobytes of memory. It was easily ported to the Apple II+ and TRS-80 platforms due to their similar usage of the 8K BASIC programming language. The later Atari 800 port required a more complicated handling of string variables. The three ports were finished before Avalon Hill saw the game at a gaming convention and licensed it in 1982 as one of its first computer games. The IBM PC port required a rewrite into the C programming language; the source code for this version was later lost. The Heath/Zenith CP/M version requires MBASIC. The game's Commodore 64 port was the most popular.
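The position-based generation described above can be illustrated with a short sketch: if each room's contents are a pure function of its coordinates, the same dungeon reappears on every visit without any map being stored. The Python below is a hypothetical modern illustration of the idea, not Telengard's actual BASIC routine, and the feature list and seed value are invented.

 # Hypothetical sketch of position-keyed procedural generation.
 import hashlib

 FEATURES = ["empty room", "monster", "treasure", "stairs", "fountain", "altar"]

 def room(level, x, y, seed=1982):
     """Deterministically pick a feature for the room at (level, x, y)."""
     key = f"{seed}:{level}:{x}:{y}".encode()
     value = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
     return FEATURES[value % len(FEATURES)]

 # Revisiting the same coordinates always yields the same room, so even a
 # dungeon of millions of rooms needs no stored map.
 assert room(1, 10, 4) == room(1, 10, 4)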
Matt Barton of Gamasutra reported that Lawrence's DND (and consequently, his Telengard) was directly inspired by Whisenhunt and Wood's dnd for PLATO, with its randomized dungeons and minimalist graphics, though Lawrence recalled in an interview that he had not seen or known of their game. Computer Gaming World's Scorpia wrote that Telengard was based on the earlier, public domain software Castle Telengard.
The game's BASIC source code was available, so ports and remasters were made by the fan community.
Reception and legacy
Norman Banduch provided an early review for Telengard in the December 1982 issue of The Space Gamer, saying "Telengard could have been a good game, but is marred by poor programming and lack of polish. If you don't want to rewrite it yourself, wait for the second edition."
RPG historian Shannon Appelcline identifies Telengard as one of the first professionally produced computer role-playing games. Gamasutra's Barton described the game as a "pure dungeon crawler" for its lack of diversions, and noted its expansive dungeons as a "key selling point". AllGame's Earl Green remarked that the game's mechanics were very similar in practice to Dungeons & Dragons; The Commodore 64 Home Companion described it as having a Dungeons & Dragons style; Computer Gaming World's Dick McGrath also said the game "borrowed heavily" from the original, such that he expected its creators to be thanked in the end credits; and Scorpia cited four specific similarities with Dungeons & Dragons.
Green described the game as "exceedingly simple ... yet very addictive" and rated it four of five stars. McGrath wrote that he wanted to have more control over his money, and added that a store for purchasing upgrades would have been useful. He thought that games such as Dunjonquest and Maces and Magic handled this aspect better. McGrath suggested that the player draw their own map in the absence of an overview mapping system. He said that his appreciation for the game grew with time and that it had the necessary hook to make him continually return and play again. Tony Roberts of Compute! considered the Commodore 64 version of the game best for its enhanced graphics. The Commodore 64 Home Companion agreed, stating that it "has some fine sprite graphics and sound effects not found in other versions of the game". Scorpia in 1993 stated that while Telengard was "interesting for its time, the game would be pretty dated today" compared to the Gold Box games; "back then, however, it was hot stuff, and a fun way of passing the time".
Barton of Gamasutra placed Telengard alongside Wizardry and the early Ultima series in what he deemed "The Silver Age" of computer role-playing games that preceded the golden age of the late 1980s. Yet in 1992, Computer Gaming World's Gerald Graef wrote that Telengard and Temple of Apshai were "quickly overshadowed" by the Wizardry and Ultima series. Some of the game's dungeon features, such as altars, fountains, teleportation cubes, and thrones, were adopted by later games such as Tunnels of Doom (1982), and Sword of Fargoal (1982) has similar features. Barton wrote in 2007 that Telengard "still enjoys considerable appreciation today" and questioned whether the Diablo series was "but an updated Telengard".
Notes
References
External links
1982 video games
Apple II games
Atari 8-bit family games
Avalon Hill video games
CP/M games
Commercial video games with freely available source code
Commodore 64 games
Commodore PET games
DOS games
Dungeon crawler video games
Fantasy video games
FM-7 games
Freeware games
Roguelike video games
Role-playing video games
Single-player video games
TRS-80 games
Video games developed in the United States
Video games using procedural generation |
33532463 | https://en.wikipedia.org/wiki/Protecting%20Children%20from%20Internet%20Pornographers%20Act%20of%202011 | Protecting Children from Internet Pornographers Act of 2011 | The Protecting Children from Internet Pornographers Act of 2011 (H.R. 1981) was a United States bill designed with the stated intention of increasing enforcement of laws related to the prosecution of child pornography and child sexual exploitation offenses. Representative Lamar Smith (R-Texas), sponsor of H.R. 1981, stated that, "When investigators develop leads that might result in saving a child or apprehending a pedophile, their efforts should not be frustrated because vital records were destroyed simply because there was no requirement to retain them."
Organizations that support the goal of the bill include the National Sheriffs' Association, the National Center for Missing and Exploited Children (NCMEC), the National Center for Victims of Crime, and Eastern North Carolina Stop Human Trafficking Now.
H.R. 1981 has been criticized for its scope and privacy implications. Opponents of the bill, which include the Electronic Frontier Foundation (EFF), the American Civil Liberties Union, and the American Library Association, take issue with the violation of privacy that would necessarily occur if government could compel ISPs to render subscriber information. Kevin Bankston, an EFF staff attorney, stated that "The data retention mandate in this bill would treat every Internet user like a criminal and threaten the online privacy and free speech rights of every American ...".
History
On May 25, 2011, Representative Lamar Smith of Texas introduced the bill. It was co-sponsored by 25 other House Representatives. The bill passed the United States House Judiciary Committee on July 28, 2011, by a vote of 19–10. As of January 2012, the bill had 39 co-sponsors. A Congressional Budget Office report on the costs of enacting the bill was released on October 12, 2011. The next step for the bill would be a debate in the House of Representatives.
Scope
H.R. 1981 would introduce harsher penalties for offenders and make it a crime to financially facilitate the sale, distribution and purchase of child pornography. The bill would also amend Section 2703 of the Stored Communications Act, requiring ISPs to retain user IP addresses for at least one year, thereby enabling identification of the "corresponding customer or subscriber information" listed in subsection (c)(2) of 18 USC 2703. Retained information would include subscribers' names, addresses, length of service, telephone numbers, and means and sources of payment for services (including credit card or bank account numbers, if they were used to pay for service). The bill does not introduce limits on subscriber information that may be retained by the ISPs and accessed by the government. The bill also protects ISPs from civil actions resulting from the loss of data stored as a requirement of the bill. The bill also requires the Attorney General to conduct studies related to the costs of compliance for service providers as well as the compliance standards implemented by service providers. The cost assessment would include hardware, software, and all personnel involved in the compliance, and the compliance assessment would include a survey of the privacy standards implemented by the providers and the frequency of reported breaches of data.
Use of the data ISPs would be forced to retain under the bill would not be limited to investigations of child pornography, but would be available for law enforcement perusal for any issue, but only with probable cause and a warrant. However, issues involving unregistered sex offenders would allow for the use of an administrative subpoena, which is different from a warrant or judicial subpoena, and which does not require probable cause. The bill does not grant the right to access subscriber records to any "person or other entity that is not a governmental entity."
The bill also does not provide extra funding to investigate or prosecute additional child pornography related cases.
Purpose
On July 12, 2011, Michael J. Brown, the Sheriff of Bedford County, VA, provided testimony on H.R. 1981 before the United States House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security. In his testimony, Brown claimed that the growth of technology and the ability to claim anonymity has "enabled child pornography to become a worldwide epidemic" and made it more difficult for law enforcement to identify and prosecute child predators. Brown further reasoned that an Internet Service Provider (ISP) could retain client records for a limited span of time, ranging from a couple of hours, days, or weeks, and that a lack of uniformity across ISPs "significantly hinders law enforcement's ability to identify predators when they come across child pornography." He then provided an account of a case in which his county received a cybertip from the NCMEC involving an individual who posted that they were exposing themselves to a toddler. The only information he claimed law enforcement possessed was the IP address that was accessing a Yahoo chat room through an nTelos wireless connection. During the investigation, law enforcement discovered that the ISP only retained the Media Access Control address and IP history for 30 days, a limit that foreclosed their opportunity to access investigative material.
NCMEC, which created the CyberTipline over a decade ago, reported that, "To date, more than 51 million child pornography images and videos have been reviewed by the analysts in NCMEC's Child Victim Identification Program"; it is estimated that "[Forty] percent or more of people who possess child pornography also sexually assault children", and H.R. 1981 "equips federal, state and local law enforcement agencies with the modern-day tools needed to combat the escalation in child pornography and child exploitation crimes."
It has been suggested by critics including the Center for Democracy and Technology, that H.R. 1981 was framed as a child protection measure at least in part to make it more difficult for members of Congress to reject the bill.
Privacy issues
Various groups have expressed concerns over the privacy implications of the data providers would be required to retain under the act, including the Electronic Frontier Foundation, the American Civil Liberties Union, and the American Library Association. Concerns raised include the security of the data from a hacker, the nature of the data collected, as well as the potential for misuse by law enforcement, or use in investigations that are not child pornography-related.
Even though only assigned IP addresses and certain subscriber data would be retained, some commenters, including the EFF and some editorialists, have suggested that the data could be used to deduce any given user's personal habits, including a detailed map of where they customarily are at any given point in a day. The CDT also issued a comprehensive memorandum regarding the Data Retention Mandate in H.R. 1981, in which it detailed how data retention provisions in H.R. 1981 would raise issues concerning privacy and free speech, among the few other issues that the bill raises.
Representative Zoe Lofgren (D-Calif.), a vocal opponent of the bill, presented an amendment to rename the bill the "Keep Every American's Digital Data for Submission to the Federal Government Without a Warrant Act." Lofgren also argued that the bill's pertinence to "commercial" ISPs would allow criminals to circumvent the legislation if they used the Internet anonymously in venues including Internet cafes or libraries. Rep. John Conyers (D-Mich.) also opposed the bill, saying "This is not protecting children from Internet pornography. It's creating a database for everybody in this country for a lot of other purposes."
Lamar Smith, however, has defended the data retention requirements present in the bill in stating that, "Some Internet service providers currently retain these [IP] addresses for business purposes. But the period of retention varies widely among providers, from a few days to a few months. The lack of uniform data retention impedes the investigation of Internet crimes." Smith also stated that the number of child pornography cases has grown by 150% per year over the past ten years.
Marc Rotenberg, president of the Electronic Privacy Information Center, has gone on record as saying that "the bill's expansion of data retention is counter to the growing practice to limit data retention as a mechanism to counter security threats." Rotenberg also noted that there is a strong movement towards minimization of data retention in the information security arena, and that data retention is in direct conflict with that movement. He concluded that data minimization, not data retention, is the best way to protect consumer privacy.
Cost
On October 12, 2011, a report by the Congressional Budget Office on the financial impact of the bill was released. The report stated that the cost to the government would be minimal and that the private companies providing Internet services would pay over $200 million in costs, including servers for storage of the user data.
The Center for Democracy and Technology has issued a report suggesting that the cost of data retention would be much higher than the Congressional Budget Office report indicates, and would grow prohibitively expensive with ongoing trends in internet addressing.
Similar legislation
H.R. 1981 is similar to Canada's Protecting Children from Internet Predators Act which "requires Internet providers to acquire the ability to engage in multiple simultaneous interceptions and gives law enforcement the power to audit their surveillance capabilities. Should it take effect, the bill would create a new regulatory environment for Internet providers, requiring them to submit a report within months of the law taking effect describing their equipment and surveillance infrastructure. Moreover, they would actively work with law enforcement to test their facilities for interception purposes and even provide the name of employees involved in interceptions to allow for possible RCMP background checks."
See also
Child pornography laws in the United States
Cyber Intelligence Sharing and Protection Act
Data retention
Internet privacy
Stored Communications Act
Electronic Communications Privacy Act
Fourth Amendment to the United States Constitution
Russian State Duma Bill 89417-6
References
External links
H.R. 1981 at Open Congress
H.R. 1981 on Thomas – Library of Congress
H.R. 1981 on GovTrack
Testimony on H.R. 1981 by Sheriff, Bedford County, VA Michael J. Brown
Section 2703 of Title 18 of the United States Code
Eastern North Carolina Stop Human Trafficking Now
Memorandum on Data Retention Mandate in H.R. 1981
Bill C-30
Child pornography
United States federal computing legislation
Internet access
Internet law in the United States
United States federal child welfare legislation
Internet privacy legislation
Proposed legislation of the 112th United States Congress
Childhood in the United States |
1272050 | https://en.wikipedia.org/wiki/Carrier%20Grade%20Linux | Carrier Grade Linux | Carrier Grade Linux (CGL) is a set of specifications which detail standards of availability, scalability, manageability, and service response characteristics which must be met in order for a Linux kernel-based operating system to be considered "carrier grade" (i.e. ready for use within the telecommunications industry). The term is particularly applicable as telecom converges technically with data networks and adopts commercial off-the-shelf commoditized components such as blade servers.
Carrier-grade is a term for public network telecommunications products that require up to 5 nines or 6 nines (or 99.999 to 99.9999 percent) availability, which translates to downtime per year of 30 seconds (6 nines) to 5 minutes (5 nines). The term "5 nines" is usually associated with carrier-class servers, while "6 nines" is usually associated with carrier-class switches.
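The downtime figures follow directly from the availability percentages; the short calculation below (a generic sketch, not part of the CGL specification) makes the conversion explicit.

 # Convert an availability percentage into allowed downtime per year.
 SECONDS_PER_YEAR = 365 * 24 * 60 * 60

 def downtime_per_year(availability_percent):
     """Seconds of downtime permitted per year at a given availability."""
     return SECONDS_PER_YEAR * (1 - availability_percent / 100)

 print(downtime_per_year(99.999) / 60)   # five nines: roughly 5.3 minutes
 print(downtime_per_year(99.9999))       # six nines: roughly 31.5 seconds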
CGL project and goals
The primary motivation behind the CGL effort is to present an open architecture alternative to the closed, proprietary software on proprietary hardware systems that are currently used in telecommunication systems. These proprietary systems are monolithic (hardware, software and applications integrated very tightly) and operate well as a unit. However, they are hard to maintain and scale as telecommunications companies have to utilize the services of the vendor for even relatively minor enhancements to the system.
CGL seeks to progressively reduce or to eliminate this dependence on proprietary systems and provide a path for easy deployment and scalability by utilizing cheap COTS systems to assemble a telecommunications system.
The CGL effort was started by the Open Source Development Lab (CGL Working Group). The specification is now in the combined Linux Foundation. The latest specification release is CGL 5.0. Several CGL-registered Linux distributions exist, including MontaVista, Wind River Systems and Red Flag Linux.
Applications and services
The OSDL CGLWG defines three main types of applications that carrier-grade Linux will support — gateways, signaling servers, and management.
Gateway applications provide bridging services between different technologies or administrative domains. Gateway applications are characterized by supporting many connections in real-time over many interfaces, with the requirement of not losing any frames or packets. An example of a gateway application is a media gateway, which converts conventional voice circuits using TDM to IP packets for transmission over an IP-switched network.
Signaling server applications, which include SS7 products, handle control services for calls, such as routing, session control, and status. Signaling server applications are characterized by sub-millisecond real-time requirements and large numbers of simultaneous connections (10,000 or more). An example signaling server application would include control processing for a rack of line cards.
Management applications handle traditional service and billing operations, as well as network management. Management applications are characterized by a much less stringent requirement for real-time, as well as by additional database and communication-oriented requirements. A typical management application might handle visitor and home location registers for mobile access, and authorization for customer access to billable services.
See also
SCOPE Alliance
OpenSAF
Notes
External links
Carrier Grade Linux from the Linux Foundation
Computer standards
Linux
Linux Foundation projects
Telecommunications standards |
1140661 | https://en.wikipedia.org/wiki/Hack%20Canada | Hack Canada | Hack Canada is a Canadian organization run by hackers and phreakers that provides information mainly about telephones, computer technology, and legal issues related to technology.
Founded in 1998 by CYBØRG/ASM, HackCanada has appeared in media publications many times, including Wired News and the Edmonton Sun newspaper (as well as other regional newspapers), for developments such as a Palm Pilot red boxing program. HackCanada has also been featured in books such as Hacking for Dummies () and Steal This Computer Book. HackCanada was also featured often on the Hacker News Network.
On November 29, 2017, almost twenty years after its registration, the HackCanada.com domain went offline and now displays a "This Domain Name Has Expired" message.
References
External links
Hacker groups
Scientific organizations based in Canada
1998 establishments in Canada
Organizations established in 1998 |
236322 | https://en.wikipedia.org/wiki/Plone%20%28software%29 | Plone (software) | Plone is a free and open source content management system (CMS) built on top of the Zope application server. Plone is positioned as an enterprise CMS and is commonly used for intranets and as part of the web presence of large organizations. High-profile public sector users include the U.S. Federal Bureau of Investigation, Brazilian Government, United Nations, City of Bern (Switzerland), New South Wales Government (Australia), and European Environment Agency. Plone's proponents cite its security track record and its accessibility as reasons to choose Plone.
Plone has a long tradition of development happening in so-called "sprints", in-person meetings of developers over the course of several days, the first having been held in 2003 and nine taking place in 2014. The largest sprint of the year is the one immediately following the annual conference. Certain other sprints are considered strategic and so are funded directly by the Plone Foundation, although very few attendees are sponsored directly. The Plone Foundation also holds and enforces all copyrights and trademarks in Plone, and is assisted by legal counsel from the Software Freedom Law Center.
History
The Plone project began in 1999 by Alexander Limi, Alan Runyan, and Vidar Andersen. It was made as a usability layer on top of the Zope Content Management Framework. The first version was released in 2001. The project quickly grew into a community, receiving plenty of new add-on products from its users. The increase in community led to the creation of the annual Plone conference in 2003, which is still running today. In addition, "sprints" are held, where groups of developers meet to work on Plone, ranging from a couple of days to a week. In March 2004, Plone 2.0 was released. This release brought more customizable features to Plone, and enhanced the add-on functions. In May 2004, the Plone Foundation was created for the development, marketing, and protection of Plone. The Foundation has ownership rights over the Plone codebase, trademarks, and domain names. Even though the foundation was set up to protect ownership rights, Plone remains open source.
On March 12, 2007, Plone 3 was released. This new release brought inline editing, an upgraded visual editor, and strengthened security, among many other enhancements. Plone 4 was released in September 2010. There are over 450 developers contributing to Plone's code. Plone won two Packt Open Source CMS Awards.
Release history
Design
Plone runs on the Zope application server, which is written in Python. Plone by default stores all information in Zope's built-in transactional object database (ZODB). It comes with installers for Windows, macOS, and Linux, along with other operating systems. New updates are released regularly on Plone's website. Plone is available in over 50 languages. It complies with WCAG 2.0 AA and U.S. Section 508, which allows people with disabilities to access and use Plone. A major part of Plone is its use of skins and themes. Plone's Diazo theming engine can be used to customize a website's look. These themes are written with JavaScript, HTML, XSLT, and Cascading Style Sheets. In addition, Plone comes with a user management system called the Pluggable Authentication Service (PAS). PAS is used to search for users and groups in Plone. Most importantly, PAS handles the security of users and groups, requiring authentication in order to log into Plone. This gives users greater security and better organization of their content.
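For illustration, the transactional object database mentioned above can be used directly from Python. The sketch below follows the commonly documented ZODB pattern and assumes the ZODB and transaction packages are installed; the file name and the key stored under the root object are arbitrary examples.

 # Minimal sketch of storing an object in ZODB, the transactional object
 # database Plone uses by default.
 import transaction
 import ZODB, ZODB.FileStorage

 storage = ZODB.FileStorage.FileStorage("Data.fs")  # example file name
 db = ZODB.DB(storage)
 connection = db.open()
 root = connection.root()

 # Objects reachable from the root are persisted when the transaction
 # commits; an abort would roll the change back instead.
 root["greeting"] = "hello from the object database"
 transaction.commit()

 connection.close()
 db.close()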
A large part of Plone's changes have come from its community. Since Plone is open source, the members of the Plone community regularly make alterations or add-ons to Plone's interface, and make these changes available to the rest of the community via Plone's website.
The name Plone comes from a band by that name and "Plone should look and feel like the band sounds".
Languages
Plone is built on the Zope application framework and therefore is primarily written in Python, but it also contains large amounts of HTML and CSS, as well as JavaScript. Plone uses jQuery as its JavaScript framework in current versions, after abandoning KSS, a declarative framework for progressive enhancement. Plone uses an XML dialect called ZCML for configuration, as well as an XML-based templating language, meaning approximately 10% of the total source code is XML-based.
Add-on products
The community supports and distributes thousands of add-ons via company websites, but mostly through PyPI and www.plone.org. There are currently 2149 packages available via PyPI for customizing Plone.
Since its release, many of Plone's updates and add-ons have come from its community. Events called Plone "sprints" consist of members of the community coming together for a week and helping improve Plone. The Plone conference is also attended and supported by members of the Plone community. In addition, Plone has an active IRC channel to give support to users who have questions or concerns. Up through 2007, there had been over one million downloads of Plone. Plone's development team has also been ranked in the top 2% of the largest open source communities.
Strengths and weaknesses
A 2007 comparison of CMSes rated Plone highly in a number of categories (standards conformance, access control, internationalization, aggregation, user-generated content, micro-applications, active user groups and value). However, as most of the major CMSes, including Plone, Drupal, WordPress and Joomla, have undergone major development since then, only limited value can be drawn from this comparison. Plone is available on many different operating systems, due to its use of platform-independent underlying technologies such as Python and Zope. Plone's Web-based administrative interface is optimized for standards, allowing it to work with most common web browsers, and uses additional accessibility standards to help users who have disabilities. All of Plone's features are customizable, and free add-ons are available from the Plone website.
Focus on security
Mitre is a not-for-profit corporation which hosts the Common Vulnerabilities and Exposures (CVE) Database. The CVE database provides a worldwide reporting mechanism for developers and the industry and is a source feed into the U.S. National Vulnerability Database (NVD). According to Mitre, Plone has the lowest number of reported lifetime and year to date vulnerabilities when compared to other popular Content Management Systems. This security record has led to widespread adoption of Plone by government and non-governmental organizations, including the FBI.
The following table compares the number of CVEs as reported by Mitre. Logged CVEs take into account vulnerabilities exposed in the core product as well as in the software's modules; those modules may be provided by third-party vendors rather than by the primary software provider.
See also
Content management system
Diazo (software)
List of content management systems
List of applications with iCalendar support
Zope
References
External links
Free content management systems
Zope
Free software programmed in Python
Cross-platform software
Web frameworks |
31117139 | https://en.wikipedia.org/wiki/Asphalt%20%28series%29 | Asphalt (series) | Asphalt is a series of racing video games developed and published by Gameloft. Games in the series typically focus on fast-paced arcade racing set in various locales throughout the world, tasking players to complete races while evading the local law enforcement in police pursuits.
Asphalt Urban GT, the first game in the series, was released for the Nintendo DS and N-Gage in 2004 alongside simplified J2ME versions for mobile phones. Incarnations of the game for various other platforms soon followed, the latest in the main series being Asphalt 9: Legends released in 2018. A number of spinoffs were also released, such as the endless runner Asphalt Overdrive; Asphalt Nitro, a minimal version of Asphalt for low-end devices with procedural generation as a selling point; Asphalt Xtreme, an off-road-centric entry into the series; and the drag racing game Asphalt Street Storm.
Common elements
The series puts emphasis on fast-paced, arcade-style street racing in the vein of Need for Speed, along with elements from other racing games such as Burnout; the spin-off game Asphalt Xtreme takes place in an off-road racing setting, with open-wheel buggies, sport-utility vehicles and rally cars in lieu of supercars as in previous games. Each game in the series puts players behind the wheel of licensed sports cars from various manufacturers, from entry-level models such as the Dodge Dart GT, to supercars like the Bugatti Veyron, and even concept cars such as Mercedes-Benz's Biome design study.
Police chases are a recurring gameplay element especially in the early games, but were de-emphasized in favour of stunt jumps and aerobatic maneuvers as of Airborne; they made a return, however, with Overdrive and Nitro, the latter of which combined elements from Airborne and previous games in the series.
Over the course of the games, players are gradually given access to various race courses, most of which are modelled after real-world locations and major cities, and upgrades for vehicles which can be bought from money earned in a race, or in later games, points or through in-application purchases using real currency. Events are presented in increasing difficulty as players advance through the game, sometimes requiring them to complete bonus challenges, e.g. taking down a given number of opponent racers or finishing the race without wrecking their vehicle.
History
The first game in the series is Asphalt Urban GT, which was released for the Nintendo DS and N-Gage on November 21, 2004, with simplified versions for J2ME mobile phones being released on December 2.
Asphalt 4: Elite Racing was the first game in the series to be released for iOS. Asphalt 6: Adrenaline marks the first game in the series to be released for macOS; later home computer releases in the series are exclusive to Microsoft Windows, with Asphalt 7: Heat being the first to be released on the Windows Store.
Asphalt 8: Airborne, the eighth main installment and tenth title overall, was released in 2013 for iOS, Android, Windows and Blackberry platforms to critical acclaim, becoming one of the bestselling games on the iOS App Store and Google Play Store. Asphalt Nitro, the twelfth title in the series, was quietly released on Gameloft's own app store in May 2015 for Android, alongside a 2.5D J2ME version of the game for feature phones. A main selling point of Nitro was the game's small resource footprint, which was aided by the use of procedural generation.
A free-to-play spinoff entitled Asphalt Overdrive was released for iOS and Android in September 2014. Unlike prior titles in the series, the game is presented as an "endless runner" similar to the Temple Run franchise and Subway Surfers, and does not offer a traditional racing mode. Overdrive was followed by Asphalt Xtreme, which focuses on arcade-style off-road racing, and in 2016 by Asphalt Street Storm, a rhythm-based drag racing game in the vein of NaturalMotion's CSR Racing. Street Storm was quietly released in the Philippines in December 2016 for iOS devices. Asphalt 9: Legends was released worldwide in July 2018, for macOS in January 2020, and for Xbox One and Xbox Series X/S on August 31, 2021.
Games
The following games have been released in the series (the latest being Asphalt Nitro 2):
Asphalt: Urban GT (N-Gage, NDS, J2ME, BREW, DoJa)
Asphalt: Urban GT 2 (N-Gage, NDS, Symbian, PSP, J2ME)
Asphalt: Import Tuner Edition (J2ME, BREW)
Asphalt 3: Street Rules (N-Gage, Symbian, Windows Mobile, J2ME)
Asphalt 4: Elite Racing (N-Gage, iOS, DSiWare, Symbian OS, Windows Mobile, J2ME, BlackBerry OS)
Asphalt Online (DoJa, BREW)
Asphalt 5 (iOS, Android, Symbian, Windows Phone 7, Bada, webOS, Freebox)
Asphalt 6: Adrenaline (iOS, OS X, Android, Symbian, J2ME, BlackBerry Tablet OS, Bada, webOS, Freebox)
Asphalt Audi RS 3 (iOS)
Asphalt 3D (3DS)
Asphalt: Injection (PS Vita, Android)
Asphalt 7: Heat (iOS, Android, Windows Phone 8, Windows 8, Windows 10, BlackBerry 10, BlackBerry Tablet OS)
Asphalt 8: Airborne (iOS, Android, Windows Phone 8, Windows RT, Windows 8, BlackBerry 10, Windows 10, Windows 10 Mobile, tvOS, macOS, Tizen)
Asphalt Overdrive (iOS, Android, Windows Phone 8, Windows 8, Windows 10, Windows 10 Mobile)
Asphalt Nitro (Android, Java ME, Tizen)
Asphalt Xtreme (iOS, Android, Windows 8, Windows Phone 8, Windows 10, Windows 10 Mobile)
Asphalt Street Storm (iOS, Android, Windows 8, Windows 10)
Asphalt 9: Legends (iOS, Android, Windows 10, Nintendo Switch, macOS, Xbox One, Xbox Series X/S)
Asphalt Retro (Browser)
Asphalt Nitro 2 (Android)
References
External links
Gameloft games
Vivendi franchises
Racing video games
Video game franchises introduced in 2004 |
50901764 | https://en.wikipedia.org/wiki/Videology | Videology | Videology is an advertising software company based in New York City. It was founded in 2007 as Tidal TV and launched a Hulu competitor in 2008. In 2012, it was rebranded as Videology; it now develops software that targets ads at specific demographics within an audience of video viewers, performs analytics, and provides other functions.
History
The idea for Videology was conceived in 2006 when founder Scott Ferber heard AOL Time Warner CEO Jeff Bewkes express concern that distribution of television content online might reduce cable subscriptions. Videology was initially founded to develop software to stream television programming online. The company officially began operations in 2007 under the name Tidal TV. It was founded in Baltimore by Ferber. In its first year of operation, Tidal TV raised $15 million in venture funding. Beta testing of its video streaming software began in 2008.
In June 2008, the company launched a free television-watching site that competed with sites like Hulu and Joost. According to US News & World Report and Read Write Web, the website had a good selection of channels and ran smoothly, but was not well-known. At the time Hulu was still in beta and shortly afterwards introduced a user interface similar to Tidal TV's. Tidal TV served 84,000 viewers in its first month, compared to Hulu's 9.7 million viewers during the same time period. However, Tidal TV continued to expand its programming and staff. By 2009, it was selling specific demographic audiences to advertisers using algorithms and analytics. The company had just 40 employees in 2010 and 80 by 2011. Another $30 million in funding was raised in March 2011 and by 2012 it had $200 million in revenues.
In 2012 Tidal TV was renamed to Videology. With the rebrand, it introduced new software products for content publishers selling advertising. Later that year it acquired an advertising marketplace, LucidMedia, for an undisclosed sum, and a data management company, Collider, for $13.2 million. Videology raised another $60 million in 2013, reaching a total of $120 million in funding. By 2014, it had operations in 28 countries, up from three in 2011. In 2014, Videology said half of its revenues were coming from television advertising budgets and it created a TV division within the company. Its headquarters were moved to New York City in August 2016.
Videology announced chapter 11 bankruptcy in May 2018, and was subsequently acquired by the Singtel Group that owns the Amobee brand.
Software
Videology developed software for advertisers, content publishers and viewers that uses algorithms and analytics to target different demographics with different ads while watching television programming or other digital video content. Advertising space for targeted demographics is then sold at a premium. Its revenue model is a mix of software licensing and deals where it is paid a percentage of advertising spend. According to the company's website, it sells software products for advertising analytics, optimization (revenue and scenario planning), and management respectively. Its software is integrated with AT&T and Adobe advertising systems to allow advertisers to send custom ads to different demographics. The software is also licensed by Canadian broadcasters Bell Media and Rogers Media.
History
Videology released an update called Descartes in 2013, which introduced a new user interface, as well as more advertising management and targeting features. In 2014, it announced a partnership with Mediaocean in order to integrate Videology software with Spectra, a common software application used by advertisers to shop for placements. In January 2016, Videology added data from third-party vendors DoubleVerify, Integral Ad Science, and Moat to verify the number of people viewing an advertisement and to allow advertising to be purchased based on the number of verified impressions. In April 2016, Videology increased its use of Nielsen data. Videology's data is matched to Nielsen's in order to observe the behavior of audiences across television and internet mediums.
Operations
The company publishes a quarterly "U.S. Video Market At-A-Glance" report, based on data from the advertisements running on its software.
See also
Tech companies in the New York metropolitan area
References
External links
Online advertising
Software companies established in 2007
Software companies based in New York City
Software companies of the United States |
51141175 | https://en.wikipedia.org/wiki/Democratic%20National%20Committee%20cyber%20attacks | Democratic National Committee cyber attacks | The Democratic National Committee cyber attacks took place in 2015 and 2016, in which two groups of Russian computer hackers infiltrated the Democratic National Committee (DNC) computer network, leading to a data breach. Cybersecurity experts, as well as the U.S. government, determined that the cyberespionage was the work of Russian intelligence agencies.
Forensic evidence analyzed by several cybersecurity firms, CrowdStrike, Fidelis, and Mandiant (or FireEye), strongly indicates that two Russian intelligence agencies separately infiltrated the DNC computer systems. The American cybersecurity firm CrowdStrike, which removed the hacking programs, revealed a history of encounters with both groups and had already named them, calling one of them Cozy Bear and the other Fancy Bear, names which are used in the media.
On December 9, 2016, the CIA told U.S. legislators the U.S. Intelligence Community concluded Russia conducted the cyberattacks and other operations during the 2016 U.S. election to assist Donald Trump in winning the presidency. Multiple U.S. intelligence agencies concluded that specific individuals tied to the Russian government provided WikiLeaks with the stolen emails from the DNC, as well as stolen emails from Hillary Clinton's campaign chairman, who was also the target of a cyberattack. These intelligence organizations additionally concluded Russia hacked the Republican National Committee (R.N.C.) as well as the D.N.C., but chose not to leak information obtained from the R.N.C.
Cyber attacks and responsibility
Cyber attacks that successfully penetrated the DNC computing system began in 2015. Attacks by "Cozy Bear" began in the summer of 2015. Attacks by "Fancy Bear" began in April 2016. It was after the "Fancy Bear" group began its activities that the compromise of the system became apparent. The groups were presumed to have been spying on communications, stealing opposition research on Donald Trump, as well as reading all email and chats. Both were finally identified by CrowdStrike in May 2016. Both groups of intruders were successfully expelled from the DNC systems within hours after detection. These attacks are considered to be part of a group of recent attacks targeting U.S. government departments and several political organizations, including 2016 campaign organizations.
On July 22, 2016, a person or entity going by the moniker "Guccifer 2.0" claimed on a WordPress-hosted blog to have been acting alone in hacking the DNC. He also claimed to have sent significant amounts of stolen electronic DNC documents to WikiLeaks. WikiLeaks has not revealed the source for their leaked emails. However, cybersecurity experts and firms, including CrowdStrike, Fidelis Cybersecurity, Mandiant, SecureWorks, ThreatConnect, and the editor for Ars Technica, have rejected the claims of "Guccifer 2.0" and have determined, on the basis of substantial evidence, that the cyberattacks were committed by two Russian state-sponsored groups (Cozy Bear and Fancy Bear).
According to separate reports in the New York Times and the Washington Post, U.S. intelligence agencies have concluded with "high confidence" that the Russian government was behind the theft of emails and documents from the DNC. While the U.S. intelligence community has concluded that Russia was behind the cyberattack, intelligence officials told the Washington Post that they had "not reached a conclusion about who passed the emails to WikiLeaks" and so did not know "whether Russian officials directed the leak." A number of experts and cybersecurity analysts believe that "Guccifer 2.0" is probably a Russian government disinformation cover story to distract attention away from the DNC breach by the two Russian intelligence agencies.
President Obama and Russian President Vladimir Putin had a discussion about computer security issues, which took place as a side segment during the then-ongoing G20 summit in China in September 2016. Obama said Russian hacking stopped after his warning to Putin.
In a joint statement on October 7, 2016, the United States Department of Homeland Security and the Office of the Director of National Intelligence stated that the US intelligence community is confident that the Russian government directed the breaches and the release of the obtained material in an attempt to "… interfere with the US election process."
Background
As is common among Russian intelligence services, both groups used similar hacking tools and strategies. It is believed that neither group was aware of the other. Although such duplication is antithetical to American computer intelligence practice, for fear of undermining or defeating the other's operations, it has been common practice in the Russian intelligence community since 2004.
This intrusion was part of several attacks attempting to access information from American political organizations, including the 2016 U.S. presidential campaigns. Both "Cozy Bear" and "Fancy Bear" are known adversaries, who have extensively engaged in political and economic espionage that benefits the Russian Federation government. Both are believed connected to the Russian intelligence services. Also, both access resources and demonstrate levels of proficiency matching nation-state capabilities.
"Cozy Bear" has in the past year infiltrated unclassified computer systems of the White House, the U.S. State Department, and the U.S. Joint Chiefs of Staff. According to CrowdStrike, other targeted sectors include: Defense, Energy, Mining, Financial, Insurance, Legal, Manufacturing, Media, Think tanks, Pharmaceutical, Research and Technology industries as well as universities. "Cozy Bear" observed attacks have occurred in Western Europe, Brazil, China, Japan, Mexico, New Zealand, South Korea, Turkey and Central Asia.
"Fancy Bear" has been operating since the mid-2000s. CrowdStrike reported targeting has included Aerospace, Defense, Energy, Government and the Media industries. "Fancy Bear" intrusions have occurred in United States, Western Europe, Brazil, Canada, China, Republic of Georgia, Iran, Japan, Malaysia and South Korea. Targeted defense ministries and military organizations parallel Russian Federation government interests. This may indicate affiliation with the Main Intelligence Directorate (GRU, a Russian military intelligence service). Specifically, "Fancy Bear" has been linked to intrusions into the German Bundestag and France's TV5 Monde (television station) in April 2015. SecureWorks, a cybersecurity firm headquartered in the United States, concluded that from March 2015 to May 2016, the "Fancy Bear" target list included not merely the DNC, but tens of thousands of foes of Putin and the Kremlin in the United States, Ukraine, Russia, Georgia, and Syria. Only a handful of Republicans were targeted, however.
Hacking the DNC
On January 25, 2018 Dutch newspaper de Volkskrant and TV program Nieuwsuur reported that in 2014 the Dutch Intelligence agency General Intelligence and Security Service (AIVD) successfully infiltrated the computers of Cozy Bear and observed the hacking of the head office of the State Department and subsequently the White House and were the first to alert the National Security Agency about the cyber-intrusion.
In early 2015, the NSA apprised the FBI and other agencies of the DNC intrusions which the Dutch had secretly detected, and on August 15, 2015, the FBI's Washington field office first alerted DNC technical staff to the compromise of their systems. The DNC later described the lack of higher-level communication between the party and the government as an "unfathomable lapse"; it was not until April 2016, when legal authorizations to share sensitive technical data with the government were in place, that DNC leaders were finally apprised that their systems had been penetrated.
Cozy Bear" had access to DNC systems since the summer of 2015; and "Fancy Bear", since April 2016. There was no evidence of collaboration or knowledge of the other's presence within the system. Rather, the "two Russian espionage groups compromised the same systems and engaged separately in the theft of identical credentials". "Cozy Bear" employed the "Sea Daddy" implant and an obfuscated PowerShell script as a backdoor, launching malicious code
at various times and in various DNC systems. "Fancy Bear" employed X Agent malware, which enabled distant command execution, transmissions of files and keylogging, as well as the "X-Tunnel" malware.
DNC leaders became aware of the compromise in April 2016. The attacks broadly reflect Russian government interest in the U.S. political system, in political leaders' policies, tendencies and proclivities, and in assessing possible outcomes beneficial to Russia, as well as interest in the strategies, policies, and practices of the U.S. government. They also reflect a wider interest among foreign governments in gathering information on Donald Trump as a newcomer to U.S. political leadership, in contrast to the information likely to have been garnered over decades about the Clintons.
The DNC commissioned the cybersecurity company CrowdStrike to investigate and defeat the intrusions. Its co-founder and chief technology officer, the cybersecurity expert Dmitri Alperovitch, publicly attributed the breach to the two Russian intelligence-affiliated groups.
Other cybersecurity firms, Fidelis Cybersecurity and FireEye, independently reviewed the malware and came to the same conclusion as CrowdStrike: that expert Russian hacking groups were responsible for the breach. In November 2017, U.S. authorities identified six Russian individuals who had conducted the hack. Beginning in December 2016, the Russian government arrested Sergei Mikhailov, a high-ranking government cyber-spy; Ruslan Stoyanov, a private-sector cyber-security expert; Georgy Fomchenkov, a former government cyber-spy; and Dmitry Dokuchaev, a Mikhailov associate, and charged them with aiding U.S. intelligence agencies, arrests which the New York Times associated with the DNC hacking.
Donor information
Although the DNC claimed that no personal, financial, or donor information was accessed, "Guccifer 2.0" leaked to Gawker and The Smoking Gun what it claimed were donor lists detailing DNC campaign contributions.
However, this information has not been authenticated, and doubts remain about Guccifer 2.0's backstory.
Guccifer 2.0
In June 2016, a person or persons claiming to be the hacker who had breached the DNC servers published the stolen documents online under the name "Guccifer 2.0". "Guccifer 2.0" later also claimed to have leaked 20,000 emails to WikiLeaks.
U.S. intelligence conclusions
The U.S. Intelligence Community devoted resources to debating why Putin chose the summer of 2016 to escalate active measures influencing U.S. politics. Director of National Intelligence James R. Clapper said that after the 2011–13 Russian protests, Putin's confidence in his own political viability was damaged, and that Putin responded with the propaganda operation. Former CIA officer Patrick Skinner explained the goal was to spread uncertainty. U.S. Congressman Adam Schiff, Ranking Member of the House Permanent Select Committee on Intelligence, commented on Putin's aims and said U.S. intelligence agencies were concerned with Russian propaganda. Speaking about disinformation that appeared in Hungary, Slovakia, the Czech Republic, and Poland, Schiff said there had been an increase in the same behavior in the U.S. Schiff concluded that Russian propaganda operations would continue against the U.S. after the election.
On December 9, 2016, the CIA told U.S. legislators that the U.S. Intelligence Community had concluded Russia conducted operations during the 2016 U.S. election to assist Donald Trump in winning the presidency. Multiple U.S. intelligence agencies concluded that specific individuals tied to the Russian government gave WikiLeaks hacked emails from the Democratic National Committee (DNC) and from additional sources such as John Podesta, campaign chairman for Hillary Clinton. These intelligence organizations additionally concluded that Russia hacked the Republican National Committee (RNC) as well as the DNC, but chose not to leak information obtained from the RNC. The CIA said the foreign intelligence agents were Russian operatives previously known to the U.S. CIA officials told U.S. Senators it was "quite clear" that Russia's intentions were to help Trump. Trump released a statement on December 9 dismissing the CIA's conclusions.
FBI involvement
A senior law enforcement official told CNN that the DNC had not given the FBI direct access to its compromised servers. The FBI therefore had to rely on an assessment from CrowdStrike, the firm hired by the DNC to investigate the cyberattacks.
U.S. legislative response
Members of the U.S. Senate Intelligence Committee traveled to Ukraine and Poland in 2016 and learned about Russian operations to influence their affairs. U.S. Senator Angus King told the Portland Press Herald that tactics used by Russia during the 2016 U.S. election were analogous to those used against other countries. On 30 November 2016, King joined a letter in which seven members of the U.S. Senate Intelligence Committee asked President Obama to publicize more information from the intelligence community on Russia's role in the U.S. election. In an interview with CNN, King warned against ignoring the problem, saying it was a bipartisan issue.
Representatives in the U.S. Congress took action to monitor the national security of the United States by advancing legislation to monitor propaganda. On 30 November 2016, legislators approved a measure within the National Defense Authorization Act to ask the U.S. State Department to act against propaganda with an inter-agency panel. The legislation authorized funding of $160 million over a two-year period. The initiative was developed through a bipartisan bill, the Countering Foreign Propaganda and Disinformation Act, written by U.S. Senators Rob Portman (Republican) and Chris Murphy (Democrat). Portman urged more U.S. government action to counter propaganda. Murphy said after the election it was apparent the U.S. needed additional tactics to fight Russian propaganda. U.S. Senate Intelligence Committee member Ron Wyden said frustration over covert Russian propaganda was bipartisan.
Republican U.S. Senators stated they planned to hold hearings and investigate Russian influence on the 2016 U.S. elections. By doing so they went against the preference of incoming Republican President-elect Donald Trump, who downplayed any potential Russian meddling in the election. U.S. Senate Armed Services Committee Chairman John McCain and U.S. Senate Intelligence Committee Chairman Richard Burr discussed plans for collaboration on investigations of Russian cyberwarfare during the election. U.S. Senate Foreign Relations Committee Chairman Bob Corker planned a 2017 investigation. Senator Lindsey Graham indicated he would conduct a sweeping investigation in the 115th Congress.
President Obama order
On December 9, 2016, President Obama ordered the entire United States Intelligence Community to conduct an investigation into Russia's attempts to influence the 2016 U.S. election and to provide a report before he left office on January 20, 2017. Lisa Monaco, U.S. Homeland Security Advisor and chief counterterrorism advisor to the president, announced the study and said the intrusion of a foreign nation into a U.S. national election was an unprecedented event that would necessitate further investigation by subsequent administrations in the executive branch. The intelligence analysis was to take into account data from the last three presidential elections in the U.S., as evidence showed malicious cyber activity during the 2008 and 2016 U.S. elections.
See also
2016 Democratic National Committee email leak
Cold War II
Conspiracy theories related to the Trump–Ukraine scandal
Democratic Congressional Campaign Committee cyber attacks
Foreign electoral intervention
Office of Personnel Management data breach
Operation Aurora
The Plot to Hack America
Podesta emails
Russian espionage in the United States
Russian interference in the 2016 United States elections
Russian interference in the 2018 United States elections
Social media in the 2016 United States presidential election
Trump Tower meeting
Yahoo! data breaches
References
External links
Timeline of hacks and publications on Glomar Disclosure
Computer security
Democratic National Committee
Espionage
Russian intelligence agencies
2015 scandals
2016 scandals
Data breaches in the United States
2015 in the United States
2016 in the United States
Email hacking
Hacking in the 2010s
2015 in computing
2016 in computing
Russian interference in the 2016 United States elections |
8144819 | https://en.wikipedia.org/wiki/SMS%20banking | SMS banking | SMS banking is a form of mobile banking. It is a facility used by some banks or other financial institutions to send messages (also called notifications or alerts) to customers' mobile phones using SMS messaging, or a service provided by them which enables customers to perform some financial transactions using SMS.
Push and pull messages
SMS banking services may use both push and pull messages. Push messages are those that a bank sends out to a customer's mobile phone, without the customer initiating a request for the information. Typically, a push message could be a mobile marketing message or an alert of an event which happens in the customer's bank account, such as a large withdrawal of funds from an ATM or a large payment involving the customer's credit card, etc. It may also be an alert that some payment is due, or that an e-statement is ready to be downloaded.
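As an illustration only, the following minimal sketch shows how event-driven push alerts of this kind might be generated; the event fields, threshold and push_alert gateway hook are assumptions, not part of any particular bank's system.

```python
LARGE_TRANSACTION_THRESHOLD = 1000.00   # illustrative threshold

ALERT_PREFERENCES = {                   # alert types each customer opted in to (assumed data)
    "CUST-001": {"ATM_WITHDRAWAL", "CARD_PAYMENT"},
}

def push_alert(customer_id: str, text: str) -> None:
    """Placeholder for handing the alert text to the bank's SMS gateway (assumption)."""
    print(f"push to {customer_id}: {text}")

def on_account_event(customer_id: str, event_type: str, amount: float) -> None:
    # Send a push message only for opted-in event types above the threshold.
    if event_type in ALERT_PREFERENCES.get(customer_id, set()) and amount >= LARGE_TRANSACTION_THRESHOLD:
        push_alert(customer_id, f"Large {event_type} of {amount:.2f} on your account.")

on_account_event("CUST-001", "ATM_WITHDRAWAL", 1500.00)   # triggers an alert
on_account_event("CUST-001", "CARD_PAYMENT", 40.00)       # below threshold, no alert
```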
Another type of push message is the one-time password (OTP). OTPs are the latest tool used by financial institutions to combat cyber fraud. Instead of relying on traditional memorized passwords, an OTP is sent to the customer's mobile phone via SMS, and the customer is required to repeat it to complete a transaction using online or mobile banking. The OTP is valid for a relatively short period and expires once it has been used.
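A minimal sketch of a bank-side OTP flow is shown below, assuming an in-memory store and a hypothetical send_sms function standing in for the real messaging gateway; a production system would use persistent, audited storage and the bank's actual delivery channel.

```python
import secrets
import time

OTP_TTL_SECONDS = 180            # assumed validity window of three minutes
_pending = {}                    # customer_id -> (otp, expiry timestamp)

def send_sms(phone_number: str, text: str) -> None:
    """Placeholder for the bank's SMS gateway integration (assumption)."""
    print(f"SMS to {phone_number}: {text}")

def issue_otp(customer_id: str, phone_number: str) -> None:
    otp = f"{secrets.randbelow(10**6):06d}"                 # random 6-digit code
    _pending[customer_id] = (otp, time.time() + OTP_TTL_SECONDS)
    send_sms(phone_number, f"Your one-time password is {otp}. It expires in 3 minutes.")

def verify_otp(customer_id: str, submitted: str) -> bool:
    record = _pending.pop(customer_id, None)                # single use: removed on first attempt
    if record is None:
        return False
    otp, expiry = record
    return time.time() <= expiry and secrets.compare_digest(otp, submitted)
```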
Bank customers can select the type of activities for which they wish to receive an alert. The selection can be done either using internet banking or by phone.
Pull messages are initiated by the customer, using a mobile phone, for obtaining information or performing a transaction in the bank account. Examples of pull messages include an account balance enquiry, or requests for current information like currency exchange rates and deposit interest rates, as published and updated by the bank.
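The sketch below illustrates how pull messages might be interpreted on the bank's side by mapping hypothetical keywords to handlers; actual command formats vary by bank, and a real system would authenticate the sender before responding.

```python
def balance_enquiry(args):
    return "Your available balance is ..."        # placeholder response

def exchange_rates(args):
    return "Today's exchange rates are ..."       # placeholder response

HANDLERS = {            # hypothetical keyword-to-handler mapping
    "BAL": balance_enquiry,
    "RATES": exchange_rates,
}

def handle_pull_message(sender_number: str, text: str) -> str:
    parts = text.strip().upper().split()
    if not parts or parts[0] not in HANDLERS:
        return "Unknown request. Send HELP for the list of commands."
    return HANDLERS[parts[0]](parts[1:])

print(handle_pull_message("+15551234567", "bal"))
```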
Typical push and pull services offered
Depending on the extent of SMS banking services offered by the bank, a customer can be authorized to carry out either non-financial transactions only, or both financial and non-financial transactions. SMS banking solutions offer customers a range of functionality, classified by push and pull services as outlined below.
Typical push services would include:
periodic account balance reporting (say at the end of month);
reporting of salary and other credits to the bank account;
successful or unsuccessful execution of a standing order;
successful payment of a cheque issued on the account;
insufficient funds;
large value withdrawals on an account;
large value withdrawals on the ATM or EFTPOS on a debit card;
large value payment on a credit card or out of country activity on a credit card.
one-time password and authentication
an alert that some payment is due
an alert that an e-statement is ready to be downloaded.
Typical pull services would include:
Account balance enquiry;
Mini statement request;
Electronic bill payment;
Transfers between customer's own accounts, like moving money from a savings account to a current account to fund a cheque;
Stop payment instruction on a cheque;
Requesting for an ATM card or credit card to be suspended;
De-activating a credit or debit card when it is lost or the PIN is known to be compromised;
Foreign currency exchange rates enquiry;
Fixed deposit interest rates enquiry.
Concerns and skepticism
There is a very real possibility for fraud when SMS banking is involved, as SMS traffic is only weakly encrypted and sender identities are easily spoofed (see the SMS page for details). Supporters of SMS banking claim that while SMS banking is not as secure as other conventional banking channels, like the ATM and internet banking, the SMS banking channel is not intended to be used for very high-risk transactions.
Quality of service
Because of the concerns described above, it is extremely important that SMS gateway providers can deliver a decent quality of service for banks and financial institutions with regard to SMS services. Therefore, the provision of a Service Level Agreement (SLA) is a requirement for this industry; it is necessary to give the bank's customers delivery guarantees for all messages, as well as measurements of delivery speed, throughput, etc. SLAs define the service parameters within which a messaging solution is guaranteed to perform.
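One way such SLA measurements might be computed is sketched below: from per-message submission and delivery-receipt timestamps it derives the delivery rate and the 95th-percentile latency and compares them against assumed contractual targets.

```python
from statistics import quantiles

# Sample records (assumed data): (submitted_at, delivered_at or None), in seconds.
records = [(0.0, 1.2), (0.5, 2.0), (1.0, None), (1.5, 2.1), (2.0, 3.9)]

SLA_DELIVERY_RATE = 0.99     # assumed targets for illustration
SLA_P95_LATENCY_S = 5.0

latencies = [delivered - submitted for submitted, delivered in records if delivered is not None]
delivery_rate = len(latencies) / len(records)
p95_latency = quantiles(latencies, n=20)[-1]      # 95th percentile (needs at least two samples)

print(f"delivery rate {delivery_rate:.0%} vs target {SLA_DELIVERY_RATE:.0%}:",
      "OK" if delivery_rate >= SLA_DELIVERY_RATE else "breach")
print(f"p95 latency {p95_latency:.1f}s vs target {SLA_P95_LATENCY_S}s:",
      "OK" if p95_latency <= SLA_P95_LATENCY_S else "breach")
```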
The convenience factor
The convenience of executing simple transactions and of sending information or alerts to a customer's mobile phone is often the overriding factor that outweighs the security concerns raised by skeptics.
As a personalized end-user communication instrument, the mobile phone is perhaps the easiest channel on which customers can be reached on the spot, as they carry it with them at all times no matter where they are. Besides, the operation of SMS banking through simple keyed instructions on the phone makes its use very straightforward. This is quite different from internet banking, which can offer broader functionality but can only be used when the customer has access to a computer and the Internet. Also, urgent warning messages, such as SMS alerts, are received by the customer instantaneously, unlike other channels such as post, email, Internet or telephone banking, on which a bank's notifications to the customer involve the risk of delayed delivery and response.
The SMS banking channel also acts as the bank's means of alerting its customers, especially in an emergency situation; for example, when ATM fraud is happening in the region, the bank can push a mass alert (even to customers who have not subscribed to alerts) or automatically alert customers individually when a predefined 'abnormal' transaction happens on an account using the ATM or a credit card. This capability mitigates the risk of fraud going unnoticed for a long time and increases customer confidence in the bank's information systems.
Compensating controls for lack of encryption
The lack of encryption on SMS messages is an area of concern that is often discussed. This concern sometimes arises among the bank's technology personnel, due to their familiarity and past experience with encryption on the ATM and other payment channels. The lack of encryption is inherent to the SMS banking channel, and several banks that use it have overcome their concerns by introducing compensating controls and limiting the scope of the SMS banking application to where it offers an advantage over other channels.
Suppliers of SMS banking software solutions have found reliable means by which the security concerns can be addressed. Typically the methods employed are pre-registration and the use of security tokens where the transaction risk is perceived to be high. Sometimes ATM-type PINs are also employed, but the use of PINs in SMS banking makes the customer's task more cumbersome.
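A simplified sketch of such compensating controls follows, combining sender pre-registration with a PIN demanded only above an assumed risk threshold; the hashing scheme and threshold are purely illustrative, and a real deployment would use a proper key-derivation function or a hardware security module.

```python
import hashlib
import hmac
from typing import Optional

def pin_hash(pin: str) -> str:
    # Illustrative only; production systems would use a salted KDF or an HSM.
    return hashlib.sha256(b"example-salt" + pin.encode()).hexdigest()

REGISTERED = {                                   # assumed pre-registration data
    "+15551234567": ("CUST-001", pin_hash("4321")),
}

HIGH_RISK_LIMIT = 500.00                         # illustrative threshold for step-up authentication

def authorise(sender_number: str, amount: float, pin: Optional[str] = None) -> bool:
    entry = REGISTERED.get(sender_number)
    if entry is None:                            # sender not pre-registered: reject
        return False
    _, stored_hash = entry
    if amount <= HIGH_RISK_LIMIT:                # low-risk request: registration alone suffices
        return True
    if pin is None:                              # high-risk request: demand the PIN
        return False
    return hmac.compare_digest(pin_hash(pin), stored_hash)

print(authorise("+15551234567", 120.00))         # True: low risk, registered sender
print(authorise("+15551234567", 900.00, "4321")) # True: high risk, correct PIN
```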
Technologies
SMS banking usually integrates with a bank's computer and communications systems. As most banks have multiple backend hosts, the more advanced SMS banking systems are built to work in a multi-host banking environment and to have open interfaces which allow for messaging between existing banking host systems using industry or de facto standards.
Well-developed and mature SMS banking software normally provides a robust control environment and a flexible and scalable operating environment. These solutions are able to connect seamlessly to multiple SMSC operators in the country of operation. Depending on the volume of messages that need to be pushed, the means of connecting to the SMSC can differ, such as using simple modems or connecting over a leased line using low-level communication protocols (like SMPP, UCP, etc.). Advanced SMS banking solutions also provide failover mechanisms and least-cost routing options.
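The following sketch illustrates least-cost routing with failover across several SMSC connections; the connection class and per-message costs are stand-ins for a real SMPP or UCP integration, not an actual protocol implementation.

```python
class SmscConnection:
    """Stand-in for a real SMPP/UCP session to an operator's SMSC (assumption)."""
    def __init__(self, name: str, cost_per_message: float, healthy: bool = True):
        self.name = name
        self.cost = cost_per_message
        self.healthy = healthy

    def submit(self, msisdn: str, text: str) -> None:
        if not self.healthy:
            raise ConnectionError(f"{self.name} is unreachable")
        print(f"[{self.name}] -> {msisdn}: {text}")

def send_with_failover(connections, msisdn: str, text: str) -> str:
    # Least-cost routing: try the cheapest route first, fail over to the next on error.
    for conn in sorted(connections, key=lambda c: c.cost):
        try:
            conn.submit(msisdn, text)
            return conn.name
        except ConnectionError:
            continue
    raise RuntimeError("all SMSC routes failed")

routes = [SmscConnection("operator-A", 0.010, healthy=False),
          SmscConnection("operator-B", 0.012)]
print(send_with_failover(routes, "+15551234567", "Your account was credited."))
```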
Most online banking platforms are owned and developed by the banks using them. There is only one open-source online banking platform supporting mobile banking and SMS payments, called Cyclos, which is developed to stimulate and empower local banks in developing countries.
See also
Mobile banking
SMS messaging
Internet banking
Short Message Service Centre
One-time password
Cyclos
Barclays Pingit
References
GSM standard
Mobile payments
Text messaging |
3013083 | https://en.wikipedia.org/wiki/Turkish%20National%20Research%20Institute%20of%20Electronics%20and%20Cryptology | Turkish National Research Institute of Electronics and Cryptology | The National Research Institute of Electronics and Cryptology of Turkey (UEKAE) is a national scientific organization with the aim of developing advanced technologies for information security. UEKAE is the most prominent institute of TÜBİTAK and also its first (founding) institute.
The institute was founded in 1972 by Yılmaz Tokad, a professor at ITU (Istanbul Technical University), together with four researchers under his supervision, in the engineering building at METU (Middle East Technical University), under the name Electronic Research Unit. In 1995 the institute was renamed the National Research Institute of Electronics and Cryptology and moved to Gebze, Kocaeli.
It is affiliated with the TÜBİTAK Informatics and Information Security Research Center (BİLGEM), which is bound to the Scientific and Technological Research Council of Turkey (TÜBİTAK). The institute was later reorganized as the prime institute of BİLGEM in the Gebze, Kocaeli Province campus of TÜBİTAK.
The institute comprises facilities and laboratories in the following fields and product areas:
Semiconductor Technologies Research Laboratory (YITAL)
Cryptanalysis Center
EMC/Tempest Test Center
Speech and Language Technologies
Software Development
Surveillance Systems
Communication and Information Security
Electro-Optics Laboratory
Spectrum Analysis and Management
Open Source Software
Government Certification Authority (KSM)
NATO Certified Products
See also
TÜBİTAK Informatics and Information Security Research Center (TÜBİTAK BİLGEM)
Scientific and Technological Research Council of Turkey (TÜBİTAK)
Turkish Academy of Sciences (TÜBA)
Turkish Atomic Energy Authority (TAEK)
Pardus, a Linux distribution
References
External links
Official website of the institute
Research institutes in Turkey
Defence companies of Turkey
Scientific and Technological Research Council of Turkey
Organizations established in 1994
1994 establishments in Turkey
Organizations based in Gebze |
162435 | https://en.wikipedia.org/wiki/Mind%20uploading | Mind uploading | Mind uploading, also known as whole brain emulation (WBE), is the theoretical futuristic process of scanning a physical structure of the brain accurately enough to create an emulation of the mental state (including long-term memory and "self") and transferring or copying it to a computer in a digital form. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.
Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they concede that others are, as yet, very speculative, but say they are still in the realm of engineering possibility.
Mind uploading may potentially be accomplished by either of two methods: copy-and-upload, or copy-and-delete by gradual replacement of neurons (which can be considered a gradual destructive upload), until the original organic brain no longer exists and a computer program emulating the brain takes control of the body. In the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then storing and copying that information state into a computer system or another computational device. The biological brain may not survive the copying process or may be deliberately destroyed during it in some variants of uploading. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer inside (or connected to, or remotely controlling) a (not necessarily humanoid) robot or a biological or cybernetic body.
Among some futurists and within part of the transhumanist movement, mind uploading is treated as an important proposed life extension technology. Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Other aims of mind uploading are to provide a permanent backup to our "mind-file", to enable interstellar space travel, and to give human culture a means of surviving a global disaster by making a functional copy of a human society in a computing device. Whole brain emulation is discussed by some futurists as a "logical endpoint" of the topical computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and to at least weak superintelligence. Another approach is seed AI, which would not be based on existing brains. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden decrease in the time constant of the exponential development of technology. Mind uploading is a central conceptual feature of numerous science fiction novels, films, and games.
Overview
The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of its neuronal network.
Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.
Eminent computer scientists and neuroscientists have predicted that advanced computers will be capable of thought and even attain consciousness, including Koch and Tononi, Douglas Hofstadter, Jeff Hawkins, Marvin Minsky, Randal A. Koene, and Rodolfo Llinás.
Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations. Using these models, some have estimated that uploading may become possible within decades if trends such as Moore's law continue.
Theoretical benefits and applications
"Immortality" or backup
In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby – from a purely mechanistic perspective – reducing or eliminating "mortality risk" of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington.
Space exploration
An "uploaded astronaut" could be used instead of a "live" astronaut in human spaceflight, avoiding the perils of zero gravity, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.
Relevant technologies and techniques
The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain. The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may focus on the specific resolution and morphology of neurons, the spike times of neurons, the times at which neurons produce action potential responses.
Computational complexity
Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
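As a rough, back-of-the-envelope illustration of why the demands are considered immense, the sketch below multiplies commonly cited figures (about 8.6 x 10^10 neurons and on the order of 10^4 synapses per neuron) by an assumed update rate and operation count; every parameter here is an assumption, and published estimates differ by several orders of magnitude.

```python
NEURONS = 8.6e10              # commonly cited estimate for the human brain
SYNAPSES_PER_NEURON = 1.0e4   # order-of-magnitude assumption
UPDATE_RATE_HZ = 1.0e3        # assumed 1 ms simulation timestep
OPS_PER_SYNAPSE_UPDATE = 10   # assumed arithmetic operations per synapse per step

synapses = NEURONS * SYNAPSES_PER_NEURON
ops_per_second = synapses * UPDATE_RATE_HZ * OPS_PER_SYNAPSE_UPDATE
print(f"{synapses:.1e} synapses -> {ops_per_second:.1e} operations per second")
# About 8.6e18 operations per second on these assumptions, i.e. several exaFLOPS,
# before accounting for intra-neuron chemistry, which could add many orders of magnitude.
```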
In 2004, Henry Markram, lead researcher of the Blue Brain Project, stated that "it is not [their] goal to build an intelligent neural network", based solely on the computational demands such a project would have.
It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.
Five years later, after successful simulation of part of a rat brain, Markram was much more bold and optimistic. In 2009, as director of the Blue Brain Project, he claimed that "A detailed, functional artificial human brain can be built within the next 10 years". Less than two years into it, the project was recognised to be mismanaged and its claims overblown, and Markram was asked to step down.
The required computational capacity depends strongly on the chosen scale and level of detail of the simulation model.
Scanning and mapping scale of an individual
When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.
However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning.
A full brain map has been estimated to occupy less than 2 x 10^16 bytes (20,000 TB) and would store the addresses of the connected neurons, the synapse type and the synapse "weight" for each of the brain's 10^15 synapses. However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
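A brief arithmetic check of this storage figure, assuming a record of roughly 20 bytes per synapse (two 8-byte neuron addresses, a type code, and a weight):

```python
SYNAPSES = 1e15                            # figure quoted above
BYTES_PER_SYNAPSE_RECORD = 8 + 8 + 1 + 3   # assumed: two addresses, type code, weight

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE_RECORD
print(f"{total_bytes:.1e} bytes (~{total_bytes / 1e12:,.0f} TB)")
# 2.0e16 bytes, or about 20,000 TB, consistent with the estimate quoted above.
```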
Serial sectioning
A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, which for frozen samples at nano-scale requires a cryo-ultramicrotome, thus capturing the structure of the neurons and their interconnections. The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.
There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of 'mind' is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.
Brain imaging
It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies.
Brain simulation
There is ongoing work in the field of brain simulation, including partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.
The Blue Brain Project by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, Switzerland is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry.
Issues
Practical issues
Kenneth D. Miller, a professor of neuroscience at Columbia and a co-director of the Center for Theoretical Neuroscience, raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but it is far from being sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single "frozen" state may prove insufficient. In addition, the nature of these signals may require modeling down to the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of the "absolute" duplication of an individual mind is insurmountable for the coming hundreds of years.
Philosophical issues
Underlying the concept of "mind uploading" (more accurately "mind transferring") is the broad philosophy that consciousness lies within the brain's information processing and is in essence an emergent feature that arises from large neural network high-level patterns of organization, and that the same patterns of organization can be realized in other processing devices. Mind uploading also relies on the idea that the human mind (the "self" and the long-term memory), just like non-human minds, is represented by the current neural network paths and the weights of the brain synapses rather than by a dualistic and mystic soul and spirit. The mind or "soul" can be defined as the information state of the brain, and is immaterial only in the same sense as the information content of a data file or the state of a computer software currently residing in the work-space memory of the computer. Data specifying the information state of the neural network can be captured and copied as a "computer file" from the brain and re-implemented into a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to the idea of mind uploading is to copy the temporary information state (the variable values) of a computer program from the computer memory to another computer and continue its execution. The other computer may perhaps have different hardware architecture but emulates the hardware of the first computer.
These issues have a long history. In 1775, Thomas Reid wrote: “I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.”
A considerable portion of transhumanists and singularitarians place great hope into the belief that they may become immortal, by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist Susan Schneider claims that at best, uploading would create a copy of the original person's mind. Schneider agrees that consciousness has a computational basis, but this does not mean we can upload and survive. According to her views, "uploading" would probably result in the death of the original person's brain, while only outside observers can maintain the illusion of the original person still being alive. For it is implausible to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here, and elsewhere. At best, a copy of the original mind is created. Neural correlates of consciousness, a sub-branch of neuroscience, states that consciousness may be thought of as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.
Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading, and Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position. Some have also asserted that consciousness is a part of an extra-biological system that is yet to be discovered; therefore it cannot be fully understood under the present constraints of neurobiology. Without the transference of consciousness, true mind-upload or perpetual immortality cannot be practically achieved.
Another potential consequence of mind uploading is that the decision to "upload" may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie). Are we to assume that an upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an upload is conscious if it verbally insists that it is conscious? Could there be an absolute upper limit in processing speed above which consciousness cannot be sustained? The mystery of consciousness precludes a definitive answer to this question. Numerous scientists, including Kurzweil, strongly believe that the answer as to whether a separate entity is conscious (with 100% confidence) is fundamentally unknowable, since consciousness is inherently subjective (see solipsism). Regardless, some scientists strongly believe consciousness is the consequence of computational processes which are substrate-neutral. On the contrary, numerous scientists believe consciousness may be the result of some form of quantum computation dependent on substrate (see quantum mind).
In light of uncertainty on whether to regard uploads as conscious, Sandberg proposes a cautious approach:
Ethical and legal implications
The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness. The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals.
In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness. Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "fading qualia" thought experiment of David Chalmers. He then concludes: “If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations.”
It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering. Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.
Brain emulations could be erased by computer viruses or malware, without need to destroy the underlying hardware. This may make assassination easier than for physical humans. The attacker might take the computing power for its own use.
Many questions arise regarding the legal personhood of emulations. Would they be given the rights of biological humans? If a person makes an emulated copy of themselves and then dies, does the emulation inherit their property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights?
If simulated minds would come true and if they were assigned rights of their own, it may be difficult to ensure the protection of "digital human rights". For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions.
Political and economic implications
Emulations could create a number of conditions that might increase risk of war, including inequality, changes of power dynamics, a possible technological arms race to build emulations first, first-strike advantages, strong loyalty and willingness to "die" among emulations, and triggers for racist, xenophobic, and religious prejudice. If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against growing power of emulations, especially if they depress human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense.
Emulation timelines and AI risk
There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and logically their development will continue into the future. Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance.
Arguments for speeding up brain-emulation research:
If neuroscience is the bottleneck on brain emulation rather than computing power, emulation advances may be more erratic and unpredictable based on when new scientific discoveries happen. Limited computing power would mean the first emulations would run slower and so would be easier to adapt to, and there would be more time for the technology to transition through society.
Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, which could increase the "computing overhang" from excess hardware relative to neuroscience.
If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.
Arguments for slowing down brain-emulation research:
Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate risks from uncontrolled AI. Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding some brain components, and it would be easier to tinker with these than to reconstruct the entire brain in its original form. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk.
Waiting might give society more time to think about the consequences of brain emulation and develop institutions to improve cooperation.
Emulation research would also speed up neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation.
Emulations might be easier to control than de novo AI because
Human abilities, behavioral tendencies, and vulnerabilities are more thoroughly understood, thus control measures might be more intuitive and easier to plan for.
Emulations could more easily inherit human motivations.
Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce risks of their rapid takeoff. Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition. Unlike AI, an emulation wouldn't be able to rapidly expand beyond the size of a human brain. Emulations running at digital speeds would have less intelligence differential vis-à-vis AI and so might more easily control AI.
As counterpoint to these considerations, Bostrom notes some downsides:
Even if we better understand human behavior, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement.
Emulations may not inherit all human motivations. Perhaps they would inherit our darker motivations or would behave abnormally in the unfamiliar environment of cyberspace.
Even if there's a slow takeoff toward emulations, there would still be a second transition to de novo AI later on. Two intelligence explosions may mean more total risk.
Because of the postulated difficulties that a whole brain emulation-generated superintelligence would pose for the control problem, computer scientist Stuart J. Russell in his book Human Compatible rejects creating one, simply calling it "so obviously a bad idea".
Advocates
Ray Kurzweil, director of engineering at Google, has long predicted that people will be able to "upload" their entire brains to computers and become "digitally immortal" by 2045. Kurzweil made this claim for many years, e.g. during his speech in 2013 at the Global Futures 2045 International Congress in New York, which claims to subscribe to a similar set of beliefs. Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as the late Marvin Minsky. In 1993, Joe Strout created a small web site called the Mind Uploading Home Page, and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.
Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law.
Michio Kaku, in collaboration with Science, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole brain emulation using an advanced MRI machine may enable people to be transported vast distances at near light-speed.
The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the "artificial life phenotype". Doyle's vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.
See also
Mind uploading in fiction
BRAIN Initiative
Brain transplant
Brain-reading
Cyborg
Cylon (reimagining)
Democratic transhumanism
Human Brain Project
Isolated brain
Neuralink
Posthumanization
Robotoid
Ship of Theseus—thought experiment asking if objects having all parts replaced fundamentally remain the same object
Simulation hypothesis
Simulism
Technologically enabled telepathy
Turing test
The Future of Work and Death
Chinese room
References
Fictional technology
Hypothetical technology
Immortality
Neurotechnology
Transhumanism
Posthumanism |