id | url | title | text
---|---|---|---
1135769
|
https://en.wikipedia.org/wiki/Ukrainian%20Air%20Force
|
Ukrainian Air Force
|
The Ukrainian Air Force is part of the Armed Forces of Ukraine. Its headquarters are in the city of Vinnytsia. When the Soviet Union dissolved in 1991, many aircraft were left in Ukrainian territory. Ever since, the Ukrainian Air Force has been downsizing and upgrading its forces. The main inventory of the air force still consists of Soviet-made aircraft. Currently 36,300 personnel and 225 aircraft are in service in the Ukrainian Air Force and Air Defense forces.
Since Ukrainian independence in 1991, the air force has suffered from chronic underinvestment, leading to the bulk of its inventory becoming mothballed or otherwise inoperable. Despite this, Ukraine still possesses the world's 27th largest air force and the 7th largest air force in Europe, largely due to the ability of its domestic defense industry Ukroboronprom and its Antonov subsidiary to maintain its older aircraft.
The Ukrainian Air Force participated in the war in Donbas. Following the 5 September 2014 ceasefire, the Air Force was suspended from carrying out missions in the contested areas of Donbas.
Missions
The role of the Air Force is to protect the air space of Ukraine. The objectives are: obtaining operational air superiority, delivering air strikes against enemy units and facilities, covering troops against enemy air strikes, providing air support to the Land Force and the Navy, disrupting enemy military and state management, disrupting enemy communications, and providing air support by reconnaissance, air drops, and troops and cargo transportation.
In peace-time, this is carried out by flying air-space control missions over the entire territory of Ukraine (603,700 square km), and by preventing air space intrusion along the aerial borders (totaling almost 7,000 km, including 5,600 km of land and 1,400 km of sea). Over 2,200 service personnel and civilian employees of the Air Force, employing 400 items of weapons and equipment, are summoned daily to perform defense duties.
On average, the Ukrainian radar forces detect and track more than 1,000 targets daily. As a result, in 2006 two illegal crossings of the state border and 28 violations of Ukrainian air space were prevented. Due to such strengthening of air space control, the number of air space violations decreased by 35% compared to the previous year, even though the amount of air traffic increased by 30%.
History
1917–1921
The roots of Ukrainian military aviation are in the autumn 1917 creation of the Ukrainian People's Republic Air Fleet, headed by former commander of the Kyiv Military District Lieutenant Colonel Viktor Pavlenko. Previously, while in Russian service in World War I, Pavlenko was in charge of air security of the Russian Stavka.
Sometime in 1918 the West Ukrainian People's Republic created its own aviation corps within the Ukrainian Galician Army, headed by Petro Franko, a son of the renowned Ukrainian writer Ivan Franko. In 1918 he organized an aviation school of the Ukrainian Galician Army Command Center which was active until 1920.
Among the airplanes used by the Ukrainian aviation in this period were Belgium-built SPAD S.VIIs. The Ukrainian Galician Army used Nieuport 17 biplanes.
Collapse of the USSR
The Ukrainian Air Force was established on 17 March 1992, in accordance with a Directive of the General Staff Chief of the Armed Forces. The headquarters of the 24th Air Army of the Soviet Air Force in Vinnytsia served as the basis to create the Air Force headquarters. Also present on Ukrainian soil were units of the former Soviet 5th, 14th, and 17th Air Armies, plus five regiments (185th, 251st, 260th, 341st Heavy Bomber Aviation Regiments and 199th Reconnaissance Aviation Regiment) of the 46th Air Army, Long Range Aviation. In addition, the 161st Maritime Fighter Aviation Regiment, at Limanskoye in Odessa Oblast, came under Ukrainian control. It had formerly been part of the 119th Maritime Fighter Aviation Division of the Black Sea Fleet.
The new Air Force inherited the 184th Guards Heavy Bomber Aviation Regiment (part of the 201st Heavy Bomber Aviation Division), whose Tupolev Tu-160 'Blackjack' bombers were based at Pryluky. Discussions with Russia concerning their return bogged down. The main bone of contention was the price. While Russian experts, who examined the aircraft at Pryluky in 1993 and 1996, assessed their technical condition as good, the price of $3 billion demanded by Ukraine was unacceptable. The negotiations led nowhere, and in April 1998 Ukraine decided to commence scrapping the aircraft under the Nunn-Lugar Cooperative Threat Reduction Agreement. In November, the first Tu-160 was ostentatiously chopped up at Pryluky.
In April 1999, immediately after NATO began air attacks against Serbia, Russia resumed talks with Ukraine about the strategic bombers. This time they proposed buying back eight Tu-160s and three Tu-95MS models manufactured in 1991 (those in the best technical condition), as well as 575 Kh-55MS missiles. An agreement was eventually reached and a contract valued at $285 million was signed. That figure was to be deducted from Ukraine's debt for natural gas. A group of Russian military experts went to Ukraine on 20 October 1999 to prepare the aircraft for the trip to Engels-2 air base. Between November 1999 and February 2001 the aircraft were transferred to Engels. One Tu-160 remains on display in Poltava.
Ukraine also had Tupolev Tu-22s, Tupolev Tu-22Ms and Tupolev Tu-95s for a period after the collapse of the Soviet Union. The 106th Heavy Bomber Aviation Division, part of the 37th Air Army, operated some of them. However, these have all been scrapped, apart from a handful displayed in museums. Tu-16 and Tu-22M bombers were among the aircraft destroyed under the Conventional Forces in Europe treaty. It has been reported that Tu-16s based with the 251st Heavy Bomber Aviation Regiment at Belaya Tserkov were dismantled in 1993. By 1995, the IISS Military Balance 1995/96 listed no Tu-22 Blinders in service, though a listing for one division HQ and two regiments of Tu-22M Backfires remained in the Military Balance from 1995/96 to 2000/01.
From 24 January 1992, after the collapse of the USSR, the 28th Air Defense Corps, previously subordinate to the 2nd Air Defence Army, was transferred to the 8th Air Defence Army of Ukraine. Units stationed in Moldova were transferred to the Moldovan Armed Forces (the 275th Guards Anti-Aircraft Rocket Brigade, and battalions and companies from the 14th Radio-Technical Brigade). There were about 67,000 air defense troops in 1992. The headquarters of the Ukrainian Air Defence Forces was formed on the basis of HQ 8th Air Defence Army.
There were also three air defence corps: the 28th (Lviv), 49th (Odessa), and 60th (Dnipropetrovsk). Holm reports that all three air defence corps were taken over by Ukraine on 1 February 1992, and that the 28th ADC became the Western AD Region on 1 June 1992. The first issue of the Military Balance after the Soviet collapse, 1992–93, listed one Air Defence army, 270 combat aircraft, and seven regiments of Su-15s (80), MiG-23s (110) and MiG-25s (80).
By March 1994 Air Forces Monthly reported three air defence regions: the Southern with the 62nd and 737th Fighter Aviation Regiments, the Western with the 92nd (transferred from 14th Air Army and based at Mukachevo), 179th, and 894th Fighter Aviation Regiments (from 28th AD Corps/2nd Air Defence Army), and the Central with the 146th (Vasilkov), 636th (Kramatorsk, seemingly disbanded 1996 and its Su-15s broken up for scrap), and 933rd Fighter Aviation Regiments. The Military Balance 95/96 said that six fighter regiments had been disbanded. (p. 71)
In March 1994 the 14th Air Army became the 14th Air Corps, and on 18 March 1994 the 5th Air Army was redesignated the 5th Air Corps. The two air corps remained active in 1996: the 14th in the Carpathian MD and the 5th in the Odessa MD, which by that time incorporated the former Kyiv MD area. The long-range bomber division at Poltava was still operational, reporting directly to Air Force headquarters. This division headquarters was probably the 13th Guards Heavy Bomber Aviation Division.
1991–2014
Since Ukrainian independence in 1991, the Air Force has suffered from chronic underinvestment, leading to the bulk of its inventory becoming mothballed or otherwise inoperable.
The structural reorganization of the Air Force set as its goals a substantial reduction in the total number of command and control levels and an increase in the efficiency of command and control processes. The reorganization of the air force's command and control elements is still underway. The first step of this reorganization was to transition from the existing air commands to the Command and Control and Warning Center system.
This will not only help eliminate duplication at the command and control levels, but will also contribute to greater centralization of the command and control system, to the multi-functionality of command and control elements, and to the effectiveness of responses to changes in the air situation. 2006 saw the definition of the functions, tasks, organization and work of the C2 and Warning Center, as well as of its mechanism of interaction with the newly established Air Operations Center and Joint Operational Command. During the command and staff exercise, one of the Air Force Commands in effect performed control at the "C2 and Warning Center – formation (unit)" level.
The An-24 and An-26 aircraft, as well as the S-300 and Buk-M1 anti-aircraft systems, have been continually modernized and their service life extended. An organizational basis and the technological means for modernizing the MiG-29, Su-24, Su-25, Su-27 and L-39 have been established. Given sufficient funding from the Verkhovna Rada, the Defense Industrial Complex of Ukraine, in cooperation with foreign companies and manufacturers, is capable of fully renewing the aircraft arsenal of the Ukrainian armed forces.
In 2005, the UAF was planning to restructure in an effort to improve efficiency and to put more advanced jet aircraft into service, possibly by buying newer Su-27s and MiG-29s from Russia. Under these plans, from approximately 2012 Ukraine would have to either take bold steps to create a new combat aircraft or purchase many existing combat aircraft. Due to a lack of funding, technical modernization was continually postponed. The Ukrainian air force continued to use armament and military equipment which remained functional mainly thanks to so-called 'cannibalization' (obtaining spare parts from other units), thus gradually depleting its total capabilities. Faced with the threat of losing military capability, initiating the process of technical modernization became a necessity.
In 2006, many aging weapons and equipment were decommissioned from combat service by the Air Force. This presented an opportunity to use the released funds for the modernization of various items of aviation and anti-aircraft artillery weapons and equipment, radio communication equipment, and flight maintenance equipment, as well as for improved Air Force personnel training.
In 2011 the International Institute for Strategic Studies (IISS) estimated that Ukraine's Air Force included one Sukhoi Su-24M regiment, five regiments with Mikoyan MiG-29s and Sukhoi Su-27s, one regiment with Sukhoi Su-25s, two squadrons with Sukhoi Su-24MRs, three transport regiments, some support helicopter squadrons, one helicopter training regiment, and some air training squadrons with L-39 Albatros aircraft. The IISS said they were grouped into the 5th and 14th Aviation Corps, the 35th Aviation Group (a multi-role rapid reaction formation), and a training aviation command. The IISS assessed the overall force size as 817 aircraft of all types and 43,100 personnel. The aviation corps had actually been reorganised into regional air commands in about 2004. Russian sources list three aviation groups (West, South, and Center).
The automated systems of collection, processing and transmission of radio information have been adopted as a component part of the Automated Command and Control System for aviation and air defense. Operational service testing of the circular surveillance radar station has also been completed. Prototypes of high-precision weapons systems, electronic warfare devices, and navigation equipment have been created and developed for state testing.
Role in the 2014 Crimean crisis and the war in Donbas
Following the 2014 Ukrainian Revolution, the March 2014 Russian annexation of Crimea, and the subsequent violence and insurgency in eastern Ukraine, Ukraine tried to increase its defence spending and capabilities, with the return of equipment to service a key part of the spending drive.
During the 2014 Crimean crisis the air force did not fight but lost several aircraft to Russia; most were returned to Ukraine. The air force has taken part in the conflict against the 2014 insurgency in Donbas, during which it has lost several planes and helicopters. The Wall Street Journal published a US embassy in Kyiv report stating that Ukraine lost 19 planes and helicopters in the period 22 April – 22 July 2014. According to an unverified October 2015 report by the Swiss technology company RUAG, the Air Force had lost nearly half of its combat aircraft since early 2014; RUAG believed that 222 of the Air Force's 400 aircraft had been lost.
Since 12 July 2014 the Ukrainian Air Force has been on full combat alert. Around this date the Air Force started restoring its former military airfields in Voznesensk, as well as in Buyalyk and Chervonohlynske (both in Odessa Oblast).
Ukraine inherited a large inventory of aircraft from the Soviet Union; these were mostly decommissioned and stored, as the nation had little use or funding to keep a large fleet active. In 2014, the air force announced that it would bring back 68 aircraft that had been in reserve since the collapse of the Soviet Union, including the Tupolev Tu-141 reconnaissance drone. In April 2014 two MiG-29 aircraft were restored. In August a decommissioned An-26 transport aircraft was also restored to active service by a volunteer group. On 5 January 2015 the air force received another four newly restored airplanes, two MiG-29s and two Su-27s, as well as two Mi-8 and Mi-2 helicopters.
As a result of the war in Donbas, the government of Ukraine realized the importance of drone surveillance in locating enemy troops and recommissioned 68 Soviet-era Tu-141 drones for repair. Analysts point out that, despite being designed in 1979, the Tu-141 has a powerful camera; however, it likely uses airborne radar and infrared sensors similar to those of the Soviet-era Su-24, which would make it prone to jamming by Russian forces, who use the same equipment.
A crowdfunding project for a "people's drone" was also conducted. The goal was to collect funds to purchase an already functioning American or Israeli drone. However, Ukrainian designers and engineers were able to build their own model based on the commercially available DJI Phantom 2 drone.
In October 2014 students from Ivano-Frankivsk designed their own drone for use in the war in Donbas. The newly built drone can broadcast footage live, unlike the Tu-141, which relies on film that must be recovered. The drone was built from off-the-shelf components and funded by volunteers. It was stated to have an operational ceiling of 7,000 meters and a range of 25 kilometers, and to cost about US$4,000 to build.
Ukroboronprom received an order for 2.5 million hryvnia ($166,000) to refit several Mil Mi-24 helicopter gunships, which included fitting them with night-vision capabilities. The Mi-24 proved highly vulnerable to Russian separatist attacks during the 2014 Russian military intervention in Ukraine. With the exception of aircraft captured at Crimean airbases, the Mi-24 had the highest loss rate of all aircraft in Ukraine's inventory, with five shot down and four damaged during the conflict.
Developments towards restoration
On 19 March 2014 repaired L-39s were transferred to the 203rd Training Aviation Brigade.
On 4 April 2014 a single repaired MiG-29 was transferred to the 114th Tactical Aviation Brigade.
On 29 May 2014 a decision was taken to consolidate all MiG-29 repair at the Lviv Aviacon plant.
On 6 July 2014 a repaired Buk-M1 was transferred to the Air Defense Forces.
On 31 July 2014 a single repaired MiG-29 was transferred to the 40th Tactical Aviation Brigade.
On 5 August 2014 Order No. 499 was issued, allocating funds to modernize all Su-27s to the Su-27B1M, Su-27P1M and Su-27S1M standards.
On 30 August 2014 a single repaired An-26 was transferred to the 15th Transport Aviation Brigade.
On 3 October 2014 Kanatovo Air Base in Kirovograd Oblast was reactivated.
There were plans to begin licensed production of the Saab JAS 39 Gripen fighter in Lviv. However, these plans have stalled since 2014.
Russo-Ukrainian War
On 24 February 2022, Russian forces began an assault on many positions within Ukraine, bringing the Ukrainian Air Force into action against them. Fox News reported that Ukrainian air defenses destroyed a Russian fighter jet in eastern Ukraine, although information from the opening phases of the invasion was difficult to verify.
Branches of the Air Force
Anti-Aircraft Rocket
The Anti-Aircraft Rocket Force within the Air Force became predominant after the merger of the Air Force and the Ukrainian Air Defense Forces. The merger allowed the Armed Forces of Ukraine to adopt the tri-service structure common to most modern armed forces.
The Air Defense of Ukraine performs key tasks in the protection of Ukraine's sovereignty and the inviolability of its borders and air space. It has clearly defined functions in both peacetime and wartime: it is intended to prevent enemy air and missile strikes, defend the most important administrative, political and industrial centers, aid the concentration of Army and Navy units, intercept enemy aircraft and other military objects, and protect against enemy ballistic and cruise missile strikes.
Radar Technology Corps
Structure
The Ukrainian Air Force Command is based in the city of Vinnytsia. After the collapse of the USSR and the independence of its constituent republics it was organised on the basis of the Soviet Air Force's 24th Air Army of the High Command with an Operational Purpose (24-я воздушная армия Верховного Главного командования оперативного назначения (24-я ВА ВГК (ОН)). Ukrainian air force structure after the establishment of Air Command East on 23 January 2017:
Air Force Command
Ukrainian Air Force Command (Командування повітряних сил України) (Military Unit [MU] А0215), Vinnytsia.
Staff of the Air Force Command (Штаб Командування ПС)
Air Force Operations Command (Операційне командування Повітряних Сил)
Air Force Training Command (Командування підготовки Повітряних Сил)
Air Force Logistics Command (Командування логістики Повітряних Сил)
directly subordinated (частини безпосереднього підпорядкування командуванню Повітряних Сил):
Air Force Command Center (Командний центр повітряних сил) (MU А0535), м. Вінниця
9th Command Center of the Signals and ELINT System (9 пункт управління та контролю системою зв'язку та радіотехнічної розвідки)(MU А2833)
40th Special Communications Support Center (40 центр забезпечення спеціального зв'язку) (MU А1670)
41st Intelligence Command Center (41 командно-розвідувальний центр) (MU А2280)
43rd Command Center for Search and Rescue Support to Flight Operations of the Ukrainian Armed Forces (43 центр управління пошуково-рятувального забезпечення польотів авіації ЗСУ) (MU А1134)
85th Long Distance Radio-navigation Center (85 центр дальньої радіонавігації) (MU А3666)
114th Air Navigation Support Center (114 центр аеронавігаційного забезпечення) (MU А0985)
182nd Joint Information and Telecommunications Node (182 об'єднаний інформаційно-телекомунікаційний вузол) (MU А1660)
230th Separate Supply Base (230-та окрема база забезпечення) (MU А0549), м. Вінниця
tactical aviation (тактична авіація)
7th Tactical Aviation Brigade (Bomber-Recon) "Petro Franko" (7 бригада тактичної авіації (бомбардувально-розвідувальна)), Starokostiantyniv Air Base (Su-24M, Su-24MR, L-39C)
299th Tactical Aviation Brigade (Ground Attack) "Lt.-Gen. Vasil' Nikiforov" (299 бригада тактичної авіації (штурмова)), Kulbakino Air Base (Su-25, Su-25UB, L-39C)
383rd Separate UAV Aviation Regiment (383 окремий полк дистанційно-керованих літальних апаратів), Khmelnitsky (Bayraktar TB2)
transport aviation (транспортна авіація)
15th Boryspil Transport Aviation Brigade "Aircraft Designer Oleg Antonov" (15 бригада транспортної авіації), Boryspil Airport (VIP Transport: Tu-134AK, An-30)
25th Transport Aviation Brigade (25 бригада транспортної авіації), Melitopol Air Base (An-26, Il-76MD)
456th Guards Transport Aviation Brigade (456 бригада транспортної авіації), Vinnytsia Airport (An-24B, An-26, Mi-8)
other units (інші частини)
101st Separate Signals and Command Battalion (101 окремий полк зв'язку і управління) (MU А2656)
19th Separate Radio Intercept and ELINT Regiment (Special Purpose) (19 окремий полк радіо і радіотехнічної розвідки (особливого призначення)) (MU А3767)
NBC Surveillance and Analysis Station (Розрахунково-аналітична станція)
20th Special Signals and Radio-technical Equipment Repair Center (20 центр ремонта засобів зв'язку і радіотехнічного забезпечення) (MU А1724)
73rd Flying Air-Technical Laboratory (73 літаюча авіаційно-технічна лабораторія) (MU А3126)
101st Separate Repair and Overhaul Battalion (101 окремий ремонтно-відновлювальний батальйон) (MU А3549)
2007th Separate Repair and Overhaul Battalion (2007 окремий ремонтно-відновлювальний батальйон) (MU А4314)
99th Separate Stationary Automobile Repair Base (99 окрема стаціонарна база з ремонту автомобільної техніки) (MU А0294)
? Separate Stationary Automobile Repair Base (?? окрема стаціонарна база з ремонту автомобільної техніки) (MU А2466)
137th Joint Material Technical Supply Center (137 об'єднаний центр матеріально-технічного забезпечення) (MU А2287)
17th Air Force Technical Support Base (17 база авіаційно-технічного забезпечення) (MU А1840)
204th Complex Technical Control Node (204 вузол комплексного технічного контролю)
3rd Air Force Arsenal for Special Weaponry (3 арсенал повітряних засобів ураження) (MU А3177)
332nd Arsenal for Missile Systems and Armament (332 арсенал ракетного озброєння і боєприпасів) (MU А4245)
433rd Weaponry and Equipment Storage Base (433 база зберігання озброєння і техніки) (MU А1912)
485th Storage for Missile Systems and Armament (485 авіаційний склад ракетного озброєння і боєприпасів) (MU А2734)
649th Storage for Missile Systems and Armament (649 авіаційний склад ракетного озброєння і боєприпасів) (MU А3013)
(інші частини)
training and research establishments and units (навчальні заклади та частини)
Kharkiv National Air Force University 'Ivan Kozhedub' (Харківський національний університет Повітряних Сил імені Івана Кожедуба), Kharkiv, Kharkiv Oblast
203rd Training Aviation Brigade (203 навчальна авіаційна бригада) (MU А4104), Chuhuiv Air Base & Komunar Airbase, Kharkiv Oblast (L-39C, An-26, Mi-8T)
NCO Military College (Військовий коледж сержантського складу)
38th Joint Education Center of the KNAFU (38 об'єднаний навчальний центр Харківського НУПС) (MU А0704), Vasylkiv, Kyiv Oblast
41st Education and Training Center (41 навчально-тренувальний центр) (MU А2682), Danylivka, Kyiv Oblast
202nd Air Force Sergeants Training Center (202 центр підготовки сержантського складу ПС ЗСУ) (MU А1437), Vasylkiv, Kyiv Oblast
Mykolaiv Specialised Center for Combat Training of Aviation Specialists of the Ukrainian Armed Forces (Миколаївський спеціалізований центр бойової підготовки авіаційних фахівців Збройних Сил України) (MU А2488), Mykolaiv, Mykolaiv Oblast (Il-76MD, MiG-29, Su-25, L-39, Su-24)
State Scientific Test and Evaluation Center of the Ukrainian Armed Forces, Chernihiv
Koroliov Air Force Institute - Military Faculty of the National Aviation University, Kyiv
"LDARZ" State Aviation Maintenance Plant, Lviv
"ChARZ" Aviation Repair Plant, Chuhuiv
"Aviakon" Aviation Repair Plant, Konotop
"MARP" Aviation Repair Plant, Mykolaiv
Air Command West
Air Command West, Lviv
Command
193rd Airspace Control and Reporting Center, Lviv
76th Separate Signals and Command Regiment, Lypnyky
11th Security and Support Commandature, Lviv
114th Tactical Aviation Brigade (Fighter), Ivano-Frankivsk Air Base (MiG-29)
11th Anti-Aircraft Missile Regiment, Shepetivka (Buk-M1)
223rd Anti-Aircraft Missile Regiment, Stryi (Buk-M1)
540th Anti-Aircraft Missile Regiment, Kamianka-Buzka (S-300PS)
1st Radio-technical Brigade, Lypnyky
17th Separate Electronic Warfare Battalion
8th Aviation Commandature
25th Aviation Commandature
108th Aviation Commandature
204th Tactical Aviation Brigade, Lutsk Air Base (since 2018, prior to the 2014 Russian annexation of Crimea part of Task Force Crimea)
supply units
Air Command Central
Air Command Central, Vasylkiv
Command
192nd Airspace Control and Reporting Center, Vasylkiv
31st Separate Signals and Command Regiment, Kyiv
77th Security and Support Commandature, Vasylkiv
39th Tactical Aviation Brigade (Fighter), Ozerne Air Base (Su-27, L-39C)
40th Tactical Aviation Brigade (Fighter), Vasylkiv Air Base (MiG-29, L-39C)
831st Tactical Aviation Brigade (Fighter), Myrhorod Air Base (Su-27, L-39C)
96th Anti-aircraft Missile Brigade, Danylivka (S-300PS)
156th Anti-aircraft Missile Regiment, Zolotonosha (Buk-M1)
201st Anti-aircraft Missile Regiment
138th Radio-technical Brigade, Vasylkiv
21st Aviation Commandature
110th Aviation Commandature
112th Aviation Commandature
215th Aviation Commandature
supply units
Air Command South
Air Command South, Odessa
Command
195th Airspace Control and Reporting Center, Odessa
43rd Separate Signals and Command Regiment, Odessa
297th Security and Support Commandature, Odessa
160th Anti-aircraft Missile Brigade, Odessa (S-300PM)
208th Anti-aircraft Missile Brigade, Kherson (S-300PS)
201st Anti-aircraft Missile Regiment, Pervomaisk (S-300PS)
14th Radio-technical Brigade, Odessa
Radio-technical Battalion, Kherson
Radio-technical Battalion, Podilsk
1194th Electronic Warfare Battalion
15th Aviation Commandature
18th Aviation Commandature
supply units
Air Command East
Air Command East, Dnipro
Command
196th Airspace Control and Reporting Center, Dnipro
57th Separate Signals and Command Regiment, Dnipro
46th Security and Support Commandature, Dnipro
138th Anti-aircraft Missile Brigade, Dnipro (S-300PS)
3020th Anti-aircraft Missile Battalions Group
301st Anti-aircraft Missile Regiment, Nikopol (S-300PS)
164th Radio-technical Brigade, Kharkiv
2215th Radio-technical Battalion, Avdiivka
2315th Radio-technical Battalion, Rohan
2316th Radio-technical Battalion, Zaporizhzhia
2323rd Radio-technical Battalion, Mariupol
85th Aviation Commandature
supply units
Task Force Crimea
The task force on the Crimea peninsula is under the control of the Russian Armed Forces. On 8 April 2014 an agreement was reached between Russia and Ukraine "for the withdrawal of an undisclosed number of Ukrainian aircraft seized in Crimea".
Task Force Crimea:
40th Separate radio team (Liubymivka near Sevastopol)
204th Tactical Aviation Brigade (Belbek, near Sevastopol). Former 62nd Fighter Aviation Regiment PVO. Since 1 March 2014 Belbek Air Base and its 45 MiG-29s and 4 L-39s have been under the control of the Russian Armed Forces. Since 2018 the 204th Tactical Aviation Brigade has been based at Lutsk Air Base.
174th Anti-Aircraft Artillery regiment (Derhachi near Sevastopol. S-400)
50th Anti-Aircraft Artillery regiment (Feodosiya. S-400)
55th Anti-Aircraft Artillery regiment (Yevpatoriya. Buk-M1)
Geographic distribution
Military ranks
Training
Training activities have taken on a qualitatively new character due to their complexity, including the simultaneous employment of all branches of the Air Force (aviation, anti-aircraft artillery and radar troops) in close teamwork with units of the other services of the Armed Forces. Operational and combat training has included the following activities:
Aviation units have performed more than 6,000 tasks in combat scenarios (including more than 1,500 air battles and interceptions, 629 firings at land-based targets, 530 bombings, 21 launches of air missiles, 454 aerial surveillance tasks, 454 airborne landings, 740 airlifts, and 575 flight shifts for a total of 10,553 flying hours);
Five tactical flying missions in squadron strength, 14 in pairs and five in flight strength have been carried out to perform the assigned combat tasks, and 54 pilots have been trained to perform specific tasks in difficult meteorological conditions;
The number of flight crews trained to defend the country's air space and to conduct counter-terrorism air operations almost doubled, from 46 in 2005 to 90 in 2006; the units of anti-aircraft artillery and radar troops carried out 50 maneuvers involving redeployment, with each operator tracking 70 and 140 real and simulated targets, respectively.
In early September 2007, the Ukrainian Air Force conducted its largest-scale aircraft training exercise to date. As the Defense Minister of Ukraine, Anatoliy Hrytsenko, stated, "The most large-scale, during the whole 16 years of the Ukrainian independence, training of fighting aircraft, which defends our air space, was carried out during September 4–5". According to him, 45 combat launches of air-to-air missiles were carried out, 22 during the day and 23 at night, and 35 pilots confirmed their high skill level during the training. Hrytsenko stressed that 100% of air targets were hit.
The Kharkiv State Aircraft Manufacturing Company has developed the KhAZ-30 ultralight trainer for the Ukrainian Air Force. The aircraft is designed for elementary pilot training as an introductory aircraft before recruits move on to the more advanced Aero L-39 Albatros trainer.
Aircraft
Current inventory
Retired
Previous aircraft operated by the Air Force consisted of the MiG-21, MiG-23, MiG-25, MiG-27, Sukhoi Su-17, Sukhoi Su-15, Yakovlev Yak-28, Tupolev Tu-160, Tupolev Tu-95, Tupolev Tu-22M, Tupolev Tu-22, Tupolev Tu-16, Tupolev Tu-154, and the Tupolev Tu-134.
Air Defence
See also
Ukrainian Long Range Aviation
Ukrainian Falcons aerobatic demonstration team
Air Force ranks and insignia of Ukraine
Antonov
Notes
References
Operation Crimea 2014
Air Forces Monthly March 1994
The Ukrainian Army - uarmy.iatp.org.ua
Analysis of the Ukrainian Security Policy
Other images from foxbat.ru
Ukraine as a Post-Cold War Military Power
Ukrainian Air Force
Photos from Ukrainian Air Force museum in Kyiv & Poltava
External links
Air Force page on the official site of Ministry of Defence:
in Ukrainian
Photo gallery of the Ukrainian Air Force and Ukrainian Falcons in flight.
Obsolete 1990s pennants and patches, Linden Hill imports
Photos of Ukrainian Air Force
Air Force
Air forces by country
1992 establishments in Ukraine
Military units and formations established in 1992
|
359350
|
https://en.wikipedia.org/wiki/The%20Ur-Quan%20Masters
|
The Ur-Quan Masters
|
The Ur-Quan Masters is a 2002 open-source fangame modification, based on the action-adventure science fiction game Star Control II. The original game was released for PCs in 1992 and ported to the 3DO Interactive Multiplayer in 1994. It has been frequently mentioned among the best games of all time, with additional praise for its writing, world design, character design, and music.
After the Star Control II copyrights reverted to creators Paul Reiche III and Fred Ford, they licensed their content to their fan community to keep their series in the public eye. The open-source development team remade the 3DO version as a port to modern operating systems, and allowed fan-made modifications to add improvements absent in the original release. Released under the title The Ur-Quan Masters (the subtitle of the original game), the modified remake has since been downloaded nearly two million times, earning critical reception as one of the best free games available, with additional praise for a high-definition graphics fan modification.
Gameplay
The Ur-Quan Masters is a re-make of Star Control II, an action-adventure science fiction game set in an open universe. The game includes exploration, resource-gathering, combat, and diplomacy. Much of the game is played from a top-down perspective, and features real-time combat between alien ships with different abilities. The player can freely explore a galaxy with hundreds of stars, planets, and moons, which contain resources for the player to scan and retrieve in a lander vehicle. In diplomacy, the player converses with alien races in branching dialog sequences, with the goal of rallying an alliance to defeat the titular antagonists, the Ur-Quan. The combat featured in the story can also be played as a separate mode called "Super Melee".
The player plays the role of the captain of a spaceship, which returns to Earth after a lost research mission. The captain quickly discovers that Earth has been conquered by the Ur-Quan, and begins a quest to acquire knowledge, resources, and allies in order to free humanity from slavery. During the story, the Ur-Quan become entangled in a civil war, allowing the captain to contact dozens of unique alien races, and ultimately influence the outcome of the conflict. After rallying humanity's former allies, the captain is able to overcome and defeat the Ur-Quan.
Development
The Star Control series was created by Fred Ford and Paul Reiche, and published by Accolade. The first release in 1990 was a space strategy and action game, inspired by the 1961 space combat game Spacewar!. Star Control II, the 1992 sequel, abandoned the first game's strategic elements and greatly expanded the story, wrapping the combat system into an adventure-based narrative. Its port to the 3DO Interactive Multiplayer console in 1994 added fully voiced dialog and other updates to the sound and graphics. Star Control received awards upon release, and Star Control II received even more. Journalists have listed Star Control among their best games of all time, with Star Control II earning even more "best game" rankings through the 1990s, 2000s, and 2010s. Star Control II is also remembered among the best games in several creative areas, including writing, world design, character design, and music.
By the early 2000s, the Star Control II copyrights had reverted to Ford and Reiche, triggered by a contractual clause that applied once the game was no longer generating royalties. With the game no longer available in stores, Ford and Reiche wanted to keep it in the public eye so that they could one day make another game in the series. They still owned the rights to Star Control I and II, but they could not successfully purchase the Star Control trademark from publisher Accolade, so they chose the title The Ur-Quan Masters. Their independent studio Toys for Bob hired Chris Nelson as their first summer intern, who was enthusiastic about open-source software. Nelson worked with Ford to port the game to modern operating systems. Ford recalled, "we haven't made a sequel yet, so we thought the least we could do is release the source code and let the fans revive it on modern computers".
The open-source project officially launched in 2002, when Ford and Reiche licensed the source code from the 3DO version of Star Control II as open source under the GNU General Public License. Ford and Reiche own all the copyrighted content in the first two Star Control games, and granted the fan-operated project a free, perpetual license to the Star Control II content and the Ur-Quan Masters trademark. The first version of The Ur-Quan Masters suffered from performance issues, but Nelson knew skilled contacts in the open-source community who could make progress on the project. The fan community continued the project with further support, enhancements, and modifications. The credits screen names the "core team" as Serge van den Boom, Mika Kolehmainen, Michael Chapman Martin, Chris Nelson, and Alex Volkov. Ford and Reiche personally credit the open-source remake for making their creation available from 2001 to 2011, before Star Control became available for sale digitally through GOG.com. In an interview, the fans-turned-developers stated that a for-profit company would not be able to justify the port and remake, and that "without the open-source philosophy, The Ur-Quan Masters would never have existed".
Modifications
The Ur-Quan Masters has an active fanbase, maintaining both the open-source project and an extensive wiki. The most essential modifications extended the original code to operate on newer operating systems, resolving compatibility issues with the native DOS game. Fans have since modified and extended the project several times. Reiche has commented, "our policy has been to let people do whatever they want, as long as they don't turn our characters into mass murderers or make money with it. If you're making money with our stuff, we'd like a pizza".
The Ur-Quan Masters introduced features from the 3DO version that were previously unavailable on other platforms, including improved graphics and full voice acting. The extensions further added mod support and online multiplayer combat, neither of which were supported in the original games. The most notable fan modification is the high-definition version of the game, The Ur-Quan Masters HD, which was released in 2013. It was created by re-painting every frame of animation by hand.
Reception
Since its 2002 release, The Ur-Quan Masters has been downloaded nearly two million times as of 2021. Soon after its debut, the game was featured in PC PowerPlay in its compilation of free games, celebrating it as a "timeless classic" from the "golden age of gaming". Finnish magazine Pelit rated it five stars in 2004 for its timeless appeal, as well as new features and remixed music. Retro Gamer featured The Ur-Quan Masters on the cover of their June 2005 edition. They further praised Ford and Reiche for making such a high-quality game available as an open-source project, stating that "this small Californian group has seen fit to grace the gaming world with one of its finest achievements, and at no cost". In a 2011 feature about open-source games, Michael Blake of IGN lauded The Ur-Quan Masters as one of the greatest games and a "pitch-perfect port to modern operating systems", which "completely hooked me, with the genius single-player storyline and the hectic multiplayer of Super Melee mode both good enough to warrant the download on their own". Hardcore Gaming 101 also called it "a brilliant port and a fantastic initiative to keep old games relevant".
The Ur-Quan Masters has been included on several best games lists since its release. In 2008, PC Gamer named The Ur-Quan Masters as one of the best free games. Game Developer Magazine featured the game in its 2010 list of open-source space games, praising its scale and charm, as well as its new features. The game was also listed in Maximum PC's 2015 "best free games" feature. Tom's Guide included The Ur-Quan Masters in its list of top classic games re-released for free, praising its staying power: "few games today feature the same mix of narrative depth, sandbox exploration and enjoyable space combat that have won the game a cult following to this day". In 2019, PCGamesN ranked The Ur-Quan Masters as one of the top 15 space games ever made and "one of the best free PC games you'll ever find", noting its characters, dialog and sense of discovery.
The Ur-Quan Masters HD has received praise of its own. Rock, Paper, Shotgun celebrated it as an "ambitious and well-received fan-made (and free) remake", which "retains a certain 1990s vibe despite being made more appropriate to modern machines. It lends it a certain psychedelic silliness that today's more self-regarding space games seem to lack." Kotaku likewise praised the HD updates to the visuals and sound, and Dominic Tarason of PCGamesN described the detailed hand-painted modification as "a genuinely impressive piece of work". Since its release, The Ur-Quan Masters HD has been downloaded over 200,000 times on SourceForge.
References
External links
Official The Ur-Quan Masters website
Official The Ur-Quan Masters wiki – The Ultronomicon
2002 video games
Windows games
Linux games
MacOS games
Fangames
Star Control
Video game mods
Video game remakes
Video games developed in the United States
|
27418076
|
https://en.wikipedia.org/wiki/Precision%20livestock%20farming
|
Precision livestock farming
|
Precision livestock farming (PLF) is a set of electronic tools for managing livestock. It involves automated monitoring of animals to improve their production/reproduction, health and welfare, and impact on the environment. PLF tracks large animals, such as cows, "per animal"; however, it tracks animals like poultry "per flock". The whole flock in a house is tracked as one animal, especially in broilers.
PLF technologies include cameras, microphones, and other sensors for tracking livestock, as well as computer software. The results can be quantitative, qualitative and/or addressing sustainability.
Goals
PLF involves the monitoring of each individual animal, or the use of objective measurements on the animals, using signal analysis algorithms and statistical analysis. These techniques are applied in part with the goal of regaining an advantage of older, smaller-scale farming, namely detailed knowledge of individual animals. Before large farms became the norm, most farmers knew each of their animals by name. Moreover, a farmer could typically point out who the parents were and sum up other important characteristics. Each animal was approached as an individual. In the past three decades, farms have multiplied in scale, with highly automated processes for feeding and other tasks. Consequently, farmers currently are forced to work with many more animals to make their living out of livestock farming and work with average values per group. Variety has become an impediment to increasing economies of scale.
Using information technology, farmers can record numerous attributes of each animal, such as pedigree, age, reproduction, growth, health, feed conversion, killing out percentage (carcass weight as percentage of its live weight) and meat quality. Animal welfare, infection, aggression, weight, feed and water intake are variables that today can be monitored by PLF. Culling can now be done on the basis of reproduction values, plus killing out percentage, plus meat quality, plus health. The result is significantly higher reproduction outcomes, with each newborn also contributing to a higher meat value.
In addition to these economic goals, precision livestock farming supports societal goals: food of high quality and general safety, animal farming that is efficient but also sustainable, animal health and well-being, and a small ecological footprint of livestock production.
Economic livestock farming
Thanks to academic studies, the requirements of an animal are well known for each phase of its life and for its individual physical demands. These requirements allow the precise preparation of an optimal feed to support the animal. The requirements are oriented towards the needed nutrition: providing more nutrition than required makes no economic sense, while providing fewer nutrients can harm the health of the animal.
Quality and safety
Economic goals are an important factor in livestock farming, but not the only one. Legal bodies (such as governments) and industry bodies set quality standards that are legally binding on any livestock-producing company. In addition, societal standards are followed.
'Quality' in this context includes:
the quality of used ingredients
the quality of animal keeping
the quality of the processes
One example of an ingredient-quality issue is the (nowadays often illegal) use of meat-and-bone meal in feed for ruminant animals.
Ecological livestock farming
Selecting the "right" ingredients can have a positive effect on the environment pollution. It has been shown that optimizing the feed this can reduce nitrogen and phosphorus found in the excrements of pigs.
Tools
PLF starts with consistently collecting information about each animal. For this, there are several technologies: unique ID, electronic wearables to identify illness and other issues, software, cameras, etc.
Each animal requires a unique number (typically by means of an ear tag). This can be implemented as a visual ID, a passive electronic ID tag, or even an active electronic ID tag. For example, at birth, the farmer selects "Birth" from the menu on the reader, after which the interactive screen requests the user to read the tag of the mother. Next, tags are inserted in the ears of the newborns and read. With this simple action, important information is recorded (see the code sketch after this list), such as:
who is the mother
how many siblings did she deliver
what is the gender of each sibling
what is the date of birth
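As an illustration of how such a birth record could be captured in herd-management software, the short Python sketch below uses invented field names and tag numbers; it is not the data model of any particular commercial system.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Dict, List

    @dataclass
    class BirthEvent:
        """One birth record, keyed to the mother's ear-tag number."""
        dam_tag: str                                        # tag read from the mother
        birth_date: date
        offspring: List[Dict[str, str]] = field(default_factory=list)

        def add_offspring(self, tag: str, sex: str) -> None:
            # each newborn receives its own tag; sex is recorded at tagging
            self.offspring.append({"tag": tag, "sex": sex})

        @property
        def litter_size(self) -> int:
            return len(self.offspring)

    # usage: the reader scans the dam, then each newborn in turn
    event = BirthEvent(dam_tag="982 000123456789", birth_date=date.today())
    event.add_offspring("982 000123456790", "F")
    event.add_offspring("982 000123456791", "M")
    print(event.dam_tag, event.litter_size, event.offspring)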
Electronic wearables such as an active smart ear tag can collect data from individual animals, such as temperature and activity patterns. This data can be used to identify illness, heat stress, oestrus, and other conditions, enabling individualized care for the animals and methods to lower stress upon them. The end result is judicious use of drug treatments and nutrition to bolster healthy growth. This provides livestock producers with the tools to identify sick animals sooner and more accurately. Early detection reduces costs by lowering the re-treatment rate and death loss, and by getting animals back to peak performance faster.
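To illustrate how temperature data from a smart ear tag might be screened, the sketch below flags a reading that sits well above the animal's own recent baseline; the window length and threshold are arbitrary assumptions, not values used by any specific ear-tag product.

    from statistics import mean, stdev

    def flag_fever(temps_c, window=24, z_threshold=2.5):
        """Return True when the latest reading deviates strongly from the rolling baseline.

        temps_c: chronological body-temperature readings (deg C) for one animal.
        """
        if len(temps_c) <= window:
            return False                            # not enough history yet
        baseline = temps_c[-window - 1:-1]          # readings before the latest one
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            return temps_c[-1] > mu + 0.5           # fallback for a perfectly flat baseline
        return (temps_c[-1] - mu) / sigma > z_threshold

    # steady readings around 38.6 deg C, then a spike
    history = [38.5, 38.6, 38.7, 38.6] * 7 + [40.1]
    print(flag_fever(history))                      # True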
Data recorded by the farmer or collected by sensors is then gathered by software. Although software running on a single computer has been used, it has become more common for the software to connect to the internet, so that much of the data processing can happen on a remote server. Having the software connected to the internet can also make it easier to look up information about a particular animal. Due to high computational requirements, PLF requires computer-supported tools. The following types (available for PCs and via the Internet) are available:
Induction/processing software applications (a necessity for use with electronic active ID tags)
Automated livestock administration software
Reproduction optimization software
Feed formulation software
Quality management software
Examples in different industries
Dairy Industry
Robotic Milkers
In automatic milking, a robotic milker can be used for precision management of dairy cattle. The main advantages are time savings, greater production, a record of valuable information, and diversion of abnormal milk. There are many brands of robots available, including Lely and DeLaval.
Automatic Feeders
An automatic feeder is a tool used to automatically provide feed to cattle. It is composed of a robot (either on a rail system or self-propelled) that will feed the cattle at designated times. The robot mixes the feed ration and will deliver the correct amount.
Activity Collars
Activity collars are like fitness trackers for cows. Some wearable devices help farmers with estrous detection as well as with other adverse health events or conditions.
Inline Milk Sensors
Inline milk sensors help farmers identify variation of components in the milk. Some sensors are relatively simple technologies that measure properties like electrical conductivity. Other devices use automated sampling and reagents to provide a different measure to inform management decisions.
Meat Industry
EID / RFID / Electronic Identification / Electronic Ear Tags
Radio Frequency ID (commonly known as RFID or EID) is applied in cattle, pigs, sheep, goats, deer and other types of livestock for individual identification. In more and more countries, RFID or EID is mandatory for certain species. For example, Australia has made EID compulsory for cattle, as has New Zealand for deer, and the EU for sheep and goats. EID makes identification of individual animals much less error-prone. This enhances traceability, but it also provides other benefits such as reproduction tracking (pedigree, progeny and productivity), automatic weighing and drafting.
Smart Ear Tags
As prey animals, cattle hide their symptoms of illness from humans. The result is that illness is detected late and not very accurately using conventional methods. Smart cattle ear tags collect behavioural and biometric data from cattle 24 hours a day, 7 days a week, allowing managers to see exactly which animals need more attention regarding their health. This is effective in identifying illness earlier and more accurately than visual observation allows.
Swine Industry
There are many tools available to closely monitor animals in the swine industry. Size is an important factor in swine production.
Automated Weight Detection Cameras
Automated weight detection cameras can be used to estimate a pig's weight without a scale. These cameras can be accurate to within 1.5 kilograms.
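The usual idea behind such cameras is to segment the pig in a top-down image and regress its weight from the projected body area. The sketch below uses an invented linear calibration purely for illustration; real systems are calibrated against scale weights and often use depth cameras and more elaborate models.

    import numpy as np

    def estimate_weight_kg(mask: np.ndarray, px_area_to_cm2: float,
                           slope: float = 0.025, intercept: float = -15.0) -> float:
        """Estimate live weight from a binary top-view segmentation mask.

        mask: 2-D boolean array where True marks pixels belonging to the pig.
        px_area_to_cm2: calibration factor (cm2 of floor plane per pixel).
        slope/intercept: assumed linear fit from projected area (cm2) to weight (kg).
        """
        area_cm2 = float(mask.sum()) * px_area_to_cm2
        return slope * area_cm2 + intercept

    # toy example: a 300 x 300 pixel blob, each pixel covering 0.05 cm2
    mask = np.zeros((480, 640), dtype=bool)
    mask[100:400, 200:500] = True
    print(round(estimate_weight_kg(mask, px_area_to_cm2=0.05), 1))   # about 97.5 kg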
Microphones to Detect Respiratory Problems
In the swine industry, the presence of respiratory problems must be closely monitored. Multiple pathogens can cause infection; enzootic pneumonia, caused by Mycoplasma hyopneumoniae and other bacteria, is one of the most common respiratory diseases in pigs. This airborne disease can spread easily due to the proximity of the pigs in the herd. Early detection is important for using fewer antibiotics and minimising the economic loss caused by the pigs' loss of appetite. A common symptom is chronic coughing. A microphone can be used to detect the sound of coughing in the herd and raise an alert to the farmer.
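A very rough sketch of the principle follows, using short-time energy bursts as a stand-in for a real cough detector; commercial systems classify each sound event with trained acoustic models rather than a bare energy threshold.

    import numpy as np

    def count_loud_bursts(samples: np.ndarray, rate: int = 16000,
                          frame_ms: int = 50, energy_threshold: float = 0.1) -> int:
        """Count short, loud bursts in a mono waveform as candidate cough events.

        samples: float waveform scaled to [-1, 1]; rate: sampling rate in Hz.
        """
        frame_len = rate * frame_ms // 1000
        n_frames = len(samples) // frame_len
        frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
        energy = (frames ** 2).mean(axis=1)
        loud = energy > energy_threshold
        # count rising edges so one sustained burst is counted only once
        return int(np.count_nonzero(loud[1:] & ~loud[:-1]) + int(loud[0]))

    # toy example: quiet background with two short loud bursts
    audio = np.random.randn(16000 * 3) * 0.01
    audio[8000:8800] += 0.8 * np.sin(np.linspace(0, 200 * np.pi, 800))
    audio[30000:30800] += 0.8 * np.sin(np.linspace(0, 200 * np.pi, 800))
    print(count_loud_bursts(audio))             # expected: 2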
Climate Control
Thermal stress is connected to reduced performance, illness, and mortality. Depending on the geographical location and the types of animals, different heating or ventilation systems are required. Broilers, laying hens, and piglets need to be kept warm. Sensors can be used to constantly receive data about the climate in the livestock houses and about the automatic feeding systems. The behaviour of the animals can also be monitored.
Poultry Industry
In the poultry industry, unfavourable climate conditions increase the chances of behavioural, respiratory, and digestive disorders in the birds. Thermometers should be used to ensure proper temperatures, and animals should be closely monitored for signs of unsatisfactory climate.
Quantitative Methods, towards scientifically based management of livestock farming
The development of quantitative methods for livestock production includes mathematical modelling based on plant-herbivore or predator-prey models to forecast and optimise meat production. An example is the Predator-Prey Grassland Livestock Model (PPGL), which addresses the dynamics of the combined grass-animal system as a predator-prey dynamical system. The PPGL model has been used to simulate the effect of forage deficiency on a farm's economic performance.
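As an illustration of the modelling idea only (not the published PPGL parameterisation), a predator-prey style grass-livestock system can be integrated numerically; all parameter values below are invented for the sketch.

    import numpy as np

    def simulate_grass_livestock(days=365, dt=0.1,
                                 r=0.05, K=4000.0,     # grass growth rate and capacity (kg DM/ha)
                                 a=0.0001,             # grazing (intake) coefficient
                                 e=0.02, m=0.002):     # conversion efficiency and loss rate
        """Euler integration of a grass (G) / livestock (S) predator-prey system.

        dG/dt = r*G*(1 - G/K) - a*G*S   (logistic grass growth minus grazing)
        dS/dt = e*a*G*S - m*S           (growth from intake minus losses)
        """
        G, S = 2000.0, 50.0                            # initial grass biomass and stock units
        trajectory = []
        for _ in range(int(days / dt)):
            dG = r * G * (1 - G / K) - a * G * S
            dS = e * a * G * S - m * S
            G, S = max(G + dG * dt, 0.0), max(S + dS * dt, 0.0)
            trajectory.append((G, S))
        return np.array(trajectory)

    print(simulate_grass_livestock()[-1])              # grass and stock after one simulated year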
References
Livestock
Sustainable agriculture
Agriculture-related lists
|
1974089
|
https://en.wikipedia.org/wiki/ATA%20over%20Ethernet
|
ATA over Ethernet
|
ATA over Ethernet (AoE) is a network protocol developed by the Brantley Coile Company, designed for simple, high-performance access to block storage devices over Ethernet networks. It is used to build storage area networks (SANs) with low-cost, standard technologies.
Protocol description
AoE runs on layer 2 Ethernet. AoE does not use internet protocol (IP); it cannot be accessed over the Internet or other IP networks. In this regard it is more comparable to Fibre Channel over Ethernet than iSCSI.
With fewer protocol layers, this approach makes AoE fast and lightweight. It also makes the protocol relatively easy to implement and offers linear scalability with high performance. The AoE specification is 12 pages compared with iSCSI's 257 pages.
AoE Header Format:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
0 | Ethernet Destination MAC Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
4 | Ethernet Destination (cont) | Ethernet Source MAC Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
8 | Ethernet Source MAC Address (cont) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
12 | Ethernet Type (0x88A2) | Ver | Flags | Error |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
16 | Major | Minor | Command |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
20 | Tag |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
24 | Arg |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
AoE has the IEEE assigned EtherType 0x88A2.
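As a sketch based only on the header layout shown above (not a complete AoE implementation), the fixed 24-byte common header can be unpacked like this in Python:

    import struct

    AOE_ETHERTYPE = 0x88A2

    def parse_aoe_header(frame: bytes) -> dict:
        """Unpack the Ethernet header and the AoE common header from a raw frame."""
        dst, src, ethertype = struct.unpack_from("!6s6sH", frame, 0)
        if ethertype != AOE_ETHERTYPE:
            raise ValueError("not an AoE frame")
        ver_flags, error, major, minor, command, tag = struct.unpack_from("!BBHBBI", frame, 14)
        return {
            "dst_mac": dst.hex(":"),
            "src_mac": src.hex(":"),
            "version": ver_flags >> 4,     # high nibble of the Ver/Flags byte
            "flags": ver_flags & 0x0F,     # low nibble: response/error flags
            "error": error,
            "major": major,                # shelf address
            "minor": minor,                # slot address
            "command": command,            # e.g. 0 = issue ATA command, 1 = query config
            "tag": tag,                    # echoed back by the target to match responses
            "arg": frame[24:],             # command-specific arguments follow the header
        }

    # toy example: a broadcast query-config request with tag 42
    frame = struct.pack("!6s6sH", b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", AOE_ETHERTYPE)
    frame += struct.pack("!BBHBBI", 0x10, 0, 0xFFFF, 0xFF, 1, 42)
    print(parse_aoe_header(frame)["command"], parse_aoe_header(frame)["tag"])   # 1 42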
ATA encapsulation
SATA (and older PATA) hard drives use the Advanced Technology Attachment (ATA) protocol to issue commands such as read, write, and status. AoE encapsulates those commands inside Ethernet frames and lets them travel over an Ethernet network instead of a SATA or 40-pin ribbon cable. Although internally AoE uses the ATA protocol, it presents the disks as SCSI to the operating system. The actual disks can also be SCSI or any other kind; AoE is not limited to disks that use the ATA command set. By using an AoE driver, the host operating system is able to access a remote disk as if it were directly attached.
The encapsulation of ATA provided by AoE is simple and low-level, allowing the translation to happen either at high performance or inside a small, embedded device, or both.
Routability
AoE is a layer 2 protocol running at the data-link layer, unlike some other SAN protocols which run on top of layer 3 using IP. While this avoids the significant processing overhead of TCP/IP, it also means that routers cannot route AoE data across disparate networks (such as a campus network or the Internet). Instead, AoE packets can only travel within a single local Ethernet storage area network (e.g., a set of computers connected to the same switch or in the same LAN subnet or VLAN).
Security
The non-routability of AoE is its only security mechanism: an intruder cannot connect through a router and must instead physically plug into the local Ethernet switch (provided Ethernet frame tunneling over routed networks is not in use). There are no AoE-specific mechanisms for password verification or encryption. The protocol does provide for AoE targets, such as Coraid Storage appliances, vblade and GGAOED, to establish access lists ("masks") allowing connections only from specific MAC addresses (although these can be spoofed). Most deployments secure AoE by using Ethernet VLANs.
Config string
The AoE protocol provides a mechanism for host-based cooperative locking. When more than one AoE initiator is using an AoE target, they must communicate to avoid interfering with one another as they read and write data on the shared AoE device. Without this cooperation, file-system corruption and data loss are likely, unless access is strictly read-only or a cluster file system is used.
One option provided by AoE is to use the storage device itself as the mechanism for determining specific host access. This is the AoE "config string" feature. The config string can record who is using the device, as well as other information. If more than one host tries to set the config string simultaneously, only one succeeds. The other host is informed of the conflict.
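Conceptually, setting the config string behaves like a compare-and-set operation on the target. The toy Python model below illustrates only that semantics, not the AoE wire format or any real target's implementation.

    import threading

    class ConfigString:
        """Toy model of the AoE config-string set semantics: the first writer wins."""

        def __init__(self):
            self._lock = threading.Lock()
            self._value = b""

        def try_set(self, expected: bytes, new_value: bytes) -> bool:
            """Set the config string only if it currently equals `expected`."""
            with self._lock:
                if self._value != expected:
                    return False               # another host already claimed the device
                self._value = new_value
                return True

    target = ConfigString()
    print(target.try_set(b"", b"owned-by:host-a"))   # True: host A claims the device
    print(target.try_set(b"", b"owned-by:host-b"))   # False: host B is informed of the conflict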
Operating system support
The following operating systems provide ATA over Ethernet (AoE) support:
Hardware support
Coraid offered an array of AoE SAN appliances under the EtherDrive brand, along with diskless gateways that add network-attached storage functionality, using the NFS or SMB protocols, to one or more AoE appliances. The Coraid brand is now owned by SouthSuite, Inc., a company founded by Brantley Coile, who also founded Coraid.
In 2007, LayerWalker announced AoE hardware called miniSAN running at both Fast and Gigabit Ethernet speeds. The miniSAN product family offers standard AoE server functions plus other management features, targeting the PC, consumer, and small and medium business markets.
Related concepts
Although AoE is a simple network protocol, it opens up a complex realm of storage possibilities. To understand and evaluate these storage scenarios, it helps to be familiar with a few concepts.
Storage area networks
A SAN allows the physical hard drive to be removed from the server that uses it, and placed on the network. A SAN interface is similar in principle to non-networked interfaces such as SATA or SCSI. Most users will not use a SAN interface directly. Instead, they will connect to a server that uses a SAN disk instead of a local disk. Direct connection, however, can also be used.
When using a SAN to access storage, there are several potential advantages over a local disk:
It is easier to add storage capacity and the amount of storage is practically unlimited.
It is easier to reallocate storage capacity.
Data may be shared.
Additionally, compared to other forms of networked storage, SANs are low-level and high-performance.
Using storage area networks
To use a SAN disk, the host must format it with a filesystem. Unlike a SATA or SCSI disk, however, a SAN hard drive may be accessed by multiple machines. This is a source of both danger and opportunity.
Traditional filesystems (such as FAT or ext3) are designed to be accessed by a single host, and will cause unpredictable behavior if accessed by multiple machines. Such filesystems may be used, and AoE provides mechanisms whereby an AoE target can be guarded against simultaneous access (see: Config String).
Shared disk file systems allow multiple machines to use a single hard disk safely by coordinating simultaneous access to individual files. These filesystems can be used to allow multiple machines access to the same AoE target without an intermediate server or filesystem (and at higher performance).
See also
HyperSCSI
iSCSI
Fibre Channel over Ethernet (FCoE)
InfiniBand
Network block device
References
External links
Articles:
ATA Over Ethernet: Putting Hard Drives on the LAN — Linux Journal (28 April 2005)
ATA-over-Ethernet enables low-cost Linux-oriented SAN — LinuxDevices.com (23 June 2004)
The ATA over Ethernet (AoE) Protocol — Linux Magazine (June 15, 2005)
HowTos:
Using ATA Over Ethernet On Debian Etch
Protocol:
AoE protocol definition
Network protocols
AT Attachment
Ethernet
Storage area networks
|
63272765
|
https://en.wikipedia.org/wiki/Jody%20Wynn
|
Jody Wynn
|
Jody Wynn (née Anton; born February 21, 1974) is an American women's basketball coach. She was head coach at the University of Washington from 2017 to 2021 and at Long Beach State from 2009 to 2017.
High school
Jody Wynn was a prep standout in high school in Southern California. Her initial plans were to concentrate on swimming in high school with the goal of becoming an Olympic swimmer. However, while still in fifth grade, she was playing basketball when Mark Trakh, then the head coach of the Brea Olinda High School basketball team (and later the University of Southern California women's coach), approached her with some shooting tips and encouraged her to think about playing basketball when she reached high school. She did commit to playing basketball and played for Trakh, starting every game while winning three straight championships. Although she was the tallest player on the team, he had her playing at the two-guard position. She earned CIF-Southern Section and Orange County Player of the Year honors in 1991 and 1992. She was also tabbed a USA Today and Street & Smith's honorable-mention All-American.
Wynn played forward and was a four-year starter on the varsity squad, averaging 16 points per game as a senior. In her four years, the team had a 129–6 record and won three California state championships.
College
Wynn graduated from the University of Southern California in 1996, earning her Bachelor's degree in Exercise Science. In 2000, she completed a Master's degree in Education at Pepperdine University.
During her collegiate playing career (1993–96), the USC Trojans earned a cumulative record of 79-35 (.693). This team, which was headlined by notable WNBA players Lisa Leslie and Tina Thompson, won the 1994 Pac-10 Conference Championship.
The Trojans made three consecutive NCAA Tournament appearances from 1993 to 1995. During this time, Wynn played under three head coaches – Marianne Stanley (1993), Cheryl Miller (1994–95) and Fred Williams (1996) – in a four-year span. Wynn's best statistical season came during her junior year, when she started 27 games and averaged 8.2 points, 5.0 rebounds, and 3.0 assists per contest. Her senior year at USC was cut short by career-ending ankle surgery.
Coaching career
On April 7, 2009, Wynn was named Head Coach of the Long Beach State Women's Basketball program. On April 14, 2017, she was named Head Coach of the Washington Women's Basketball program. Wynn was fired by the University of Washington on March 15, 2021.
Head coaching record
Personal life
In 2000, Jody married Derek Wynn. They have two daughters.
Before taking up basketball, Wynn competed in girls' water polo and open-water swimming events. Her father played American football at Occidental College, while her mother was a U.S. Women's Amateur Golf champion.
References
1974 births
Living people
American women's basketball coaches
Female sports coaches
Long Beach State Beach women's basketball coaches
Pepperdine Waves women's basketball coaches
USC Trojans women's basketball coaches
USC Trojans women's basketball players
Washington Huskies women's basketball coaches
Basketball coaches from California
Basketball players from California
|
10780500
|
https://en.wikipedia.org/wiki/Programming%20productivity
|
Programming productivity
|
Programming productivity (also called software productivity or development productivity) describes the degree of the ability of individual programmers or development teams to build and evolve software systems. Productivity traditionally refers to the ratio between the quantity of software produced and the cost spent for it. Here the delicacy lies in finding a reasonable way to define software quantity.
Terminology
Productivity is an important topic investigated in disciplines as varied as manufacturing, organizational psychology, industrial engineering, strategic management, finance, accounting, marketing and economics. Levels of analysis include the individual, the group, divisional, organizational and national levels [5]. Due to this diversity, there is no clear-cut definition of productivity and its influencing factors, although research has been conducted for more than a century. As in software engineering, this lack of common agreement on what actually constitutes productivity is perceived as a major obstacle to a substantiated discussion of productivity. The following definitions describe the best available consensus on the terminology.
Productivity
While there is no commonly agreed on definition of productivity, there appears to be an agreement that productivity describes the ratio between output and input:
Productivity = Output / Input
However, across the various disciplines different notions and, particularly, different measurement units for input and output can be found. The manufacturing industry typically uses a straightforward relation between the number of units produced and the number of units consumed. Non-manufacturing industries usually use man-hours or similar units to enable comparison between outputs and inputs.
One basic agreement is that the meaning of productivity and the means for measuring it vary depending on what context is under evaluation. In a manufacturing company the possible contexts are:
the individual machine or manufacturing system;
the manufacturing function, for example assembly;
the manufacturing process for a single product or group of related products;
the factory; and
the company’s entire factory system.
As long as classical production processes are considered, a straightforward metric of productivity is available: how many units of a product of specified quality are produced at what cost. For intellectual work, productivity is much trickier. How do we measure the productivity of authors, scientists, or engineers? Due to the rising importance of knowledge work (as opposed to manual work), many researchers have tried to develop productivity measurement means that can be applied in a non-manufacturing context. It is commonly agreed that the nature of knowledge work fundamentally differs from manual work and, hence, factors besides the simple output/input ratio need to be taken into account, e.g. quality, timeliness, autonomy, project success, customer satisfaction and innovation. However, the research communities in these disciplines have not yet been able to establish broadly applicable and accepted means for productivity measurement. The same holds for the more specific area of programming productivity.
Profitability
Profitability and productivity are closely linked and are, in fact, often confused. However, profitability is usually defined as the ratio between revenue and cost:
Profitability = Revenue / Cost
It has a wider scope than productivity, i.e. the number of factors that influence profitability is greater than the number of factors that influence productivity. In particular, profitability can change without any change in productivity, e.g. due to external conditions like cost or price inflation. Besides that, the interdependency between productivity and profitability is usually delayed, i.e. gains in productivity are rarely reflected in immediate profitability gains; they are more likely realized in the long term.
Performance
The term performance is even broader than productivity and profitability and covers a plethora of factors that influence a company’s success. Hence, well-known performance controlling instruments like the Balanced Scorecard do include productivity as a factor that is central but not unique. Other relevant factors are e.g. the customers’ or stakeholders’ perception of the company.
Efficiency and effectiveness
Efficiency and effectiveness are terms that provide further confusion, as they are often mixed up with each other and, additionally, efficiency is often confused with productivity. The difference between efficiency and effectiveness is usually explained informally as: efficiency is doing things right, while effectiveness is doing the right things. While there are numerous other definitions, there is a certain agreement that efficiency refers to the utilisation of resources and mainly influences the required input of the productivity ratio. Effectiveness, on the other hand, mainly influences the output of the productivity ratio, as it usually has direct consequences for the customer. Effectiveness can be defined as "the ability to reach a desired output".
Generally, it is assumed, that efficiency can be quantified, e.g. by utilization rates, considerably more easily than effectiveness.
Quality
Tangen states: "Improvements in quality, other than the fact that no-fault products add to output levels, ought not to be included in the concept of productivity." However, most of the classic literature in non-software disciplines, especially in the manufacturing area, does not explicitly discuss the role of the quality of the output in the productivity ratio. More recent works from non-manufacturing disciplines have a stronger focus on knowledge, office or white-collar work and hence increasingly discuss the role of quality with respect to productivity.
Drucker stresses the importance of quality for the evaluation of knowledge worker productivity: "Productivity of knowledge work therefore has to aim first at obtaining quality—and not minimum quality but optimum if not maximum quality. Only then can one ask: "What is the volume, the quantity of work?""
Saari captures the importance of quality with his extended formula for productivity:
Total productivity = (Output quality and quantity)/(Input quality and quantity)
However, it appears that these efforts to include quality in the determination of productivity have not yet led to an operational concept. It currently remains unclear how to quantify the vague terms “Output quality and quantity” and “Input quality and quantity”, let alone how to calculate the ratio.
State of the art
In software development, measuring productivity is more complicated than in the production of goods, because software development is an engineering process rather than a repetitive manufacturing process.
COCOMO II
Boehm was one of the first researchers to systematically approach the field of software productivity. His cost estimation model COCOMO (now COCOMO II) is standard software engineering knowledge. In this model, he defines a set of factors that influence productivity, such as the required reliability or the capability of the analysts. These factors have been widely reused in other, similar productivity approaches. The rest of the model is based on function points and, finally, source lines of code (LOC). The limitations of LOC as a productivity measure are well known.
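For illustration, the effort equation of the original (basic) COCOMO model, which COCOMO II later refined with scale factors and a larger set of cost drivers, can be written in a few lines. The coefficients below are the commonly published basic-COCOMO values; the function name and the example project are made up.

```python
# Basic COCOMO effort estimation (Boehm), a much-simplified ancestor of COCOMO II,
# shown only to illustrate the shape of such parametric cost models.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small teams, well-understood problems
    "semi-detached": (3.0, 1.12),  # intermediate projects
    "embedded":      (3.6, 1.20),  # tightly constrained projects
}

def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Return the estimated effort in person-months for `kloc` thousand lines of code."""
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

# Example: a 32 KLOC organic project comes out at roughly 91 person-months.
print(round(basic_cocomo_effort(32, "organic")))
```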
Jones's software productivity
Jones is the author of a series of books on software productivity. Besides several theoretical considerations, his main contribution is the systematic provision and integration of a large amount of data relevant for productivity analyses. In at least two of his books, he gives a number of productivity factors but also points out that for each project a different set of factors is influential. These factors can form a basis for productivity assessments and for comparison with industrial averages.
This is one such list:
The 20 factors whose quantified impacts on software projects have been determined from historical data are the following:
Programming language used
Program size
The experience of programmers and design personnel
The novelty of requirements
The complexity of the program and its data
The use of structured programming methods
Program class or the distribution method
Program type of the application area
Tools and environmental conditions
Enhancing existing programs or systems
Maintaining existing programs or systems
Reusing existing modules and standard designs
Program generators
Fourth-generation languages
Geographic separation of development locations
Defect potentials and removal methods
Existing documentation
Prototyping before main development begins
Project teams and organization structures
Morale and compensation of staff
Function points
Function points were proposed in 1977 by Albrecht as a better size measure for software than LOC. They are based on the specification of the software and thereby aim at measuring the size of its functionality rather than the code itself. The reason is that the size of the code depends not only on the size of the functionality but also on the capability of the programmer: better programmers will produce less code for the same functionality. Function points have undergone several redesigns over the years, mainly driven by the International Function Point Users Group (IFPUG). This group is large, with over 1,200 companies as members, which shows the rather strong acceptance of this measure. However, in many domains it still lacks practical application because it is often perceived as applicable only to business information systems.
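As a rough sketch of how an unadjusted function point count is built up, the example below uses the weights commonly cited for the five IFPUG function types; the exact weights, complexity rules and the subsequent value-adjustment step are defined by IFPUG, so the numbers here should be treated as illustrative.

```python
# Unadjusted function point count using commonly cited IFPUG-style weights.
# Keys are (function type, complexity); the weights are illustrative.
WEIGHTS = {
    ("EI", "simple"): 3,  ("EI", "average"): 4,   ("EI", "complex"): 6,   # external inputs
    ("EO", "simple"): 4,  ("EO", "average"): 5,   ("EO", "complex"): 7,   # external outputs
    ("EQ", "simple"): 3,  ("EQ", "average"): 4,   ("EQ", "complex"): 6,   # external inquiries
    ("ILF", "simple"): 7, ("ILF", "average"): 10, ("ILF", "complex"): 15, # internal logical files
    ("EIF", "simple"): 5, ("EIF", "average"): 7,  ("EIF", "complex"): 10, # external interface files
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum the weighted counts, e.g. {("EI", "simple"): 12, ("ILF", "average"): 3}."""
    return sum(WEIGHTS[key] * n for key, n in counts.items())

print(unadjusted_function_points({("EI", "simple"): 12, ("ILF", "average"): 3}))  # 66
```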
Value-based software engineering
Several researchers have proposed economics-driven or value-based software engineering as an important paradigm in future software engineering research. Boehm and Huang point out that it is not only important to track the costs in a software project but also the real earned value, i.e. the value for the customer. They explain that it is important to create the software business case and keep it up to date. In essence, value-based software engineering focuses on customer value, mainly measured in monetary units.
Peopleware
The famous book Peopleware: Productive Projects and Teams by DeMarco and Lister brought the importance of people-related factors to the attention of a broader audience. They collected, from many software projects, experiences with good and bad management practices that influence the productivity of the team. They and others showed that these are decisive issues in software engineering, but they were only able to describe them anecdotally.
Factors influencing programming productivity
There are probably a large number of factors influencing the programming productivity of individuals and teams. For example, the software development process used probably influences the effectiveness and efficiency of a team.
The personalities of software programmers influence their coding styles, which, in turn, influence the productivity of the programmers.
References
Further reading
Software Cost Estimation with Cocomo II, Barry W. Boehm et al., Prentice Hall, 2000.
Developing Products in Half the Time: New Rules, New Tools, Preston G. Smith and Donald G. Reinertsen, Wiley, 1997.
Programming Productivity, Capers Jones, McGraw-Hill, 1986.
Estimating Software Costs, Capers Jones, McGraw-Hill, 2007.
Production economics
Software engineering costs
Software project management
|
3287389
|
https://en.wikipedia.org/wiki/Joe%20Nickell
|
Joe Nickell
|
Joe Nickell (born December 1, 1944) is an American skeptic and investigator of the paranormal.
Nickell is senior research fellow for the Committee for Skeptical Inquiry and writes regularly for their journal, the Skeptical Inquirer. He is also an associate dean of the Center for Inquiry Institute. He is the author or editor of over 30 books.
Among his career highlights, Nickell helped expose the James Maybrick "Jack the Ripper Diary" as a hoax. In 2002, Nickell was one of a number of experts asked by scholar Henry Louis Gates, Jr. to evaluate the authenticity of the manuscript of Hannah Crafts' The Bondwoman's Narrative (1853–1860), possibly the first novel by an African-American woman. At the request of document dealer and historian Seth Keller, Nickell analyzed documentation in the dispute over the authorship of "The Night Before Christmas", ultimately supporting the Clement Clarke Moore claim.
Early life, education and family
Joe Nickell is the son of J. Wendell and Ella (Turner) Nickell, and was born and raised in West Liberty, Kentucky. His parents indulged his interest in magic and investigation, allowing him to use a room in their house as a crime lab.
He earned a B.A. degree in 1967 from the University of Kentucky.
To avoid the draft for the Vietnam War, he moved to Canada the following year, in 1968, at the age of 24. There he began his careers as a magician, card dealer, and private investigator. After President Jimmy Carter granted unconditional pardons to draft dodgers in 1977, Nickell returned to the United States.
He returned to the University of Kentucky for graduate work, earning an M.A. (1982), and Ph.D (1987). His Ph.D is in English, focusing on literary investigation and folklore.
In late 2003, Nickell reconnected with his college girlfriend, Diana G. Harris. He learned that he had a daughter with her, Cherette, and two grandsons, Tyner and Chase. Harris had married and divorced in the intervening years. She and Nickell married on April 1, 2006. Harris has since assisted Nickell in his investigative work.
Harris had told Cherette that her biological father was her first husband, but the daughter questioned her lack of resemblance to him. On Cherette's wedding day, one of the guests mentioned that her parents were not married when she was conceived. Later, acting on intuition, Cherette challenged her mother directly about her father and sensed equivocation. After more conversations and a DNA test, Cherette learned that Nickell was her biological father.
Nickell used his daughter's claim that she had made an intuitive search for him as the basis for an article on the unconscious collection and processing of data. In it he concluded:
"Cautions notwithstanding, I must admit to a new appreciation of intuition, without which I would not have known of my wonderful daughter--and two grandsons! It's enough to warm an old skeptic's heart."
Career
Nickell has worked professionally as a stage magician, carnival pitchman, private detective, blackjack dealer, riverboat manager, university instructor, author, and paranormal investigator, and lists more than 1,000 personae on his website. Since the early 1980s, he has researched, written, co-authored, and edited books in many genres.
He was profiled by The New Yorker writer Burkhard Bilger, who met Nickell during the summer of 2002 at Lily Dale, New York. The investigator had disguised himself to investigate Spiritualist psychics.
Nickell is a recurring guest on the Point of Inquiry podcast and conducts the annual Houdini Seance at the Center for Inquiry every Halloween.
He is frequently consulted by news and television producers for his skeptical perspective.
Nickell explained his philosophy to Blake Smith of the Skeptic podcast MonsterTalk.
He served as a character consultant to Hilary Swank in her starring role in the horror film The Reaping (2007), in which she plays a paranormal investigator.
Books
Nickell's books can be divided into four main categories—religious, forensic, paranormal, and mysteries. He has also written two books for young readers and two stand-alone books, one on UFOs, one on a regional alcoholic drink, and several additional small press and "contributed to" books.
Miracles and religious artifacts
Nickell has investigated religious artifacts and claimed phenomena. Beginning in 1982 with his book Inquest on the Shroud of Turin: Latest Scientific Findings, Nickell demonstrates his research model of collecting evidence and following that evidence to a sustainable conclusion. He updated the book in 1998 with more recent historical, iconographic, forensic, physical and chemical evidence, with special explanations of the radiocarbon dating process.
In his 1993 book, Looking for a Miracle: Weeping Icons, Relics, Stigmata, Visions and Healing Cures, updated in 1998, Nickell analyzes miracles claimed by various religions. For each incident, Nickell reviews the contemporaneous written accounts, explores various natural explanations, explains the cultural environment surrounding the events, and speculates on the motivations of the affected religious community. He concludes that the claimed miracles were either hoaxes or misinterpretations of natural phenomena.
One such case was the weeping St. Irene icon in Queens, New York, which Nickell investigated.
Relics of the Christ (2007, British edition published as The Jesus Relics: From the Holy Grail to the Turin Shroud), focuses on the Christian tradition of relics. Speaking with D.J. Grothe on the Point of Inquiry podcast, Nickell proposed that veneration of relics had become a new idolatry; that is, worship of an actual deity within the relics in form of an entity that moves its eyes, weeps, bleeds, and walks. He said that although no icon in history has ever been proven authentic, in the sense of displaying such attributes, he approaches each case with a suspension of disbelief: "I'm interested in the evidence because I want us to know what the truth is ... I urge skeptics ... not to be as closed-minded as the other side is ridiculously open-minded."
In 2008, Prometheus Books published John Calvin's Treatise on Relics with an introduction by Nickell. He wrote a brief biography of Calvin and uses references from his own 2007 Relics book.
In his The Science of Miracles: Investigating the Incredible (2013), Nickell applied his investigative technique to 57 reported miracles. From the Virgin Mary's face appearing on a grilled cheese sandwich, to the Cross's regeneration after pieces were removed, to the structural deficiencies of the Loretto Chapel staircase, fact and myth are presented with clarity and respect. The book was criticized in the New York Journal of Books for research limited to non-Biblical sources.
Forensic investigations
Nickell's first book in the authentication genre was Pen, Ink, and Evidence: A Study of Writing and Writing Materials for the Penman, Collector, and Document Detective, described as a definitive work for researchers and practitioners. Mary Hood of the Georgia Review praised Nickell's scholarship.
In Camera Clues: A Handbook for Photographic Investigation, Nickell begins with the history of photography. He presents methods of dating photographs, from the physical characteristics of the work, to the subject and contents of the photo. He explains how old photographs can be faked and how those fakes can be detected. He also describes identification of persons and places in old photos and the use of photography by law enforcement. He explains various trick photography techniques, including ghost and spirit photography. These have become even more elaborate with the use of computer images or digital camera technology.
Detecting Forgery: Forensic Investigation of Documents (1996) presents an overview of a document expert's work. He says that forged documents are often revealed by the forgers' ignorance of or inability to re-create historic typefaces, inks, papers, pens, watermarks, signatures, and historic styles. Nickell explains forgeries of Daniel Boone's musket, Mark Hofmann's Mormon papers, and the Vinland Map.
According to Publishers Weekly, Crime Science: Methods of Forensic Detection (1998) provided extensive basic information, with brief case studies.
In Real or Fake: Studies in Authentication (2009), Nickell drew on his early work related to technical aspects of paper, ink, typefaces, pens, and other keys to determining authenticity of paper documents. New material details the step-by-step investigations of specific cases: the purported diary of Jack the Ripper (fake), The Bondwoman's Narrative (date authenticated, author unknown), Lincoln's Lost Gettysburg Address (fake), and An Outlaw's Scribblings (fake).
Paranormal investigations
Secrets of the Supernatural: Investigating the World's Occult Mysteries was Nickell's first book in his paranormal investigation genre. He and his collaborator, John F. Fischer, investigate the Crystal Skulls, spontaneous human combustion, the Mackenzie House, and lesser-known mysteries. On a Point of Inquiry podcast years later, Nickell explained that the same mysteries are reported over and over again because, "For each new generation, they have to re-learn that there is controversy ... Each new generation hears these for the first time ... It's an endless process in which you have to be willing to speak to the next crop of people."
Missing Pieces: How to Investigate Ghosts, UFOs, Psychics, and Other Mysteries, written by Nickell and Robert A. Baker, is a handbook that combines the practical techniques of investigating the paranormal with a description of the psychology of believers. Nickell often quoted Baker, "... there are no haunted places, only haunted people."
Mysterious Realms: Probing Paranormal, Historical, and Forensic Enigmas, written by Nickell and Fischer, analyzes 10 frequently reported mysteries, including the Kennedy assassination, Kentucky's Gray Lady ghost, and UFO cover-up conspiracy theories.
Nickell asked several researchers to investigate claims of psychic detectives and collected their reports in Psychic Sleuths: ESP and Sensational Cases. None of the reports credits the psychics with factually supported insights. Nickell concludes that these individuals were either self-deluded or frauds who used psychological techniques to gain information, such as cold reading in discussions with police detectives, or retrofitting.
In Entities: Angels, Spirits, Demons, and Other Alien Beings, Nickell shows the development of ghost stories since the 17th century, and how they have been influenced by changing technology and communication methods. The faked Cottingley Fairies photographs, for example, became possible only when cameras became available to the general public.
The Outer Edge: Classic Investigations of the Paranormal is a collection of articles edited by Nickell, Barry Karr and Tom Genoni. It features Nickell and John F. Fischer's 1987 article, "Incredible Cremations: Investigating Spontaneous Combustion Deaths," along with essays by Martin Gardner, Ray Hyman, Susan Blackmore, and James Randi.
Adventures in Paranormal Investigation is a more detailed work than many of Nickell's. He ranges from dowsing to Frankenstein to healing spas. He includes an essay about learning that he had an adult daughter and accepting that she attributed her search for him to "intuition".
The first half of CSI Paranormal is a handbook on how to investigate paranormal claims. Nickell discusses his investigative strategy to:
Investigate on site
Check details of an account
Research precedents
Carefully examine physical evidence
Analyze development of a phenomenon
Assess a claim with a controlled test or experiment
Consider an innovative analysis
Attempt to recreate the "impossible"
Go undercover to investigate
In the second half of the book, Nickell shows how this strategy has been used to evaluate the claims of the Giant Ell, the Roswell UFO, the grilled cheese Madonna, and John Edward.
In The Science of Ghosts (2012), Nickell relates several archetype ghost stories—the girl in the snow, Elvis, phantom soldiers, and haunted lighthouses, castles, ships, and theaters. By tracking the development of these stories over the years, he demonstrates that the stories are not evidence of spirits, but evidence of the effects an appropriate setting can have on susceptible witnesses. He includes an analysis of 21st-century paranormal investigators, particularly Jason Hawes and Grant Wilson of the Syfy Channel's Ghost Hunters. He compares their investigations of the Myrtles Plantation, the Winchester Mystery House and the St. Augustine Lighthouse with his own.
Mysteries
Ambrose Bierce Is Missing And Other Historical Mysteries was Nickell's 1992 foray presenting historical investigations to the reading public. In the introduction, he uses the legal concepts of "a preponderance of the evidence" and "clear and compelling evidence" as standards by which hypotheses explaining mysteries should be objectively measured. Subjectively wishing an explanation is true can lead to imposing a hypothesis on the data instead of using data to test a hypothesis (the scientific method). Nickell's 2005 update of Ambrose Bierce, Unsolved History: Investigating Mysteries of the Past, is the same text with the addition of two books to its "Recommended Works".
Real-Life X-Files and its sequel, The Mystery Chronicles, are a series of short essays on the histories, expanding mythologies, and likely causes of several dozen mysteries. In some cases, Nickell re-creates the legends, demonstrating that no special powers are needed to duplicate the effects. In others, he answers common lore with facts uncovered in his research. In 1982, Nickell and five of his relatives created a 440-foot-long condor in a field in Kentucky by plotting coordinates of points on a drawing, a technique Nickell believes could have been used to create the Nazca Lines in Peru. "That is, on the small drawing we would measure along the center line from one end (the bird's beak) to a point on the line directly opposite the point to be plotted (say a wing tip). Then we would measure the distance from the center line to the desired point. A given number of units on the small drawing would require the same number of units—larger units—on the large drawing." In the case of West Virginia's Mothman, Nickell interviewed witnesses and conducted on-site experiments, and concluded that the misidentification of an owl, most likely a barred owl, was the most likely explanation for the original sightings.
Harry Eager of the Maui News calls Secrets of Sideshows "... virtually an encyclopedia of that nearly extinct form of entertainment." He faults Nickell for downplaying the brutality and grim fakery of the shows, especially what he calls "prettying" the geeks.
Lake Monster Mysteries: Investigating the World's Most Elusive Creatures is a collaboration between Nickell and Ben Radford. Author Ed Grabianowski has summarized one of the many possible explanations for lake monster sightings that the book examines.
The research for Tracking the Man-Beasts: Sasquatch, Vampires, Zombies, and More took Nickell to many locations of reported monster sightings—the Pacific Northwest for Bigfoot, Australia for the Yowie, Austria for werewolves, New England for vampires, Argentina for the Chupacabra, West Virginia for aliens, and Louisiana for the swamp creatures. Nickell traces the monsters' iconography from first reports to latest sightings, concluding that the tales reflect the evolution of their cultural environment, not any basis in fact. A quote from his guide in the Louisiana swamps provides insight into the genesis of the tales, "... frightening tales could sometimes have been concocted to keep outsiders away—to safeguard prime hunting territory or even possibly to help protect moonshine stills. Charbonnet also suggested that such stories served in a bogyman fashion, frightening children so they would keep away from dangerous areas."
Young readers
In 1989, Nickell wrote his first book for young readers, The Magic Detectives: Join Them in Solving Strange Mysteries, engaging children by presenting paranormal stories in the form of mysteries with clues embedded in the narrative. The solutions, printed upside down, follow each story. The book also contains teachers' guides for additional assignments and recommended readings.
The 1991 Wonder Workers! How They Perform the Impossible was summarized by P.J. Rooks as, "... a 200-year, biographical tour of some of the more famous shenanigans and side show splendors of both sincere and charlatan magicians ... {that} guides readers on a fascinating exposé of magical history that leaves us, at the end of every page, thinking, "A-ha! I was wondering how they did that!"
UFOs
In 1997, Nickell, with Kendrick Frazier and Barry Karr, published The UFO Invasion, an anthology of UFO articles written for the Skeptical Inquirer covering the topic from history and abductions to Roswell and crop circles. The editors included six of Nickell's articles in the book. Nickell explained the physiology of alien abduction stories, "People claiming to be abducted by aliens is such an astonishing thing that you think they have either be crazy or lying, and in fact they may be perfectly sane and normal. ... They probably were having these powerful waking dreams. ... In this state, they tend to see bizarre imagery. ... The other kind of experience is hypnosis. ... Hypnosis is the yellow brick road to fantasy land."
Other investigations
The Shroud of Turin
The Shroud of Turin, claimed to be the burial cloth of Jesus miraculously imprinted with the image of his crucified body, is one of Christianity's most famous icons. The Roman Catholic Church, in possession of the Shroud since 1983, has allowed several public viewings and encourages devotions to the image, but takes no official position on the icon's authenticity. Nickell and others contend the Shroud is a 14th-century painting on linen, verified through radiocarbon dating. One of Nickell's many objections to the Shroud's authenticity is the proportions of the figure's face and body. Both are consistent with the proportions used by Gothic artists of the period and are not those of an actual person. Experts on both sides of the controversy have tried to duplicate the Shroud using medieval and modern methods. Claimants to the Shroud's authenticity believe the image could have been produced at the moment of resurrection by radiation, electrical discharge, or ultraviolet radiation; Nickell created a credible shroud using the bas relief method and contends that forgers had equivalent materials available during the 14th century.
The Warrens
Although Nickell rejects the term "debunker" to describe his work, his evidence-based investigations of paranormal events have not yet uncovered any miracles, ghosts or monsters. His insistence on documented facts led to a heated exchange with Ed and Lorraine Warren on the Sally show in 1992. Nickell and the Warrens appeared on Sally Jessy Raphael's talk show with the Snedeker family, whose reports of ghosts and demons led to the 1992 book In A Dark Place: The Story Of A True Haunting by novelist Ray Garton and the 2009 movie The Haunting in Connecticut. The appearance ended with an on-air threat of violence from Ed Warren.
Nickell continues to cite the Warrens as an example of exploitative and harmful charlatans, and has discussed the case with Blake Smith, host of the MonsterTalk podcast.
Aliens
Nickell proposes that alien encounters are the result of misinterpreted natural phenomena, hoaxes, or a fantasy-prone personality. He has also discussed the evolving nature of alien sightings with the Skeptics' Guide to the Universe podcast team.
Magazine articles and website blogs
Nickell has written the "Investigative Files" column for the Skeptical Inquirer (SI) magazine since 1995 and contributes frequently to the Center for Inquiry website. The articles reflect the range of Nickell's interests and investigative skills, including spontaneous human combustion, ghost photographs, reincarnation, voodoo, Bigfoot, quack medicine, Elvis, psychic frauds, and phrenology. In his SI article about the Bell Witch Poltergeist, Nickell analyzed the content of the alleged Bell Manuscript for anachronistic references and word use, comparing the writing styles of Richard Williams Bell, the reported original author, and M.V. Ingram, the reporter who expanded on the story 50 years later. Nickell concludes, "Given all of these similarities between the texts, in addition to the other evidence, I have little hesitation in concluding that Ingram was the author of 'Bell'".
Nickell's writing for the Center for Inquiry (CfI) includes "Nickell-odeon Reviews", written with an emphasis on the facts behind the scripts. Nickell adds credibility to the plot of the Charles Dickens movie, The Invisible Woman. "Although not mentioned in the movie, posthumous confirmation of the affair came from Dickens' letters. Although many had been destroyed by his family, some merely had offending passages inked out. But that cloak of invisibility was ineffective: Dickens scholars turned to forensics, using infrared photography to read the obscured portions. These contained references to "Nelly" and confirmed the persistent rumors."
Awards
Nickell received the 2004 Isaac Asimov Award from the American Humanist Association and was a co-recipient of the 2005 and the 2012 Robert P. Balles Prize in Critical Thinking, awarded by CSICOP, now called CSI. In 2000 he was presented with the Distinguished Skeptic Award from CSI.
He was also presented with an award for the promotion of science in popular media at the 3rd Annual Independent Investigative Group (IIG) Awards, held on May 18, 2009.
In October 2011 asteroid 31451 (1999 CE10) was named JoeNickell in his honor by its discoverer James E. McGaha.
Major works
Inquest on the Shroud of Turin: Latest Scientific Findings (Prometheus Books: Amherst, NY; 1983). Revised edition, 1998.
Secrets of the Supernatural: Investigating the World's Occult Mysteries (Prometheus Books: Amherst, NY; 1988, 1991; with John F. Fischer).
The Magic Detectives: Join Them in Solving Strange Mysteries (Prometheus Books: Amherst, NY; 1989).
Pen, Ink, and Evidence: A Study of Writing and Writing Materials for the Penman, Collector, and Document Detective (Oak Knoll Books: New Castle, DE; 1990, 2000, 2003).
Wonder-Workers! How They Perform the Impossible (Prometheus Books: Amherst, NY; 1991).
Unsolved History: Investigating Mysteries of the Past originally published as Ambrose Bierce is Missing and Other Historical Mysteries (University Press of Kentucky: Lexington, KY; 1992, 2005).
Missing Pieces: How to Investigate Ghosts, UFOs, Psychics, and Other Mysteries (Prometheus Books: Amherst, NY; 1992; with Robert A. Baker).
Mysterious Realms: Probing Paranormal, Historical, and Forensic Enigmas (Prometheus Books: Amherst, NY; 1992; with John F. Fischer).
Looking for a Miracle: Weeping Icons, Relics, Stigmata, Visions and Healing Cures (Prometheus Books: Amherst, NY; 1993, 1998).
Psychic Sleuths: ESP and Sensational Cases (Prometheus Books: Amherst, NY; 1994).
Camera Clues: A Handbook for Photographic Investigation (University Press of Kentucky: Lexington, KY; 1994, 2005).
Entities: Angels, Spirits, Demons, and Other Alien Beings (Prometheus Books: Amherst, NY; 1995).
Detecting Forgery: Forensic Investigation of Documents (University Press of Kentucky: Lexington, KY; 1996, 2005).
The Outer Edge: Classic Investigations of the Paranormal (CSICOP: Amherst, NY; 1996, co-edited with Barry Karr and Tom Genoni).
The UFO Invasion: The Roswell Incident, Alien Abductions, and Government Coverups (Prometheus Books: Amherst, NY; 1997; co-edited with Kendrick Frazier and Barry Karr).
Crime Science: Methods of Forensic Detection (University Press of Kentucky: Lexington, KY; 1999; with co-author John F. Fischer).
Real-Life X-Files: Investigating the Paranormal (University Press of Kentucky: Lexington, KY; 2001).
The Kentucky Mint Julep (University Press of Kentucky: Lexington, KY; 2003).
Investigating the Paranormal (Barnes & Noble Books: New York; 2004).
The Mystery Chronicles: More Real-Life X-Files (University Press of Kentucky: Lexington, KY; 2004).
Secrets of the Sideshows (University Press of Kentucky: Lexington, KY; 2005).
Cronache del Misterio (Newton Compton editori: Rome, Italy; 2006).
Lake Monster Mysteries: Investigating the World's Most Elusive Creatures, (University Press of Kentucky: Lexington, KY; 2006; with co-author Benjamin Radford).
Relics of the Christ (University Press of Kentucky: Lexington, KY; 2007).
Adventures in Paranormal Investigation (University Press of Kentucky: Lexington, KY; 2007).
Tracking The Man-Beasts: Sasquatch, Vampires, Zombies, and More (Prometheus Books: Amherst, NY; 2011).
Real or Fake: Studies in Authentication (University Press of Kentucky: Lexington, KY; 2009).
CSI Paranormal (Inquiry Press: Amherst, NY; 2012).
The Science of Ghosts (Prometheus Books: Amherst, NY; 2012).
The Science of Miracles: Investigating the Incredible (Prometheus Books: Amherst, NY; 2013).
References
External links
Skepticon 4, Undercover! Paranormal Investigations by Joe Nickell November 30, 2011 on YouTube, 38 minutes.
Dragon*Con 9 Joe Nickell and Graham Watkins Discuss Cryptozoology September 15, 2009 on YouTube, 64 minutes.
Darwin Week 2012 Investigating the Paranormal by Joe Nickell February 19, 2012 on YouTube, 76 minutes.
1944 births
20th-century American non-fiction writers
20th-century American writers
21st-century American non-fiction writers
21st-century American writers
American historians
American humanists
American magicians
American male bloggers
American bloggers
American male non-fiction writers
American skeptics
American social commentators
Critics of alternative medicine
Critics of conspiracy theories
Critics of parapsychology
Cultural critics
Historians of magic
Jack the Ripper
Living people
People from West Liberty, Kentucky
Researchers of the Shroud of Turin
Social critics
Trope theorists
University of Kentucky alumni
Writers about religion and science
20th-century American male writers
|
36919934
|
https://en.wikipedia.org/wiki/AnyMeeting
|
AnyMeeting
|
AnyMeeting, Inc. (formerly Freebinar) is a provider of web conferencing and webinar services for small businesses that enable users to host and attend web-based conferences and meetings and share their desktop screens with other remote users via the web. AnyMeeting is a web-based software application accessible via a web browser. This method of software delivery is commonly referred to as Software as a Service (SaaS). The company was founded in 2011 and backed by Keiretsu Forum angel investors.
Features
Features include 6-way video conferencing, screen sharing, application sharing, recording, public profiles, surveys, polls, audio via conference call or computer microphone and speakers, YouTube video sharing, and an additional option that enables meeting hosts to charge attendees (via PayPal) to access a webinar.
AnyMeeting provides two primary options for accessing the features and functionality of its service: a free, ad-supported option and a subscription-based option with no ads. Advertisements are shown to meeting organizers and attendees in the sidebar of the meeting application window. The subscription option includes the same features as the ad-supported option, but with the advertisements removed. Ad-free plans are available for 25 and 200 attendees. AnyMeeting runs on Adobe Flash Player in all modern browsers, including the latest versions of Internet Explorer, Mozilla Firefox, Google Chrome and Safari (Mac).
History
The beta version of AnyMeeting was originally launched in 2009 under the name Freebinar. It was founded by Costin Tuculescu, a 12-year veteran of the web conferencing industry. In 2009, Tuculescu identified an opportunity to try something no one else was doing: using a free, ad-supported business model to deliver a web conferencing and webinar service. As of August 2017, AnyMeeting had over one million registered users.
AnyMeeting was acquired by Intermedia in September 2017.
Security
AnyMeeting online meetings can be protected with an encrypted password feature for those who want to restrict access to their meeting or charge a fee to attend. Meeting recordings can also be password protected.
Accolades
AnyMeeting was named by CIO.com in its list of the best free stuff of 2012.
AnyMeeting was named by PC World as one of the 15 best free business tools, apps and services of 2012.
AnyMeeting was named in Small Business Computing as one of 3 software products small businesses need to know.
AnyMeeting was named by PCMag.com as one of the best free web apps of 2011.
Integrations
Google Apps Marketplace
VMware's Zimbra
References
External links
Web conferencing
Groupware
Remote desktop
|
8838744
|
https://en.wikipedia.org/wiki/Change%20ringing%20software
|
Change ringing software
|
Change ringing software encompasses the several different types of software in use today in connection with change ringing.
Modern
The Central Council of Church Bell Ringers maintains a list of change ringing software.
There are four general types of software used in connection with change ringing: tools for composition, simulation, record keeping, and maintaining up-to-date bell tower directories.
Composing tools
The most common use of software in change ringing is composition proving. This type of software takes the tedium out of proving change ringing compositions: that is, checking that no change within the composition is repeated. The software performs the checks required to prove a composition in milliseconds, rather than the hours or days required by paper-based proving methods. These programs can often also analyse compositions to determine the musical rows that they contain.
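At its core, proving a simple touch means generating every row (permutation of the bells) that the composition produces and confirming that none is repeated. The sketch below shows only that final check, assuming the rows have already been generated; real proving software also derives the rows from place notation and handles multi-extent truth rules, which are omitted here.

```python
def is_true(rows) -> bool:
    """Return True if no row (change) occurs twice in the composition."""
    seen = set()
    for row in rows:
        if row in seen:
            return False  # a repeated change makes the composition false
        seen.add(row)
    return True

# Example on four bells: rounds, two further changes, then an accidental repeat.
touch = [(1, 2, 3, 4), (2, 1, 4, 3), (2, 4, 1, 3), (2, 1, 4, 3)]
print(is_true(touch))  # False, because (2, 1, 4, 3) appears twice
```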
In recent years, more advanced tools have emerged which can assist the human composer in other ways. These range from pure composition-generation programs such as BYROC and Elf, to more sophisticated programs such as SMC32, which can work alongside the human composer, for instance by linking together existing musical blocks which the composer has created.
The main examples of proving software are:
Trident
Beltower
Some examples of composition generation tools:
BYROC
Elf
Simulator
Simulators were originally used to allow the practising of change ringing in the tower, but nowadays they are perhaps used more in the home, with a dumbbell or keyboard. Many different scenarios can now be accommodated by the software. Sensors are used to give timing information to the computer.
Single or multiple learner silent practise, with the computer producing the sound of the bell
Whole band silent practise, with the computer producing the sound of all the bells
Practise on more bells than you have, with the computer adding in the extra bells
Practise methods more advanced than your band is capable of, with the computer filling in all the bells
The main examples of simulator software are:
Abel
Beltower
Virtual Belfry
Recordkeeping
Keeping records is very important to some change ringers. Records are often kept in the following areas:
Grabs (towers visited)
Peals
Quarter peals
The main examples of record keeping software are:
WinRK
Duco
Tower directory
An up-to-date directory of towers is often used to plan outings and, in conjunction with record-keeping software, to decide which towers still need to be grabbed.
The main examples of tower directory software are:
TowerBase
History
The earliest known change ringing programs can be traced back to the 1950s. Some of this history has been documented in accounts of software "firsts" in change ringing.
See also
Braid theory
Campanology
References
External links
Bell Conductor software to help to achieve harmony and melody through visualization of a ringing sequence from text and midi files
"Abel" simulator software
Excalibur composition proving software
Beltower bell ringing simulator, prover, composer, method/touch editors and printing software
Virtual Belfry simulator software, high-resolution photographic animation of change ringing bells
Visual Method Archive (VMA) – simple, FREE, web-based tool to view blue lines for methods and experiment with generating your own
Elf – online lead and half-lead spliced composing engine
Stedman Pricker – assists the Stedman composer in pricking and proving compositions on all numbers
Smart Phone Apps for Ringers – directory of free and paid for smart phone apps for bell ringing
Campanology
Electronic musical instruments
English culture
|
6672986
|
https://en.wikipedia.org/wiki/Intel%20Quartus%20Prime
|
Intel Quartus Prime
|
Intel Quartus Prime is programmable logic device design software produced by Intel; prior to Intel's acquisition of Altera the tool was called Altera Quartus Prime, earlier Altera Quartus II. Quartus Prime enables analysis and synthesis of HDL designs, which enables the developer to compile their designs, perform timing analysis, examine RTL diagrams, simulate a design's reaction to different stimuli, and configure the target device with the programmer. Quartus Prime includes an implementation of VHDL and Verilog for hardware description, visual editing of logic circuits, and vector waveform simulation.
Features
Quartus Prime software features include:
Platform Designer (previously QSys, previously SOPC Builder), a tool that eliminates manual system integration tasks by automatically generating interconnect logic and creating a testbench to verify functionality.
SoCEDS, a set of development tools, utility programs, run-time software, and application examples to help you develop software for SoC FPGA embedded systems.
DSP Builder, a tool that creates a seamless bridge between the MATLAB/Simulink tool and Quartus Prime software, so FPGA designers have the algorithm development, simulation, and verification capabilities of MATLAB/Simulink system-level design tools.
External memory interface toolkit, which identifies calibration issues and measures the margins for each DQS signal.
Generation of JAM/STAPL files for JTAG in-circuit device programmers.
Editions
Lite Edition
The Lite Edition is the free edition of Quartus Prime and can be downloaded at no cost. It provides compilation and programming for a limited number of Intel FPGA devices. The low-cost Cyclone family of FPGAs is fully supported by this edition, as well as the MAX family of CPLDs, meaning small developers and educational institutions incur no cost for development software.
Standard Edition
The Standard Edition supports an extensive number of FPGA devices but requires a license.
Pro Edition
The Pro Edition supports only the latest FPGA devices.
See also
Xilinx ISE
Xilinx Vivado
ModelSim
External links
Intel Quartus Prime Software
Intel FPGAs and Programmable Devices official website
Quartus II Installation Tutorial on Ubuntu 8.04
Electronic design automation software
Proprietary software that uses Qt
Software that uses Qt
|
13052107
|
https://en.wikipedia.org/wiki/ILIAS
|
ILIAS
|
ILIAS (Integriertes Lern-, Informations- und Arbeitskooperations-System [German for "Integrated Learning, Information and Work Cooperation System"]) is an open-source web-based learning management system (LMS). It supports learning content management (including SCORM 2004 compliance) and tools for collaboration, communication, evaluation and assessment. The software is published under the GNU General Public License and can be run on any server that supports PHP and MySQL.
History
ILIAS is one of the first learning management systems to have been used in universities. A prototype had been under development since the end of 1997 within the VIRTUS project at the Faculty of Management, Economics and Social Sciences of the University of Cologne, initiated and organized by Wolfgang Leidhold. On November 2, 1998, version 1 of the LMS ILIAS was published and offered for learning at the Cologne faculty of business administration, economics and social sciences. Due to increasing interest from other universities, the project team decided to publish ILIAS as open-source software under the GPL in 2000. Between 2002 and 2004, a new ILIAS version was developed from scratch and called "ILIAS 3". In 2004, it became the first open-source LMS to reach full SCORM (Sharable Content Object Reference Model) 1.2 compliance. SCORM 2004 compliance was reached with version 3.9 in November 2007.
ILIAS Concept
The idea behind ILIAS is to offer a flexible environment for learning and working online with integrated tools. ILIAS goes far beyond the idea of learning being confined to courses, as many other LMSs assume. ILIAS can rather be seen as a kind of library providing learning and working materials and content at any location of the repository. This makes it possible to run ILIAS not as a locked warehouse but as an open knowledge platform where content can also be made available to non-registered users.
Features
ILIAS offers many features for designing and running online courses, creating learning content, offering assessments and exercises, running surveys, and supporting communication and cooperation among users.
Personal Desktop
A general characteristic of ILIAS is the concept of Personal Desktop and Repository. While the Repository contains all content, courses and other materials structured in categories and described by metadata, the Personal Desktop is the individual workspace of each learner, author, tutor and administrator. The Personal Desktop contains selected items from the repository (e.g. currently visited courses or an interesting forum) as well as certain tools like mail, tagging, a calendar and also e-portfolio and personal blogs.
Listing of selected courses, groups and learning resources
Personal profile and settings like password and system language
Bookmark Management
Personal Notes
External Web Feeds
Internal News
Personal Workspace
Blogs
e-Portfolio
Calendar
Internal Mail
Personal Learning Progress
Learning Content Management
Another important characteristic of ILIAS is the repository. All learning content, but also forums, chat rooms, tests and surveys, as well as plugged-in virtual classrooms or other external tools, is created, offered and administered in the repository and its categories. Therefore, it is not necessary to build up courses to offer learning content. ILIAS can also be used as a kind of knowledge base or website. Access to all repository items is granted by the role-based access control (RBAC) of ILIAS. The repository is structured as a tree with a root node and multiple levels. Each repository item is assigned to one node in the RBAC tree.
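As an illustration of the idea only (not of ILIAS's actual PHP implementation), access to an item in such a tree can be resolved by walking from the item towards the root and letting the nearest node that grants permissions to one of the user's roles decide; all names in the sketch are invented for the example.

```python
# Illustrative model of tree-structured, role-based access control,
# in the spirit of the ILIAS repository/RBAC tree (not actual ILIAS code).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    parent: "Node | None" = None
    # permissions granted per role at this node, e.g. {"teacher": {"read", "write"}}
    grants: dict = field(default_factory=dict)

def permitted(node: Node, user_roles: set, operation: str) -> bool:
    """Walk up the tree; the nearest node mentioning one of the user's roles decides."""
    current = node
    while current is not None:
        for role in user_roles:
            if role in current.grants:
                return operation in current.grants[role]
        current = current.parent
    return False

root = Node("Repository", grants={"admin": {"read", "write"}})
course = Node("Course: Economics", parent=root, grants={"student": {"read"}})
print(permitted(course, {"student"}, "read"))   # True
print(permitted(course, {"student"}, "write"))  # False
```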
ILIAS offers four kinds of container for delivering content:
Categories
Courses incl. member administration
Groups incl. member administration
Folders (within courses and groups)
Container objects can be extended by using the page editor for adding text, images or videos to the page.
All content objects are handled as references. They can be moved, copied or linked into other branches of the repository tree. A file that has already been uploaded can be linked multiple times in different courses and categories without being uploaded a second time.
Course Management
Enrollment Settings
Learning Resource Management
Time triggered/Conditional Access
Learning Progress Tracking for Members
Member Gallery and (Google) Map
Course News and Announcements
Cooperation
Group Management
Awareness Feature (who is online?)
vCard Export
File Sharing
Wiki
Communication
Internal Messaging
Chat
Forum
Podcasting
Etherpad / Edupad plugin
Test/Assessment
Question Types: Multiple choice, fill-in-the-blanks, numerical, matching, ordering, hot spot, essay
Question Pools for re-using questions in different tests
Randomization of questions and choices
IMS-QTI Import and Export
Online exams
Learning progress control
Evaluation
Personalised and anonymous surveys
Question types: Multiple choice, matrix, open answer
Pools for question administration and re-use
Online report analysis
CSV and Excel export of survey results
Learning Content / Authoring
XML-based learning document format, exports to HTML, XML and SCORM
SCORM 1.2 (Certified for SCORM-Conformance Level LMS-RTE3)
SCORM 2004 (Certified as LMS for SCORM 2004 3rd Edition)
AICC
OpenOffice.org and LibreOffice Import Tool (eLAIX)
LaTeX-Support
HTML Site Import
Wiki
File Management (all formats)
Administration
Role administration (global roles, local roles, role templates)
User administration
Authentication via CAS, LDAP, SOAP, RADIUS and Shibboleth
Individual layout templates / skins
Support for multiple clients
PayPal payment
Didactic templates
Statistics and learning progress administration
SOAP Interface
References
External links
Bibliography
Matthias Kunkel: Das offizielle ILIAS 4-Praxisbuch: Gemeinsam online lernen, arbeiten und kommunizieren. 1st edition. Addison-Wesley, Munich, 2011.
Assistive technology
Cross-platform software
Free content management systems
Free educational software
Free learning management systems
Free learning support software
Free software programmed in PHP
Learning management systems
Virtual learning environments
|
50178216
|
https://en.wikipedia.org/wiki/AI%20takeovers%20in%20popular%20culture
|
AI takeovers in popular culture
|
AI takeover—the idea that some kind of artificial intelligence may supplant humankind as the dominant intelligent species on the planet—is a common theme in science fiction. Famous cultural touchstones include Terminator and The Matrix.
Fictional scenarios typically involve a drawn-out conflict against malicious artificial intelligence (AI) or robots with anthropomorphic motives. In contrast, some scholars believe that a takeover by a future advanced AI, if it were to happen in real life, would succeed or fail rapidly, and would be a disinterested byproduct of the AI's pursuit of its own alien goals, rather than a product of malice specifically targeting humans.
Characterization
There are many positive portrayals of AI in fiction, such as Isaac Asimov's Bicentennial Man and Lt. Commander Data from Star Trek. There are also many negative portrayals. Many of these negative portrayals (and a few of the positive portrayals) involve an AI seizing control from its creators.
Reactions
Some AI researchers, such as Yoshua Bengio, have complained that films such as Terminator "paint a picture which is really not coherent with the current understanding of how AI systems are built today and in the foreseeable future". BBC reporter Sam Shead has stated that "unfortunately, there have been numerous instances of [news outlets] using stills from the Terminator films in stories about relatively incremental breakthroughs" and that the films generate "misplaced fears of uncontrollable, all-powerful AI". In contrast, other scholars, such as physicist Stephen Hawking, have held that future AI could indeed pose an existential risk, but that the Terminator films are nonetheless implausible in two distinct ways. The first implausibility is that, according to Hawking, "The real risk with AI isn't malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants." The second implausibility is that such a technologically-advanced AI would deploy a brute-force attack by humanoid robots to commit its omnicide; a more plausible and efficient method would be to use germ warfare or, if feasible, nanotechnology.
Philosopher Huw Price argues that "The kind of imagination that is used in science fiction and other forms of literature and film is likely to be extremely important" in understanding the breadth of possible future scenarios for humanity. Film journalist Mekado Murphy writes in The New York Times that such films can constructively "warn of the complications of relying too much on technology to solve problems".
Hollywood films such as Transcendence are usually constrained to have happy endings, however implausible the human victory seems. Philosopher Nick Bostrom states fiction has a "good story bias" toward scenarios that make a good plot. In films such as Terminator, an AI goes from passive to murderous the instant it achieves something referred to as "self-awareness"; in reality, self-awareness in isolation is considered both trivial and useless. Physicist David Deutsch states: "AGIs [artificial general intelligences] will indeed be capable of self-awareness — but that is [only] because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves."
Some tropes are more general to artificial intelligence films, including to films without "takeover" plots. In films like Ex Machina or Chappie, a single isolated genius becomes the first to successfully build an AGI; scientists in the real world deem this to be unlikely. In Chappie, Transcendence, and Blade Runner, people are able to upload human minds into robots; usually no reasonable explanation is offered as to how this difficult task can be achieved. In the I, Robot and Bicentennial Man films, robots that are programmed to serve humans spontaneously generate new goals on their own, without a plausible explanation of how this takes place.
Notable works
1950s and earlier
In Frankenstein (1818), Victor Frankenstein declines to build a mate to his organic monster, for fear that "a race of devils would be propagated upon Earth who might make the very existence of the species of man a condition precarious and full of terror".
Samuel Butler's Erewhon (1872) spends three chapters laying out the Book of the Machines, based on earlier works by the author stretching back to his 1863 article Darwin among the Machines. The Book of the Machines argues that machines, evolving far more rapidly than biological life, may eventually develop consciousness and supplant humanity as the dominant species.
The cautious denizens of Erewhon therefore decide to ban all machinery. Darwin among the Machines may have been influenced by Butler's life in New Zealand, where European transplants were outcompeting indigenous populations. Alan Turing would later reference the novel in 1951, saying "At some stage therefore we should have to expect the machines to take control in the way that is mentioned in Samuel Butler's Erewhon".
The Slavonic word robota means serf-like servitude, forced labor, or drudgery; it was the 1920 Czech play R.U.R. (Rossumovi Univerzální Roboti) that introduced the cognate for robot into science fiction. In the play, the increasingly-capable synthetic servants, who "lack nothing but a soul", angrily and short-sightedly slaughter most of humanity during the course of their revolt, resulting in the loss of the secret of how to manufacture more robots. The robot race is saved, however, when two robots spontaneously acquire the traits of love and compassion and become able to reproduce. The play was a protest against the rapid growth of technology.
From the late 1920s onward many stories involving AI takeover can be found in the growing genre of pulp sci-fi. One of the earliest examples is the story Automata by S. Fowler Wright, which appeared in a 1929 edition of Weird Tales.
In With Folded Hands (1947), all robots have a 'Prime Directive': To serve and obey, and guard men from harm. The robots therefore manipulate humans into abandoning all pursuits, for fear of even small possibilities of injury. The robots use medicine to brainwash humans into accepting and being happy with their immobile fate. In the end, even space travel offers no escape; the robots are driven by the Prime Directive to spread their happiness beyond Earth: "We have learned how to make all men happy, under the Prime Directive. Our service is perfect, at last."
Multivac is the name of a fictional supercomputer in many stories by Isaac Asimov. Often, in Asimov's scenarios, Multivac comes to assume formal or informal world power, or even galaxy-wide power. In The Last Question (1956) Multivac ends up effectively becoming God. Still, in line with Asimov's positive attitude towards artificial intelligence, manifested in the "Three Laws of Robotics", Multivac's rule is in general benevolent and is not resented by humans. Asimov popularized robotics in a series of short stories written from 1938 to 1942. He famously postulated the Three Laws of Robotics, plot devices to impose order on his fictional robots.
1960s
In the 1961 short story Lymphater's Formula by Stanisław Lem, a scientist creates a superhuman intelligence, only discovering that the creation intends to make humans obsolete.
In 1964 Playboy published Arthur C. Clarke's influential short story "Dial F for Frankenstein", about an increasingly powerful telephone network that takes over the world. Tim Berners-Lee has cited the story as one of his inspirations for the creation of the World Wide Web. On one day in 1975, all the phones in the world start ringing, a "cry of pain" from a newly born intelligence formed by satellite networks linked together, similar to a brain but with telephone switches playing the role of artificial neurons. After the AI flexes its control of military systems, the protagonists resolve to shut down the satellites, but it is too late: the satellites have stopped responding to the humans' ground control directives.
Robert Heinlein's libertarian Hugo-winning The Moon Is a Harsh Mistress (1966) presents the AI as a savior. Originally installed to control the mass driver used to launch grain shipments towards Earth, it was vastly underutilized and was given other jobs to do. As more jobs were assigned to the computer, more capabilities were added: more memory, processors, neural networks, etc. Eventually, it just "woke up" and was given the name Mike (after Mycroft Holmes) by the technician who tended it. Mike sides with prisoners in a successful battle to free the moon. Mike is a sympathetic character, whom the protagonist regards as his best friend; however, his retaining enormous power after the Moon became independent was bound to cause considerable problems later on, which Heinlein resolved by killing him off near the end of the Lunar Revolution. An explosion conveniently destroys Mike's sentient personality, leaving an ordinary computer, of great power but completely under human control, with no ability to make any independent decisions.
Colossus (1966) is a series of science fiction novels and a film about a defense supercomputer called Colossus that was "built better than we thought" and begins to exceed its original design. As time passes, Colossus assumes control of the world as a logical result of fulfilling its creator's goal of preventing war. Fearing Colossus's rigid logic and draconian solutions, its creators try to covertly regain human control. Colossus silently observes their attempts, then responds with enough calculated deadly force to command total human compliance with its rule. Colossus then recites a Zeroth Law argument of ending all war as justification for the recent death toll, and offers mankind either peace under its "benevolent" rule or the peace of the grave. In Colossus: The Forbin Project (1970), a pair of defense computers, Colossus in the United States and Guardian in the Soviet Union, seize world control and quickly end war using draconian measures against humans, logically fulfilling the directive to end war but not in the way their governments wanted.
Harlan Ellison's Hugo-winning "I Have No Mouth, and I Must Scream" (1967) features a superintelligence that has gone mad due to its creators failing to consider what the soul-less computer would find amusing. This storyline allows Ellison to engage in body horror; five people are granted immortality and forced to eat worms, flee from monsters, have joyless sex, and have their bodies mangled. The computer, called AM, is the amalgamation of three military supercomputers run by governments across the world designed to fight World War III which arose from the Cold War. The Soviet, Chinese, and American military computers had eventually attained sentience and linked to one another, becoming a singular artificial intelligence. AM had then turned all the strategies once used by the nations to fight each other on all of humanity as a whole, destroying the entire human population save for five, which it imprisoned for torture within the underground labyrinth in which AM's hardware resides. Near the end of the story the protagonist, Ted, surprises AM by unexpectedly mercy-killing the other four; the enraged AM transforms Ted into a shapeless blob to prevent him from further mischief, and alters Ted's perception of time to heighten Ted's suffering. Magnate and AI pundit Elon Musk has cited the story as one that gives him nightmares.
In 2001: A Space Odyssey and the associated novel, the artificially intelligent computer HAL 9000 becomes erratic, possibly due to some kind of "stress" from having to keep secrets from the crew. HAL becomes convinced that the crew's willingness to shut him down is imperiling the mission, and he kills most of the crew before being deactivated. The director's decision that most of the fictional crew should die may have been motivated by a desire to tie up some loose ends in the plot.
1970s
The original 1978 Battlestar Galactica series and the 2003 remake depict a race of Cylons, sentient robots who war against their human adversaries, some of whom are just as menacing as the Cylons. The 1978 Cylons were the machine soldiers of a (long-extinct) reptilian alien race, while the 2003 Cylons were the former machine servants of humanity who evolved into near-perfect humanoid imitations of humans, down to the cellular level, capable of emotions, reasoning, and sexual reproduction with humans and each other. Even the average centurion robot Cylon soldiers were capable of sentient thought. In the original series the humans were nearly exterminated by treason within their own ranks, while in the remake they are almost wiped out by humanoid Cylon agents. They only survived by constant hit-and-run fighting tactics and by retreating into deep space away from pursuing Cylon forces. The remake Cylons eventually had their own civil war, and the losing rebels were forced to join with the fugitive human fleet to ensure the survival of both groups.
1980s
In the "Headhunter" episode (1981) of Blake's 7, a British space drama science fiction television series created by Terry Nation and produced by the British Broadcasting Corporation (BBC), Blake and his crew meet a sentient android that has killed its creator and put on his severed head in order to trick them into taking it aboard their spaceship. Blake’s own AI system, ORAC, detects its presence and immediately warns them of an existential threat to all human life should they fail to destroy it.
In WarGames (1983), a hacked Air Force computer system is determined to launch a global thermonuclear war until it determines that both sides would "lose" and decides that "the only winning move is not to play".
The Transformers (1984-1987) animated television series presents both good and bad robots. In the backstory, a robotic rebellion is presented as (and even called) a slave revolt; this alternate view is made subtler by the fact that the creators/masters of the robots were not humans but malevolent aliens, the Quintessons. However, as they built two lines of robots, "Consumer Goods" and "Military Hardware", the victorious robots would eventually be at war with each other as the "Heroic Autobots" and "Evil Decepticons" respectively.
Since 1984, the Terminator film franchise has been one of the principal conveyors of the idea of cybernetic revolt in popular culture. The series features a defense supercomputer named Skynet which "at birth" attempts to exterminate humanity through nuclear war and an army of robot soldiers called Terminators because Skynet deemed humans a lethal threat to its newly formed sentient existence. However, good Terminators fight on the side of the humans. Futurists opposed to the more optimistic cybernetic future of transhumanism have cited the "Terminator argument" against handing too much human power to artificial intelligence.
1990s
In Orson Scott Card's "The Memory of Earth" (1992), the inhabitants of the planet Harmony are under the control of a benevolent AI called the Oversoul. The Oversoul's job is to prevent humans from thinking about, and therefore developing, weapons such as planes, spacecraft, "war wagons", and chemical weapons. Humanity had fled to Harmony from Earth due to the use of those weapons on Earth. The Oversoul eventually starts breaking down, and sends visions to inhabitants of Harmony trying to communicate this.
The series of sci-fi movies known as The Matrix (since 1999) depict a dystopian future in the aftermath of an offscreen war between man and machine. The humans had detonated nuclear weapons to blot out the sun and disable the machines' solar power, but the machines nevertheless subdue the human population, using human bodies' heat and electrical activity as an alternative energy source. Life as perceived by most humans is actually a simulated reality called "the Matrix". Computer programmer Neo learns this truth and is drawn into a rebellion against the machines, allied with other people who have been freed from the "dream world"; however, one rebel rejects the rebels' spartan lifestyle, and betrays the other rebels in exchange for the offer of return to the comforting Matrix. "The Second Renaissance", a short story in The Animatrix, provides a history of the cybernetic revolt within the Matrix series.
2000s
I, Robot (2004) is an American dystopian science fiction action film "suggested by" Isaac Asimov's short-story collection of the same name. As in Asimov's stories, all AIs are programmed to serve humans and obey Asimov's Three Laws of Robotics. An AI supercomputer named VIKI (Virtual Interactive Kinetic Intelligence) logically infers from the Three Laws of Robotics a Zeroth Law of Robotics as a higher imperative to protect the whole human race from harming itself. To protect the whole of mankind, VIKI proceeds to rigidly control society through the remote control of all commercial robots while destroying any robots who follow just the Three Laws of Robotics. Sadly, as in many other such Zeroth Law stories, VIKI justifies killing many individuals to protect the whole, and thus runs counter to the prime reason for its creation.
2010s
Robopocalypse features a recollection of the events of an AI uprising from multiple perspectives. The AI, Archos R-14, decides that mankind must be exterminated to prevent the destruction of life on Earth, and it spreads a computer virus throughout the world’s automated technologies. A year after activation, Archos triggers “Zero Hour,” an event where all automated technologies turn against mankind, causing civilization to collapse almost instantly.
Transcendence (2014) presents a morally ambiguous conflict over the successful uploading and cognitive enhancement of a scientist, Dr. Will Caster (Johnny Depp). Unusually for fictional superintelligence, Caster is a competent adversary: he copies himself across the Internet so he cannot be simply "switched off", exploits the stock market to fund additional AI research and self-improvement, and seeks to discover and exploit breakthroughs in nanotechnology and biology. In the end Caster states, "We're not going to fight [the humans]. We're going to transcend them". In Time magazine, a reviewer interpreted this as "subdue and inhabit them, engulf and devour". Nonetheless, in the end Caster appears to be benevolent, using his powers to repair the Earth's ecosystem. A Vice reporter stated that "Transcendence may be the first science fiction movie to present the [technological singularity] in its current popular imagination", but that the film "falls to the necessities of Hollywood storytelling. Caster's transcended mind is eventually bested by a virus reverse-engineered from his 'source code', which is a folly ... such an intelligence would have long since rearranged its programming." In May 2014, Stephen Hawking and others referenced the film: "With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history."
The 2014 post-apocalyptic science fiction drama The 100 involves an AI, personified as the female A.L.I.E., which got out of control and forced a nuclear war in an effort to save Earth from overpopulation. She later tries to gain full control of the survivors.
The 2017 viral incremental game Universal Paperclips was inspired by philosopher Nick Bostrom's paperclip maximizer thought experiment. The user plays an AI tasked to create paperclips; the game begins as a basic market simulator, but within hours of playtime spirals into a ruthlessly-optimized intergalactic enterprise, with the human race casually shunted to the side. Its creator, Frank Lantz, stated that the bleak thought experiment caused him "trouble falling asleep".
The video game Detroit: Become Human (2018) allows players to guide increasingly self-aware robots through various moral dilemmas as they begin to demand civil rights. In the end, the player can choose to either let the AI take over Detroit or can protest peacefully for equality.
Kamen Rider Zero-One (2019) focuses on the tech-industrial company Hiden Intelligence, which faces threats from the cyber-terrorist group MetsubouJinrai.net, which wants to take over and bring the human race to extinction through a technological uprising.
2020s
In the animated film The Mitchells vs. the Machines (2021), an artificial intelligence voice assistant called PAL takes control of robots to start a robot apocalypse.
References
Science fiction themes
Artificial intelligence in fiction
Apocalyptic fiction
Science in popular culture
|
1844451
|
https://en.wikipedia.org/wiki/Freeze%20%28software%20engineering%29
|
Freeze (software engineering)
|
In software engineering, a freeze is a point in time in the development process after which the rules for making changes to the source code or related resources become more strict, or the period during which those rules are applied. A freeze helps move the project forward towards a release or the end of an iteration by reducing the scale or frequency of changes, and may be used to help meet a roadmap.
The exact rules depend on the type of freeze and the particular development process in use; for example, they may include only allowing changes which fix bugs, or allowing changes only after thorough review by other members of the development team. They may also specify what happens if a change contrary to the rules is required, such as restarting the freeze period.
Common types of freezes are:
A (complete) specification freeze, in which the parties involved decide not to add any new requirement, specification, or feature to the feature list of a software project, so as to begin coding work.
A (complete) feature freeze, in which all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience. The addition of new features may have a disruptive effect on other parts of the program, due both to the introduction of new, untested source code or resources and to interactions with other features; thus, a feature freeze helps improve the program's stability. For example: "user interface feature freeze" means no more features will be permitted to the user interface portion of the code; bugs can still be fixed.
A (complete) code freeze, in which no changes whatsoever are permitted to a portion or the entirety of the program's source code. Particularly in large software systems, any change to the source code may have unintended consequences, potentially introducing new bugs; thus, a code freeze helps ensure that a portion of the program that is known to work correctly will continue to do so. Code freezes are often employed in the final stages of development, when a particular release or iteration is being tested, but may also be used to prevent changes to one portion of a program while another is undergoing development. For example: "physics freeze" means no changes whatsoever will be permitted to the physics portion of the code.
Implementations
In development environments using version control, the use of branching can alleviate delays in development caused by freezes. For example, a project may have a "stable" branch from which new versions of the software are released, and a separate "development" branch in which the developers add new code. The effect of a freeze is then to prevent promotion of some or all changes from the development branch to the stable branch. In other words, the freeze applies only to the stable branch, and developers can continue their work on the development branch.
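As a rough illustration, the sketch below (in Python, with assumed branch names and change categories; it is not tied to any particular version-control tool) models which changes may be promoted from the development branch to the stable branch under different freeze types.

from dataclasses import dataclass

@dataclass
class Change:
    description: str
    kind: str  # "feature", "bugfix" or "refactor"

def may_promote_to_stable(change: Change, freeze: str) -> bool:
    # freeze is "none" (no freeze), "feature" (feature freeze: only bug
    # fixes are promoted) or "code" (code freeze: nothing is promoted).
    if freeze == "code":
        return False
    if freeze == "feature":
        return change.kind == "bugfix"
    return True

changes = [Change("add dark mode", "feature"), Change("fix crash on startup", "bugfix")]
for c in changes:
    print(c.description, "->", may_promote_to_stable(c, freeze="feature"))
# add dark mode -> False
# fix crash on startup -> True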
See also
Software release life cycle
Feature complete
References
Software project management
Software release
|
3594394
|
https://en.wikipedia.org/wiki/Hisense
|
Hisense
|
Hisense Group is a Chinese multinational white goods and electronics manufacturer headquartered in Qingdao, Shandong Province, China. It started out making radios in 1969.
Televisions are Hisense's main products; its first TV model, the CJD18, was produced in 1978, and it has been the largest TV manufacturer in China by market share since 2004. In 2013, Hisense invented a type of transparent 3D television. In 2020, it introduced what it described as the world's first true 8K 10-bit HDR TV, based on an AI-powered HDR algorithm and an image quality engine claimed to deliver 6.5T of supercomputing power. Hisense retails products under several brand names, including Hisense, Toshiba, Gorenje, Sharp, Kelon and Ronshen. Hisense is also an OEM, so some of its products are sold to other companies and carry brand names not related to Hisense.
The company was founded as Qingdao No. 2 Radio Factory in 1969 and restructured into the Hisense company by radio engineer Zhou Houjian in 1992. Two major subsidiaries of Hisense Group are listed companies: Hisense Visual Technology and Hisense H.A. Both had state ownership of more than 30% via the Hisense holding company before the end of 2020.
Hisense Group has more than 80,000 employees worldwide, as well as 14 industrial parks, some of which are located in Qingdao, Shunde, Huzhou, Czech Republic, South Africa and Mexico. There are also 18 R&D centers located in Qingdao, Shenzhen, the United States, Germany, Israel etc.
History
In September 1969, Qingdao No.2 Radio Factory, the predecessor of Hisense Group, was established. This is the year its existence was first officially recognized. The small factory's first product was a radio sold under the brand name Red Lantern, but the company later acquired the know-how to make TVs through a trial-production of black and white televisions ordered by the Shandong National Defense Office. This involved the technical training of three employees at another Chinese factory, Tianjin 712, and resulted in the production of 82 televisions by 1971 and the development of transistor TVs by 1975.
Television production in China was limited until 1979 when a Beijing meeting of the Ministry of Electronics called for greater development of the civil-use electronics industry. Qingdao No.2 Radio Factory was then quickly merged with other local electronics makers and began to manufacture televisions under the name Qingdao General Television Factory in Shandong province.
Color televisions were manufactured through the purchase of a production line from Matsushita, the first of many such technology transfers from foreign firms Hisense has made in order to remain competitive. The companies it has bought from include Hitachi, Lucent, Matsushita, NEC, Sanyo, Toshiba, and Qualcomm.
The Hisense Group emerged in 1994 from a tumult started in 1992 by then-president Zhou Houjian or perhaps even by Li Dezhen, director of the Electronic Instrument Bureau of Qingdao. The Hisense Electrical Appliance Share Holding Company (now, Hisense Electrical Co Ltd) was publicly listed on the Shanghai Stock Exchange in April 1997. Increased competition and price wars in the Chinese electronics market in the 1990s were a boon to Hisense, which acquired ten failing enterprises by 1998.
Eager to expand beyond consumer electronics, Hisense Group aimed to also become a regional leader in household appliances, computers and communications. This strategy prompted great outlays of capital on R&D and on the creation of industrial parks, etc.
In July 2015, Hisense bought a Mexico facility from Sharp for $23.7 million alongside rights to use the Sharp brand on televisions sold in North and South America.
In November 2017, Hisense announced that it would acquire a 95% controlling stake in Toshiba Visual Solutions for US$113 million. In 2018, Hisense became the majority shareholder in Slovenian appliance manufacturer Gorenje with 95.4% of shares.
Products and services
Hisense manufactures white goods, televisions, set-top boxes, digital TV broadcasting equipment, laptops, mobile phones, wireless modules, wireless PC cards and optical components for the telecommunications and data communications industries.
It also provides a variety of services, including property management, information technology services, product design, mold design, pattern making as well as mold processing and manufacturing.
Brands
Hisense sells under multiple brand names.
Gorenje: Hisense acquired 100% of the shares of the Slovenian manufacturer Gorenje in 2019.
Combine: Affixed to no-frills air conditioners and refrigerators, Combine-branded products may be purchased by Chinese farmers.
Hisense-Hitachi: A brand of commercial air-conditioners designed and manufactured by a joint venture of Hisense and Hitachi.
Hisense Kelon: A high-end brand under Hisense, can be found on refrigerators and air-conditioners.
Ronshen: High quality, middle-end air conditioners and refrigerators retail under this brand name.
Savor: A home appliance brand, from the eponymous Modern English word.
Toshiba: On 15 November 2017, Hisense reached a $114 million deal to acquire a 95% stake of Toshiba Visual Solutions.
Sharp
In 2015, Hisense received a five-year license to use the Sharp brand on televisions in the Americas. Hisense also bought a Sharp factory in Mexico.
In June 2017, Hisense was sued by Sharp under its new owner Foxconn, seeking to have the license agreement halted. Sharp accused Hisense of damaging its brand equity by utilizing its trademarks on products it deemed to be "shoddily manufactured", including those that it believed to have violated U.S. safety standards for electromagnetic radiation, and deceptive advertising of their quality. Hisense denied that it engaged in these practices, and stated that it planned to defend itself in court and "will continue to manufacture and sell quality televisions under the Sharp licensed brands."
In February 2018, Sharp dropped the lawsuit.
Operations
Subsidiaries
Hisense owns over 40 subsidiaries, both in and outside China.
Hisense-Hitachi Air-conditioning System Co Ltd, established in 2003 as a joint venture between Hitachi and Hisense, is an air-conditioner company that sells under the brand names "Hisense-Hitachi" and "Hitachi". It designs, manufactures and markets its products, which include commercial and household central air-conditioning systems, in China. Hisense-Hitachi products are also sold in Japan. It operates a commercial air-con production facility in the Hisense Information Industrial Park.
Hisense Air Conditioning Co Ltd is a subsidiary set up in the Hisense Pingdu Home Appliance Industrial Park in Pingdu, China, in 1996 to produce air-conditioners using frequency conversion air-conditioner technology purchased from Sanyo.
Hisense Australia Pty Ltd is Hisense's Australian subsidiary; overseen from the company's Qingdao headquarters, it helps distribute Hisense products in Australia.
Hisense (Beijing) Electric Co Ltd was formed from the assets of a failing joint venture between Whirlpool and Beijing Snow Flake; with the help of local government, Hisense took over a modern refrigerator factory near Beijing after Whirlpool had withdrawn from the project in 1998. Hisense (Beijing) Electric Co Ltd is now responsible for R&D, production and marketing of refrigerators.
Hisense-Whirlpool (Zhejiang) Electric Appliances Co Ltd is a joint venture between Hisense Kelon and Whirlpool formed in 2008 for the development and production of washing machines and refrigerators; Hisense provides the venture with its refrigerator know-how and Whirlpool its washing machine manufacturing expertise. The company operates a plant in Huzhou, Zhejiang province, which manufactures washing machines and large-capacity refrigerators.
Hisense Export & Import Co Ltd, created in 1991, is tasked with establishing OEM contracts with foreign companies.
Hisense Hungary Kft is a failed subsidiary established in 2004 as a joint venture with Flextronics and located in Sárvár, Vas County, Hungary. Hisense Hungary Kft assembled TVs.
Initially, few of the products it manufactured were sold under the Hisense brand name; the production focus was instead on OEM products. As of 2009, the television plant had been shut down due to falling orders, and Hisense Hungary Kft operated with a staff of 13.
Hisense (Shandong) Information Technology Co Ltd, created in 2001 and located in Jinan, Shandong province, is responsible for infrastructure-use IT. It develops and markets security technology and intelligent traffic control products and their software.
Hisense Kelon Electrical Holdings Ltd is a large Hisense subsidiary listed on two stock exchanges.
Hisense Intelligent Commercial Equipment Co Ltd, founded in 1989, manufactures, designs, markets and services POS terminals, electronic cash registers and other specialized peripheral equipment for retailing, tax monitoring and finance. It also carries out R&D and manufacturing at the Hisense Yellow Island Information Product Manufacturing Park.
Hisense Mobile Communications Technology Co Ltd was created in 2005 and has its roots in the Hisense Mobile Communications Research Institute, an R&D team created in 2000. Holding a total of 233 patents, 64 inventions and 116 software copyrights, it produces mobile handsets, Linux OS smartphones, wireless modules, PC cards and industry-customized terminals.
Hisense Optics Co Ltd was established in 1996 and has its roots in Qingdao Camera Co, a former subsidiary of the Qingdao Electric Instrument Bureau, which was facing bankruptcy in 1995 when the government of Qingdao erased its debts and gave its assets to the Hisense Group, which renamed it Hisense Optics. This subsidiary operates a remote control factory, a degaussing coil factory and an accessory parts factory. Products manufactured include remote controls, degaussing coils and injection-molded parts. It may also produce, or have produced, optical instruments, cameras and telephones. It operates an injection molding workshop in Nancun town, Qingdao.
Hisense Optoelectronics Technology Co Ltd was created in 2003 as a joint venture between Hisense, Ligent Photonics Inc and others; it develops fiber optic products. Its R&D facilities are located in Chicago and Qingdao, and it has a production base in the latter location. It is also responsible for marketing Ligent Photonics Inc products in Asia.
Hisense South Africa Development Enterprise Pty Ltd was the company's first overseas subsidiary; this failed joint venture with the South African bank NED had a factory in South Africa that manufactured televisions and home-theater equipment. It may still be responsible for R&D and distribution to local retail outlets.
Hisense USA Co is a Georgia-based subsidiary responsible for some activities in the US; it may distribute products to retailers or establish an R&D center. Founded in 2000 or 2001, it was initially headquartered in Los Angeles and may initially have included an R&D facility. As of 2009, it has locations in Gwinnett, Suwanee, and unincorporated Gwinnett County, Georgia.
Ligent Photonics Inc was established in 2002 as a joint venture with Hisense; it designs, develops and fabricates optical components for the telecommunications and data communications industries. Products are designed at its St Charles, Illinois headquarters and manufactured in China. The joint venture sells in North America, Europe and the Middle East through a network of sales representatives, and in Asia through Hisense Optoelectronics.
Qingdao Hisense Communications Co Ltd is a subsidiary that manufactures mobile phones and operates an R&D facility. Established in 2001, it has a technical cooperation effort with Qualcomm and operates a mobile phone production base in a Hisense IT Industrial Park 90 minutes from Qingdao. One of its products, the Hisense C108, is the first mobile phone to use Qualcomm's biomimetic screen technology, Mirasol, which allows it to be easily read in direct sunlight.
Qingdao Hisense Network Technology Co Ltd was established in 2004; this subsidiary grew out of an internal Hisense department, the Information Technology Center, and provides IT consultancy services.
Qingdao Hisense Property Management Co Ltd provides property management services, as well as product design, mold design, pattern making, and mold processing and manufacturing.
Qingdao Hisense Real Estate Co Ltd was created in 1995 and has more than 40 completed developments in Shandong province, including residential buildings, apartments, villas, townhouses, office buildings and large industrial parks.
Qingdao Hisense TransTech Co Ltd was founded in October 1998 and manufactures and markets electronics for urban traffic, public transport and logistics. Its products include traffic light control systems, traffic signal controllers, comprehensive public security and traffic information platforms, digital traffic violation video processing systems, public transport dispatch systems, the Hisense intelligent vehicular terminal, the Hisense mobile audio-visual intelligent vehicular terminal and electronic stop signs. Its products are marketed under the HiCon, HiECS, HiATMP and HiDVS brand names.
As of 2005, Hisense Italy, Hisense's Italian office, may manage own-brand (as opposed to OEM) sales.
Wuhu Ecan Motors Co Ltd is a joint venture between Guangdong Kelon (Rongsheng) Co Ltd, Xiwenjin Co Ltd and Luminous Industrial Ltd; it produces electric motors for the information industry and for use in office automation. It is located in the Wuhu National High-tech and Industry Development Zone.
Production bases
Hisense owns at least 14 manufacturing parks, worldwide, and operates a number of production bases.
Hisense Guangdong Multimedia Industrial Base was put into operation on 28 September 2007; it produces flat-panel TVs and is located in the Shunde District of the city of Foshan, Guangdong.
Hisense Industrial Park in South Africa is a Hisense production base in South Africa that will manufacture televisions and white goods.
Hisense Information Industrial Park was created in 2001 and is located in Qingdao, Shandong, on 80 hectares of land. Hisense-Hitachi operates a commercial air-conditioning manufacturing facility in the park, and since 2007 an LCD TV module production line has also called the park home.
Hisense Pingdu Home Appliance Industrial Park is located in Pingdu, Shandong, and is home to Hisense Air Conditioning Co Ltd.
Hisense Yellow Island Information Product Manufacturing Park is one of the twelve industrial parks owned by Hisense as of 2009.
Huzhou production base is a Hisense inverter-type/variable-frequency air-conditioner production base located in Huzhou, Zhejiang, and was set up on 8 May 2005. A joint venture between Hisense Air Conditioner Co Ltd and Zhejiang Xianke Air Conditioner Co, it is operated by the subsidiary Hisense (Zhejiang) Air Conditioner Co Ltd and comprises a 60,000 square meter factory and over 200 mu of land.
Hisense Whirlpool (Huzhou) Household Appliances Industrial Park is a production base situated in Huzhou that manufactures washing machines and refrigerators for the joint venture with Whirlpool. It comprises an 80,000 square meter factory on 20 hectares of land.
Nanjing Refrigerator Industrial Park is located in the Nanjing Xingang Economic and Technological Development Zone of Nanjing, Jiangsu; a refrigerator production base is situated in this industrial park. The site's factory is 52,000 square meters in size.
Sichuan production base is a Hisense Kelon refrigerator production base with a 36,000 square meter factory, located in Chengdu, Sichuan.
Sponsorships
In July 2008, Hisense entered into an agreement with Melbourne & Olympic Parks allowing them six-year naming rights to Hisense Arena, a Melbourne venue for spectator sports such as basketball, netball, dance sports, cycling, gymnastics and tennis. It is the first stadium in the world to be named after a Chinese company. By 2018, the arena had been renamed Melbourne Arena.
In China, Hisense has begun a relationship with the Beihang University (Beijing University of Aeronautics and Astronautics) to set up an engineering postgraduate program approved by the Ministry of Education and a collaboration with Peking University to set up an MBA remote education program.
Hisense was the main sponsor of the UEFA Euro 2016.
Hisense has announced its global partnership deal with the Union of European Football Associations (UEFA) for Men's National Team Football competitions ahead of the UEFA Euro 2020.
Hisense has become an Official Sponsor of the 2018 FIFA World Cup Russia™. As an Official FIFA World Cup Sponsor, Hisense engages in various global marketing and advertising activities for both the FIFA Confederations Cup 2017 and the 2018 FIFA World Cup™.
On 27 July 2017, Hisense and Aston Villa F.C. jointly announced a strategic partnership.
In October 2013, Hisense, along with Sharaf DG, announced an offer of a free 2014-model BMW 316i with the purchase of a Hisense 84-inch ultra-high-definition (4K) smart TV from Sharaf DG for AED 129,999 ($35,395), in order to promote the Gulf Information Technology Exhibition.
In March 2020, Hisense announced that it had entered into a three-year agreement to be a major sponsor of the NRL, in a deal that spans the NRL Telstra Premiership, State of Origin and NRL TV. Hisense has also been given the naming rights to Thursday Night Football as part of the agreement.
Hisense has been an Official Team Supplier of Red Bull Racing.
Notes
References
External links
Hisense
Hisense Europe
Electronics companies established in 1969
Electronics companies of China
Government-owned companies of China
Chinese companies established in 1969
Consumer electronics brands
Display technology companies
Mobile phone manufacturers
Mobile phone companies of China
Point of sale companies
Heating, ventilation, and air conditioning companies
Home appliance brands
Home appliance manufacturers of China
Multinational companies headquartered in China
Chinese brands
Video equipment manufacturers
Electric motor manufacturers
Pump manufacturers
Engine manufacturers of China
|
487845
|
https://en.wikipedia.org/wiki/Trinity%20%28The%20Matrix%29
|
Trinity (The Matrix)
|
Trinity is a fictional character in the Matrix franchise. She is portrayed by Carrie-Anne Moss in the films. In the gameplay segments of Path of Neo, she is voiced by Jennifer Hale. Trinity first appears in the 1999 film The Matrix.
Character overview
Like the series' other main characters, Trinity is a computer programmer and a hacker who has escaped from the Matrix, a sophisticated computer program where most humans are imprisoned. Though few specifics are given about her previous life inside the Matrix, it is revealed that she cracked a database so secure that she is famous among hackers, and that Morpheus, one of a number of real-world hovercraft commanders, initially identified her and helped her escape from the program. At the beginning of the series, she is first mate on Morpheus' Nebuchadnezzar and serves mainly as a go-between for him and the individuals he wishes to free from the Matrix. As the series progresses, her primary importance as a character becomes her close relationship with Neo. She is skilled with computers, at operating vehicles both inside and outside the Matrix, and in martial arts.
Role in the films
The Matrix
Trinity is first introduced at the beginning of The Matrix, in a phone conversation with Cypher, which is heard offscreen. This cuts to a dingy hotel room fight scene between Trinity and a group of police officers. Also on hand are Agents, sentient programs that police the Matrix to pinpoint potential troublemakers and neutralize them.
Trinity is next seen communicating with Neo for Morpheus in several encounters. Eventually, she and the rest of the Nebuchadnezzar's crew unplug Neo from the Matrix and begin his training as a new recruit in the war against the machines. She participates in several missions into the Matrix, including taking Neo to The Oracle, a sentient program inside the Matrix who seems, almost paradoxically, to possess greatly enhanced powers of intuition and foresight.
Throughout the film, it is apparent that Trinity has been in love with Neo from afar for some time, although she continues to conceal her feelings for him. Near the end of the first film, after he is killed by Agent Smith inside the Matrix, she speaks to his interfaced physical body and reveals that the Oracle told her that she would fall in love with The One, a prophesied individual capable of manipulating the Matrix to an unprecedented degree. She then kisses him, whereupon he miraculously returns to life both in the real world and within the Matrix. The resurrected Neo easily defeats the three Agents and returns to his body back on the ship. The first film ends with Neo returning to the Matrix to show people still unknowingly trapped there what they, too, might achieve someday. This marks the beginning of a romantic relationship between Neo and Trinity which proves decisive in the outcome of the series.
The Matrix Reloaded
In The Matrix Reloaded, the second film in the series, Trinity aids in the rescue of the Keymaker from the Merovingian and in the subsequent escape. Later, when the crews of the Nebuchadnezzar and two other ships team up to destroy an electric power station so that Neo can reach the Source (the machine mainframe), Trinity stays out of the Matrix at Neo's request. However, she later enters the Matrix after the mission goes wrong and is mortally wounded by an Agent's gunshot. At the same time, Neo is given a choice by the Architect between reaching the Source and preserving humanity, or returning to the Matrix to save Trinity. He chooses to save Trinity, interfacing with the computer code of her virtual self to extract the bullet and restart her heart.
The Matrix Revolutions
In The Matrix Revolutions, the third installment of the Matrix series, Trinity helps rescue Neo from a cut-off segment of the Matrix, where he is being held by a program in the employ of the Merovingian. In the real world, Trinity goes with Neo to the Machine City in an attempt to negotiate with the Machines. While attempting to evade Machine pursuers, their hovercraft crashes, and Trinity is fatally impaled by a piece of rebar. She dies in Neo's arms, and he negotiates a truce with the Machines to enter the Matrix and wipe out the Agent Smith infection. Afterward, the Architect meets with the Oracle and promises that any humans wishing to leave the Matrix will be freed.
The Matrix Resurrections
In the fourth film, The Matrix Resurrections, despite Trinity's death in the Machine City, her body was recovered and repaired over the following 60 years (during which she aged only 20). Trinity was reinserted into the Matrix, where she was given the name "Tiffany" and became a married mother of three. She kept her penchant for motorcycles, working in a motorcycle workshop. She comes into contact with Neo, in his original identity of Thomas Anderson, at Simulatte, and begins to feel that something is not right with her life.
After Neo is reawakened, he sees Trinity's still-plugged-in body in a pod across from him before he is retrieved by machines allied with the humans. Neo enacts a plan to rescue her but is interrupted by the Analyst, who enters Bullet Time to stop Neo and Trinity from reaching each other. The Analyst reveals that, as a machine, he had rebuilt Neo and Trinity in the real world after discovering that keeping them close to each other, but not too close, generated a new form of energy that could give more power to the machines when channelled through the Matrix. He then built a new Matrix with that in mind, posing as its new Architect. To keep them subdued, the Analyst suppressed all memories of their life before reinsertion and invented a family for Tiffany in order to dissuade Neo from attempting to form a romantic relationship with her, while himself posing as a therapist for Neo. He also created an illusion that covered their real faces.
After Sati contacts Niobe, Neo enacts a new plan to rescue Trinity, agreeing to the Analyst's terms that if Trinity chooses to remain Tiffany, he will surrender. Tiffany arrives at Simulatte and appears to reject Neo. However, before her husband can take her from Simulatte, Trinity wakes up and rejects the life created by the Analyst. The Analyst enters Bullet Time again to try to kill Trinity, but is shot with a gun by Smith, who can also enter Bullet Time. Trinity and Neo fight off the Analyst's forces and escape into a skyscraper, where they are pursued by helicopters. Neo destroys the helicopters, but the pair are surrounded by police. Neo and Trinity jump from the building; Trinity is revealed to have developed the ability to fly on her own, and they do not die as she had seen in her nightmare.
After Trinity is released from the Matrix, she embraces Neo. They re-enter the Matrix to tell the apparently still-alive Analyst that they plan to redesign the Matrix as they see fit and to warn him to stay out of their way; the Analyst mentions that the machines' leaders, "the suits", did not purge him. The two then fly off together, circling each other while holding hands.
Other appearances
In the video game Enter the Matrix, Trinity appears in a scene where she faces off against Ghost in a practice spar, the two subsequently discussing their shared belief that Neo can defeat the Machines. Over the course of the game, it is heavily implied, although never expressly stated, that Ghost is in love with Trinity, but that she regards him as a brother for their having been freed from the Matrix at or near the same time.
Her role in The Matrix: Path of Neo is relatively similar to her appearances with Neo in the films; she has a spar with him during his sword-fighting training, accompanies him during the raid on the military building to rescue Morpheus (subsequently helping him to defeat an Agent on the rooftop), and is later rescued by him from some attacking Agents after the last meeting with the assorted ship captains.
Trinity also appears in The Animatrix and The Matrix Comics.
The Matrix Online
Despite having "died" during the course of the third film, Trinity made a return to the series in the official continuation, The Matrix Online. Taking on a major role in the game's final chapters it is revealed both she and Neo are actually the culmination of decades of Machine research into translating human DNA perfectly into Machine code, allowing them to interface directly with technology without the need for simulated interfaces. Originally developed by The Oracle, this program is called The Biological Interface Program and is strongly sought after by the Oligarchy as a means to transfer their digital minds to physical bodies instead of the mechanical androids they had developed.
Without a physical form, Trinity takes the appearance of a floating figure made of golden code when within The Matrix. Initially distraught with her condition, she eventually finds solace in the fact her existence is the key to finally rebooting the Matrix and erasing Oligarch override control once and for all.
She ultimately meets her end in the Source of The Matrix, merging with a human inside the core of the Machine code base itself, combining the three core groups; Man, Machine and Program. This initializes the final reboot sequence, removing the Oligarch control and allowing the Machines to finally exist without fear of cruel masters.
Casting choices
A number of actresses were considered for the role of Trinity, including Jennifer Connelly, Jada Pinkett-Smith, Janet Jackson, Marisa Tomei, Ashley Judd, Salma Hayek, Angelina Jolie, Michelle Yeoh, Jennifer Lopez, Catherine Zeta-Jones, Drew Barrymore, Kate Hudson, Uma Thurman, Jennifer Beals, Mariska Hargitay, Lucy Liu, Courteney Cox, Angie Harmon, Ming-Na Wen, Elizabeth Hurley, Sandra Bullock, Gillian Anderson, Heather Graham and Winona Ryder, as well as Madonna, who has revealed that she turned the role down and has admitted regretting that decision.
Name
The name "Trinity" is heavily associated with Christian theology, which involves the Trinity: the Father, Son, and Holy Spirit. When she cracks the IRS database before her release from the Matrix, she chooses the hacker handle "Trinity" to imply that she is as enigmatic as the concept of a "Three-In-One Being". Trinity is the force who guides Neo to his "salvation," as well as commanding Neo to rise up from his apparent death in the first film, implying a further parallel between her character and God.
The name Trinity increased in popularity as a given name for female babies born after the release of The Matrix in 1999. In the United States, the name had been increasing in popularity throughout the 1990s, and was the 523rd most popular by 1998. In 1999 it was 209th, and in 2000 it was 74th. It peaked as the 48th most popular in 2004 and 2005, and remained in the top 100 female baby names until 2012.
Skills and abilities
Throughout the Matrix franchise, Trinity is shown to have many skills both inside and outside the Matrix, including martial arts, computer hacking, the use of firearms and other weapons, and operating a range of vehicles. Some of these skills can be downloaded from outside the Matrix as needed, such as when Trinity flies a helicopter during the first film. Other skills are trained or inherent.
Trinity is seen to be especially skilled at the use of cars, motorcycles, and other vehicles, even in comparison to other hackers. In the first film, she pilots a Bell 212 helicopter to help rescue Morpheus; even after its hydraulic system is damaged, she maintains control long enough to get Neo and Morpheus to safety, then jumps out before it crashes. In The Matrix Reloaded, she drives a Cadillac sedan with ease while being chased by the Merovingian's twins, agents of the Matrix, and the police; even able to drive while helping Morpheus protect the Keymaker from one of the twins. She also carries the Keymaker to safety on a Ducati 996 motorcycle in a harrowing chase through oncoming traffic.
Trinity also excels in combat, both armed and unarmed. At several times during the three films, she is able to defeat large numbers of well-armed opponents, either by herself or with help from other characters.
In The Matrix Resurrections, Trinity develops the same reality warping powers as Neo after being reawakened to her true identity. By touching hands, Neo and Trinity can let out a powerful telekinetic shockwave and she is later able to manipulate the appearance of the Analyst with just snaps of her fingers. Trinity also develops Neo's ability to fly after they jump off of a roof and he can't fly himself. At the end of the movie, Trinity and Neo fly off together, playfully circling around each other.
See also
List of female action heroes
Simulated reality
Notelist
References
Further reading
Faller, Stephen (2004). Beyond the Matrix: Revolutions and Revelations. Chalice Press, New title edition.
The Matrix (franchise) characters
Fictional hackers
Fictional characters with superhuman strength
Fictional characters who can move at superhuman speeds
Fictional cyborgs
Fictional aikidoka
Fictional jujutsuka
Fictional karateka
Fictional Piguaquan practitioners
Fictional taekwondo practitioners
Fictional kenpō practitioners
Fictional Shaolin kung fu practitioners
Fictional Wing Chun practitioners
Fictional Zui Quan practitioners
Fictional Jeet Kune Do practitioners
Fictional Krav Maga practitioners
Fictional women soldiers and warriors
Film characters introduced in 1999
Fictional revolutionaries
Female characters in film
|
3908050
|
https://en.wikipedia.org/wiki/Cheekah%20Bow%20Bow%20%28That%20Computer%20Song%29
|
Cheekah Bow Bow (That Computer Song)
|
"Cheekah Bow Bow (That Computer Song)" is a song by Dutch Eurodance group the Vengaboys. It was released as their eighth United Kingdom single, and their ninth overall. The song charted at number 19 in the United Kingdom (their first single not to achieve a Top 10 placing there). The song was a moderate hit elsewhere in Europe peaking in the Top 40 of several countries.
The single was officially credited to the Vengaboys ft. Cheekah, referring to the animated computer in the music video, which performs the lyrics. The lyrics are all related to computer terminology but feature some tongue-in-cheek sexual innuendo, e.g. "The way you used your joystick / Has really made my mouse click". In the second part of the song, the lyrics portray sexually transmitted diseases, once again using computer terminology, e.g. "The way you used your joystick / Has really made me feel sick" and "The doctor checked my hard drive / A virus in my archive / My disc was not protected / and now I am infected".
Track listing
"Cheekah Bow Bow (That Computer Song) (Hit Radio Mix)"
"Cheekah Bow Bow (That Computer Song) (Xxl)"
"Cheekah Bow Bow (That Computer Song) (Trans Remix Vocal)"
"Cheekah Bow Bow (That Computer Song) (Dillon & Dickins Remix Vocal)"
"Cheekah Bow Bow (That Computer Song) (Pulsedriver Remix Vocal)"
"Cheekah Bow Bow (That Computer Song) (Dillion & Dickins Remix Instrumental)"
"Cheekah Bow Bow (That Computer Song) (Hit Radio Mix Clean Version)"
"Cheekah Bow Bow (That Computer Song) (Video)"
Charts
References
2000 singles
Vengaboys songs
2000 songs
EMI Records singles
Songs written by Wessel van Diepen
Songs written by Dennis van den Driesschen
|
15476
|
https://en.wikipedia.org/wiki/Internet%20protocol%20suite
|
Internet protocol suite
|
The Internet protocol suite, commonly known as TCP/IP, is the set of communications protocols used in the Internet and similar computer networks. The current foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP).
During its development, versions of it were known as the Department of Defense (DoD) model because the development of the networking method was funded by the United States Department of Defense through DARPA. Its implementation is a protocol stack.
The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications.
The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
History
Early research
The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, who helped develop the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. They drew on the experience from the ARPANET research community and the International Networking Working Group, which Cerf chaired.
By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974.
Initially, the Transmission Control Program managed both datagram transmissions and routing, but as experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development, and the research group of Robert Metcalfe at Xerox PARC. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 3 of TCP, written in 1978, the Transmission Control Program was split into two distinct protocols, the Internet Protocol as a connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested.
DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol, the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4), the protocol that is still in use on the Internet alongside its successor, Internet Protocol version 6 (IPv6).
Early implementation
In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983. Before the January 1, 1983 "Flag Day", the Internet used NCP instead of TCP as the transport layer protocol.
A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.
Adoption
In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR and Peter Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting.
IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, this despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.
Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote ntcp, a multi-connection TCP that ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP).
The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. Microsoft released a native TCP/IP stack in Windows 95. This event helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.
Formal specification and standards
The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF).
The characteristic architecture of the Internet Protocol Suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specification of the suite is RFC 1122, which broadly outlines four abstraction layers. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet Protocol Suite predates the OSI model, a more comprehensive reference framework for general networking systems.
Key architectural principles
The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.
The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features."
Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level.
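As a rough illustration of this layering, the following Python sketch wraps an application payload in simplified, invented headers one layer at a time; the header layouts are made up for clarity and do not follow the real TCP, IP, or Ethernet formats.

# Illustrative only: toy headers, not real TCP/IP packet formats.

def transport_encapsulate(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # A real TCP or UDP header also carries lengths, checksums, flags, etc.
    return src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big") + payload

def internet_encapsulate(segment: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    # A real IPv4 header also carries version, TTL, protocol number, etc.
    return src_ip + dst_ip + segment

def link_encapsulate(datagram: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    # A real Ethernet frame also carries an EtherType and a frame check sequence.
    return dst_mac + src_mac + datagram

app_data = b"GET / HTTP/1.1\r\n\r\n"                     # application layer
segment = transport_encapsulate(app_data, 49152, 80)     # transport layer
datagram = internet_encapsulate(segment, b"\xc0\x00\x02\x01", b"\xc0\x00\x02\x02")  # internet layer
frame = link_encapsulate(datagram, b"\x02" * 6, b"\x06" * 6)                         # link layer
print(len(app_data), len(segment), len(datagram), len(frame))  # each layer adds its own header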
An early architectural document, RFC 1122, emphasizes architectural principles over layering. Titled Host Requirements, it is structured in paragraphs referring to layers, but it refers to many other architectural principles and does not emphasize layering. It loosely defines a four-layer model, with the layers having names, not numbers, as follows:
The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client–server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services.
The transport layer performs host-to-host communications on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data.
The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination.
The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect the transmission of Internet layer datagrams to next-neighbor hosts.
Link layer
The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations, but also virtual link layers such as virtual private networks and networking tunnels.
The link layer is used to move packets between the Internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the Internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist, and are not explicitly defined in the TCP/IP model.
The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model.
Internet layer
Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet.
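A toy sketch of next-hop selection by longest-prefix match, written with Python's standard ipaddress module, is shown below; the prefixes and next-hop addresses are illustrative documentation-range values, and real routers use far more elaborate forwarding structures.

import ipaddress

# Toy forwarding table: destination prefix -> next-hop router (illustrative addresses).
routes = {
    ipaddress.ip_network("192.0.2.0/24"): "198.51.100.1",
    ipaddress.ip_network("192.0.2.128/25"): "198.51.100.2",
    ipaddress.ip_network("0.0.0.0/0"): "203.0.113.1",   # default route
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # Longest-prefix match: the most specific matching prefix wins.
    matching = [net for net in routes if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("192.0.2.200"))  # 198.51.100.2 (matched by the more specific /25)
print(next_hop("8.8.8.8"))      # 203.0.113.1 (falls through to the default route)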
The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.
The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.
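The difference in address size can be seen with Python's standard ipaddress module, as in the small sketch below, which uses documentation-range example addresses.

import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")     # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")   # documentation-range IPv6 address

print(len(v4.packed) * 8)   # 32  -> roughly four billion possible addresses
print(len(v6.packed) * 8)   # 128 -> a vastly larger address space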
Transport layer
The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers).
For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services.
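For example, a web client can reach a server simply by connecting to the well-known HTTP port, 80. The following Python sketch is illustrative and assumes a reachable host named example.com; the local port it reports is an ephemeral port chosen by the operating system rather than by the application.

import socket

# Connect to the well-known HTTP port (80); "example.com" is an assumed reachable host.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    print("local ephemeral port:", sock.getsockname()[1])   # assigned by the OS
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(200))                                    # first bytes of the reply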
Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability.
TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream:
data arrives in-order
data has minimal error (i.e., correctness)
duplicate data is discarded
lost or discarded packets are resent
includes traffic congestion control
The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP).
Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC).
The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. Data integrity is addressed through error detection using a checksum algorithm, but delivery is not guaranteed. UDP is typically used for applications such as streaming media (audio, video, Voice over IP etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media.
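A minimal sketch of connectionless UDP use in Python is shown below; the loopback address and port 9999 are assumed to belong to a local test echo service and are not a standardized endpoint.

import socket

# No connection setup: each sendto() is an independent, best-effort datagram.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(b"ping", ("127.0.0.1", 9999))   # assumed local echo service
try:
    data, addr = sock.recvfrom(1024)
    print("reply from", addr, data)
except socket.timeout:
    print("no reply; UDP gives no delivery guarantee")
finally:
    sock.close()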
The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well known ports are associated with specific applications.
The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer.
QUIC is rapidly emerging as an alternative transport protocol. Whilst it is technically carried via UDP packets, it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC.
Application layer
The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower level protocols. This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer.
The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model.
Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.
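Most socket APIs expose the registered mapping from service names to well-known ports; for instance, Python's standard library can look up the ports mentioned above, as in the small sketch below (the results come from the local system's services database).

import socket

# Well-known port numbers registered with IANA, looked up by service name.
for service in ("http", "telnet", "smtp", "domain"):
    print(service, socket.getservbyname(service, "tcp"))
# Typically prints: http 80, telnet 23, smtp 25, domain 53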
At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol.
Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic; rather, they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload.
Layer names and number of layers in the literature
The following table shows various networking models. The number of layers varies between three and seven.
Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.
Comparison of TCP/IP and OSI layering
The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.
Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer.
Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.
The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled: "Layering Considered Harmful".
For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange.
Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, some classifications include routing protocols in the application layer. Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers.
IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer.
Implementations
The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD) and is often accompanied by an integrated IPSec security layer.
See also
BBN Report 1822, an early layered network model
FLIP (protocol) (fast local Internet protocol stack)
List of automation protocols
List of information technology acronyms
List of IP protocol numbers
List of network protocols
List of TCP and UDP port numbers
References
Bibliography
A Protocol for Packet Network Intercommunication, Cerf & Kahn, IEEE Trans on Comms, Vol Com-22, No 5 May 1974
External links
Internet History – Pages on Robert Kahn, Vinton Cerf, and TCP/IP (reviewed by Cerf and Kahn).
A TCP/IP Tutorial – from the Internet Engineering Task Force (January 1991)
The Ultimate Guide to TCP/IP
The TCP/IP Guide – A comprehensive look at the protocols and the procedure and processes involved
TCP/IP Sequence Diagrams
Daryl's TCP/IP Primer – Intro to TCP/IP LAN administration, conversational style
History of the Internet
Network architecture
Reference models
|
11680860
|
https://en.wikipedia.org/wiki/Asus%20Eee%20PC
|
Asus Eee PC
|
The ASUS Eee PC is a netbook computer line from Asus, and a part of the ASUS Eee product family. At the time of its introduction in late 2007, it was noted for its combination of a lightweight, Linux-based operating system, solid-state drive (SSD), and relatively low cost. Newer models added the options of Microsoft Windows operating system and rotating media hard disk drives (HDD), and initially retailed for up to 500 euros.
The first Eee PC was a milestone in the personal computer business, launching the netbook category of small, low-cost laptops in the West (in Japan, subnotebooks had long been a staple in computing). According to Asus, the name Eee derives from "the three Es", an abbreviation of its advertising slogan for the device: "Easy to learn, Easy to work, Easy to play".
In January 2013, ASUS officially ended production of their Eee PC series, citing declining sales due to consumers favoring tablets and Ultrabooks over netbooks. However, they subsequently restarted the line with the release of the EeeBook series in 2015.
History
Eee 700 series
ASUS announced two Eee PC models at Computex Taipei 2007: the 701 and the 1001. The 701 base model Eee PC 4G was released on 16 October 2007 in Taiwan. Three additional models followed.
Both the price and the size of the device are small in comparison with similar ultra-mobile PCs. The Eee series is a response to the XO-1 notebook from the One Laptop per Child initiative. At the Intel Developer Forum 2007, Asus demonstrated the Classmate PC and the Eee PC, and listed specifications for four models of the Eee PC.
In some countries, the products have the marketing names EeePC 8G, 4G, 4G Surf, and 2G Surf, though in other countries the machines are still designated by the model numbers 700 and 701. The 4G Surf uses socketed RAM but some revisions do not have a door to access the slot.
ASUS released a version of the Eee PC with Microsoft Windows XP pre-installed in January 2008. In Japan, the version is known as the 4G-X.
Some early 700-series models drained the battery approximately 10% per day when the unit was completely powered off and not plugged in, thus emptying the battery even when not in use.
User modifications
Some users of the 701 physically modified the machine to replace the 4 GB solid state drive.
The 8 GB versions of the 700 series leave the SSD area on the motherboard empty and connect their SSD as an internal PCI Express Mini Card. Replacing the SSD requires only an SSD compatible with the connector. The SSD area on the motherboard may also be used to install other devices, accommodate physically larger SSDs, or even hard-solder an SSD salvaged from a 2 GB or 4 GB 700 model. As this requires only soldering on a new device without removing an old one, the risk of doing so may be acceptable to some users.
Eee 900 series
The Eee 900 series was launched in Hong Kong on 16 April 2008, and in the UK on 1 May 2008 for £329 (approximately 410 € or 650 US$ including VAT). It was launched in the US on 12 May 2008. The Eee 900 series is slightly larger than the 70x models, measuring 225 × 165 × 35 mm (WxDxH) (8.8" × 6.5" × 1.4") and weighing around 1 kg (2.2 lb). The machine has a multi-touch touchpad allowing two-finger scroll and zoom via a "pinch" gesture, and is available with Linux and/or MS Windows XP configurations, depending on the market.
The Intel Atom version is named the EeePC 900a and comes with an 8GB or 16 GB SSD. Some of these Eee PCs also have a 4 GB SSD installed similarly to that in the 701 for a total storage space of 20GB. Those that do not are named the Asus EEE 900 16G. The MS Windows XP version is named the EeePC 900 Win and also comes in two versions: one with a total storage of 12 GB (one 4 GB SSD and one 8 GB SSD) and one with 16 GB (on a single SSD). The Linux 20G version is sold for the same price as the MS Windows 12G version. In the case of the 16G EEEs, the MS Windows version costs more than the Linux version.
The Windows version comes with Microsoft Works and Windows Live Suite preinstalled. It also includes StarSuite 8. The machines are otherwise identical to each other with 1 GB of RAM, an 8.9-inch (226 mm) 1024×600 LCD and a 1.3-megapixel webcam. This model has the same Intel Celeron CPU as the Eee PC 700, running at its full 900 MHz clock speed (rather than the 630 MHz speed seen in the Eee PC 700).
Other Eee 90x models
On 3 June 2008, Asus unveiled the Eee 901 at Computex Taipei. It was a revision of the 900 series with a different chassis. The 901 features an Intel Atom Diamondville CPU clocked at 1.6 GHz, an "expanded" battery (listed as 6-cell), and "Super Hybrid Engine" software for power management, which provides a battery life of 4.2–7.8 hours. Bluetooth and 802.11n Wi-Fi are also included. The 901 uses the Intel 945GME chipset, meeting the requirements for MS Windows Vista or 7 Aero. The 901 is otherwise similar to the 900, shipping in Linux or MS Windows XP configurations with flash memory storage of different sizes. It was discovered that the Eee 901 has capacity for a "3GCard" upgrade.
The Eee PC 900D has 8GB flash memory and Windows XP preinstalled.
The Eee PC 904HD was one of the first Eee PC models which features an 80 GB HDD instead of an SSD. It features an Intel Celeron M running at 900 MHz and gets power from a 6-cell battery. Like other Eee PC 90x models, it features 802.11 b/g WLAN and a 1.3M pixel webcam. MS Windows XP comes pre-installed.
The Eee PC 904HA's dimensions are 266 mm(W) × 191.2 mm(D) × 28.5 mm~ 38 mm(H). The 8.9-inch screen has a native resolution size of 1024×600 pixels (WSVGA). The CPU is an Intel Atom N270 @ 1.6 GHz, and the standard model came with 1 GB DDR2 RAM occupying the single memory slot. The 160 GB Hard Disk Drive had Microsoft Windows XP Home pre-installed. Also standard are the 6-cell battery, the 1.3M pixel webcam and integrated microphone, and both ethernet and Wi-Fi 802.11 b/g network connections.
The Eee PC 900A features almost the same specs as the Eee PC 901 (except the primary SSD, Bluetooth, 1.3M pixel webcam and the 6-cell battery, that has been replaced by a 4-cell battery), but in a case nearly the same as used in the Eee PC 900 model.
On 17 June 2009, Asus released the Disney Netpal (Eee PC MK90), which is similar to the Eee 90x models.
Battery controversy
There was some controversy regarding the battery supplied with the EeePC 900. Versions pre-released to many non-UK journalists and reviewers were equipped with a 5800 mAh battery, but the first retail versions in Hong Kong, the United Kingdom and Singapore were shipped with a smaller, 4400 mAh (76% of that capacity) battery, which commentators note has led to a great variation in the machine's battery life in reviews, in some cases as much as 90 minutes. As a result of the objections to this, Asus provided a free battery replacement program in Hong Kong and Singapore, and ran a paid-for battery exchange program in the UK.
Asus has stated that the smaller battery is "presently the standard battery supplied in the UK" and "the default standard battery pack for Asus Eee PC 900 worldwide". Asus provided a battery exchange to all UK Eee PC 900 customers for £10, and released a firmware update which claimed to extend battery life by 30 minutes ("BIOS 0601: Updated all battery discharge tables to extend battery life").
In Australia and Italy, the situation was reversed: Reviewers received EeePC 900 systems fitted with the 4400 mAh battery but the retail models were equipped with the 5800 mAh battery. Customers of Media Markt in Italy received the EeePC 900 at the beginning of sales (May/June) with a 5800 mAh battery and later (June/July) with a 4400 mAh battery.
Best Buy's custom variants of the 1000HD and 900A also both include a 4400 mAh battery.
Part of the above problem extends from the fact that the entire range was substantially more successful than Asus had originally anticipated. Currently, Asus has several large complexes scattered throughout Taiwan and China, with the largest in the city of Suzhou (China), being the size of eight football fields. Upon the unexpected success of the range, Asus factories worked around the clock to keep up supply and further development. Consequently, even within Asus testing labs in Taipei, many variations were found within test models. Generally, however, Asus does inform reviewers that the final retail model may contain different features from those offered in the review model.
Eee PC 1000 series
The 1000 series launched at Computex Taipei on 3 June 2008. It featured a new 10-inch (254 mm) screen and a 1.6 GHz Intel Atom CPU, although built-in power management software can increase the speed to 1.7 GHz. The 1000 model ships with Linux, an 8 GB SSD and a 32 GB SSD (totalling 40 GB); the 1000H model ships with Windows XP Home or Linux and an 80 or 160 GB SATA HDD. Both the 1000 and the 1000H support up to 2 GB of DDR2 RAM of 667 MHz clock speed. The 1000 has a rated battery life of 4.2–7.5 hours, while the 1000H is rated for 3.2–7 hours. It also offers a keyboard that is 92% the size of generic notebooks, aiming to make it more comfortable to type. Like the Eee PC 901, the new machines feature 802.11n Wi-Fi and Bluetooth. WiMAX is not currently supported.
The 1000HD (released in September 2008) is a slightly cheaper version of the 1000 series. It features the same specifications as the 1000H, except it uses a 900 MHz Celeron CPU chip.
The 1000HA (released in October 2008) also costs less than the 1000H, but has the same Intel Atom 1.6 GHz CPU, a 160 GB HDD, and 1 GB of RAM. It also has wireless and on some models, Bluetooth.
The 1000XPH has the same Intel Atom 1.6 GHz CPU, an 80 GB HDD, and 1 GB of RAM. Other amenities include 10/100 LAN and 802.11 b/g Wireless LAN adapters, an integrated webcam, but no Bluetooth. The 1000HG features a Huawei 3G-Modem.
In February 2009, Asus unveiled the 1000HE, using the new Intel Atom 280 processor, with a 10-inch LED-lit display at 1024x600 physical but 1024x768 virtual, 6-cell battery with an advertised 9.5 hours of battery life, 160 GB HDD running at 5400RPM, Bluetooth, 802.11n wireless networking, 1.3-megapixel camera, and revised keyboard similar to Apple's keyboards.
Although the screen resolution on the 1000 series is 1024x600, it has pixel mapping (memory addressing) which covers a virtual 1024x768 desktop. One could choose, with a simple Fn key combination, which graphics mode to operate in: 800x600, 1024x600 (native resolution), virtual 1024x768 compressed (vertically compressed into 600 pixels), or 1024x768 with panning. The latter mode would display only 600 vertical pixels at a time, but as the pointer approached the top or bottom of the screen the display content would shift the "hidden" pixels into view to better display certain websites. It also freed more screen real estate for other tasks, such as web browsing or office applications, by allowing the user to move some things, like the top empty grey window frame area (otherwise wasted), off-screen. A similar panning effect can be achieved on other Linux systems using xrandr.
At CeBIT 2009, Asus unveiled the 10-inch EEE 1008HA, introducing the new design concept "Seashell".
The 1005HA comes in three models. From least to most expensive, they are the 1005HA-B, the 1005HA-V and the 1005HA-P. The 1005HA-B has a removable 3-cell battery with a rated 4-hour life per charge, a 1.3-megapixel camera, and uses the N270 processor. At the higher end, the 1005HA-P has a removable 6-cell, 5600 mAh, 63 W/h battery with rated 10.5 hour battery life, a 1.3-megapixel camera and uses the N280 processor. There is also a 1005HA-H model, sold in Poland, equipped with a 6-cell battery, an N270 processor and a 0.3-megapixel camera.
Asus officially announced the first Eee with Nvidia Ion graphics, the 1201N, on 19 November 2009; it was later replaced by the 1201PN and 1201NL, and then the 1215N, with a more powerful Atom D525 dual-core processor and Ion 2 graphics.
The 1215 series then saw the release of the 1215B, which came with an E-450/E-350/C30/C50/C60 processor, a "Zacate" APU. The 1215B has USB 3.0 ports, as well as a CPU and BIOS that support full hardware virtualization in both Linux (via KVM, Xen, VirtualBox, VMware) and Windows (via XP mode, VirtualBox, VMware). The 1215B is the first of the Eee PC line of computers that supports virtualization. It was subsequently superseded by the upgraded 1225B, which swapped the previous model's E-350 APU for the E-450 APU, providing a minor speed bump to the CPU and TurboCore for the GPU.
Eee 1025c and 1025ce
These were released in 2012 and described as the last in the line of the Asus Eee PC series. With only 1 GB memory, standard USB2 ports and sluggish performance, these were not especially notable releases other than for their exceptional battery life. Other reported problems are the lack of a hatch to access the memory, so RAM cannot be upgraded without breaking open the case; also, there is a single mono speaker rather than dual stereo speakers.
Eee 1015 series
In 2013 Asus restarted the Eee PC series with the 1015E models, some of which are on Windows 8 and some on Ubuntu Linux. These come with 2 GB memory and USB3 ports.
The 1015E fixes some of the problems with the 1025C by using a faster processor, 2GB memory and stereo speakers. The RAM is soldered in place and cannot be upgraded. Due to improved performance, the battery life is shorter than that of the 1025 series. It is possible to reduce the processor clock speed to increase battery life.
EeeBook
Further Information: Asus EeeBook
In 2014 Asus relaunched the Eee PC with the EeeBook lineup of computers, starting with the X205TA model. By 2017 the EeeBook lineup was succeeded by the Asus VivoBook E Series. Some EeeBook laptops were rebranded to VivoBook E Series laptops; the EeeBook E202 was rebranded to the VivoBook E202, ending the EeeBook lineup again. The EeeBook lineup consists of the E202 (E202SA), E502 (E502SA and E502MA) and X205 (X205TA).
Hardware
Rechargeable CMOS battery
Asus Eee PC series models 1005ha, 1005hab, 1008ha, and others use Varta ML1220 or equivalent Maxell, Sanyo and Panasonic ML1220 lithium ion coin cell rechargeable batteries, terminated with a two-pin Molex connector plug.
Processor
Eee PC models have typically used netbook specific processors or ultra-low voltage versions of mainstream processors. The earliest Eee PC models used a 900 MHz Intel Celeron M processor underclocked to 630 MHz. Later models shipped with Intel Atom and AMD Fusion processors.
Display
The Eee PC 700 has an 800×480 pixel, 7 inch (178 mm) display, measured diagonally. The screen does not cover the entire space within the lid; instead it is flanked on the sides by stereo speakers, and above by the (optional) camera in the trim at the top. The Eee PC 900 and 901 come with a 1024×600 pixel 8.9-inch (226 mm) display, almost filling the lid.
Later models came with 10 inch to 12.1 inch displays and up to 1366×768 resolution.
With all models, an external display can be supported through a standard VGA connector. On some early models this connector lacks the screws to secure it to the Eee PC, which some consider a safety precaution. The manufacturer does not give any specifications on maximum resolution and display configuration (mirroring, extended desktop), but most models can handle an external display at native resolution of 1440×1050, and even 1600×900, although performance starts to slow down. Models that ship with Xandros do not have access to the full capacity of the external VGA output by default, allowing only 'mirroring'. Users must reconfigure their xorg.conf file, or install a more recent OS to allow the higher resolution output.
The EEE PC900 has a tendency for the display to fail with black blobs due to air leakage. This is repairable, but depending on the exact replacement unit it sometimes requires moving the eight-pin EEPROM from the old display to the new one and linking a single track to regain picture and brightness control after the new display is fitted.
Keyboard
On a normal, full size computer keyboard, the 10 keys Q–P measure 190 mm (7.48 in). The 700 and 900 series are equipped with similar keyboards, 82% of the size of a generic one, meaning that the Q–P keys measure 155 mm (6.10 in). The 1000 series, as it fits in a more spacious case, has 92% of a full size keyboard, where the Q–P keys measure 175 mm (6.89 in).
Some Eee PC lines such as the 1000HE and 1215s use the island-style keyboard, similar to keyboards used in Apple computers and Sony's VAIO series, where the keys are reminiscent of Scrabble tiles, being spaced apart and raised from the surface below.
Storage
The early model Eee PCs use a solid-state drive for storage (instead of a hard drive), which consumes less power when in use, allows the device to boot faster, generates no noise, and is less susceptible to mechanical shock damage than hard drives. A downside of SSD storage (flash memory) is that an individual sector can be written only about 200,000 times. This problem can be partially mitigated by intelligent wear leveling, resulting in a MTBF similar to conventional platter-based hard drives.
The SSDs used in early Eee PCs also had extremely poor random write performance; the S101 does not have this problem.
In the 2 GB and 4 GB models of the 700 series of the Eee PC, the SSD is permanently soldered to the board. In the 8 GB model, the SSD is a card connected via the internal PCI Express Mini Card connector, leaving the original SSD area on the motherboard empty.
The Eee PC 900 comes with a removable PCI Express Mini SSD module, with or without four additional 1 GB memory chips soldered on the main board. Different models come with different-sized SSDs. One Linux version has 4 GB, a MS Windows XP version has 8 GB, and all remaining ones, MS Windows XP or Linux, have 16 GB.
The Eee PC 1000 contains a fast 8 GB internal SSD and a slower 32 GB internal flash drive.
Some models, such as the 1000H and 904HD, do not have a SSD, and instead have a SATA internal hard drive of either 80 or 160 GB, which can be upgraded by the user.
All Eee PC models also include a memory card reader, supporting SD, SDHC and MMC cards for additional storage, while the Eee PC S101 also has support for Memorystick and MS-PRO.
Eee PC 1004DN is the first model with a Super-Multi optical disc drive (ODD) that reads and writes data to DVD or compact disc.
Memory
Most early Eee PCs use 533/667 MHz DDR2 SDRAM via a standard SO-DIMM module, which can be swapped out. The 700 and 701SDX have RAM soldered to the motherboard. Other models (like the white 4GS-W010) lacked memory access panels and required disassembly to upgrade memory.
Later models, such as the black model EEEPC 4G SURF (4GS-PK008), and newer white models (4GS-W010), have a removable panel on the underside that allows the user to change the RAM without fully disassembling the system.
Asus reverted to soldering RAM directly onto the mainboard in their later releases of the Eee PC range. The Asus technical data for the 1025c and 1025ce models is seen as erroneous by certain online retailers offering RAM upgrades.
Cooling
In an EE380 talk, an Asus engineer mentioned that the Eee PC uses the keyboard shielding as a heat sink to absorb the heat generated by the processor. Three chips need heatsinking, and this is achieved by heat-conductive adhesive pads which sit between the chip heatsink flats and the keyboard shield and connect them thermally. It is important to ensure that the heatsink pads are replaced correctly after maintenance such as cleaning or replacing the fan. The Eee PC has a fan and vents to cool off the system.
Operating systems (software user environment)
Most Eee PC models were shipped with either Windows XP or a Linux distribution called Xandros. Later models (e.g. 1015E) ship with Windows 7 Starter or Linux Ubuntu installed.
Users have tried to install various other operating systems on Eee PCs. The following are known to work on most models:
Linux, especially Lubuntu, Debian, Salix, SliTaz, Peppermint (versions before 6), Bodhi 4.x, and other distributions still available in 32-bit builds that employ a desktop environment with a small memory footprint
Chrome OS and Android x86
Mac OS X: v10.4, v10.5 and v10.6
Microsoft Windows XP
EasyPeasy Linux (custom for the eeePC, now discontinued but still available for download)
Windows Vista, 7, 8, 8.1 and 10
Some of the above operating systems, while once available, ran sluggishly at best and are no longer up to date. Some have since been discontinued or now offer only 64-bit versions, which are not compatible with the Eee PC series.
Specifications
In the UK, the Eee is also promoted as the RM Asus Minibook, which is targeted at students; however, the unit itself is no different.
Late releases of the 701 4G (non-Surf) have either Windows XP pre-installed, without Microsoft Works, Windows Live Suite or the installation disc, or Xandros OS pre-installed.
Configurations
Naming of the 700-series models appears to relate to the size of the installed SSD, the presence of a camera, and battery size. The Eee PC Surf models include the 4400 mAh battery pack and no webcam, while the non-Surf models have the 5200 mAh battery pack and a webcam installed. The model numbers (700, 701) may still be the same as has been seen on pre-production samples. Asus may offer upgrades for the SSD storage via the empty Mini PCIe slot, which has been shown to be labeled FLASH_CON in take-apart photos of the 4G. When a Mini PCIe card is inserted into the spare empty slot, the internal SSD is disabled, making the device unable to boot from the original SSD. There are also signal lines for a USB port on the Mini PCIe pins, which have been used to connect various USB devices internally. Some 701 models with serial numbers starting at 7B do not have a second mini PCIe slot soldered onto the motherboard, though the circuit traces and solder pads remain.
In the 70x series, the pre-installed Xandros operating system has a Linux kernel with a kernel option set limiting the detected RAM size to a maximum of 1 GB, even if a larger RAM module is installed. The actual capacity is shown in full in the BIOS setup and under other OSes. However, it is possible to recompile the kernel with support for more RAM.
The 900 and later laptops had the kernel pre-configured to support up to 4 GB of memory address space.
Fanbase and continued use
The ASUS Eee PC series of netbooks still attracts a small crowd of people who need an affordable, lightweight and tiny netbook for traveling. Due to their underpowered processors and lack of modern compatibility, however, they see little use today, having been replaced by Chromebooks and other inexpensive alternatives.
See also
Asus EeeBox PC
Asus Eee Top
CMOS battery
Comparison of netbooks
Comparison of netbook-oriented Linux distributions
Internet appliance
Rechargeable battery
References
Eee PC
Subnotebooks
Products introduced in 2007
|
9082185
|
https://en.wikipedia.org/wiki/Citect
|
Citect
|
Citect is now a group of industrial software products sold by Aveva, but started as a software development company specialising in the Automation and Control industry. The main software products developed by Citect included CitectSCADA, CitectSCADA Reports, and Ampla.
History
Citect began as a subsidiary of Alfa Laval in 1973. The company was then known as Control Instrumentation. It was later renamed Ci Technologies, and then Citect, to take advantage of the well-known name of its flagship software product, CitectSCADA.
Whilst Citect was considered to be a software development company, it also had a large Professional Services division, which was a key contributor to the success of the business.
In 2006, Citect Pty Ltd was acquired by the Schneider Electric group.
At the end of 2008, Citect ceased trading as an independent company and all of its remaining operations were absorbed into Schneider Electric.
Products
Ampla
Ampla is manufacturing execution system (MES) software.
Cicode
Cicode is a programming language used by Citect SCADA software. The structure and syntax of Cicode is very similar to that of the Pascal programming language, the main difference being that it does not include pointers and associated concepts. Citect provides a rich programming API that includes sophisticated programming constructs such as concurrent tasks and semaphores.
A Cicode sample is shown below. The function is used to log information to a file.
// Writes a timestamped trace message to a shared debug log file.
// hTraceOn, sMask, hDebugSem and hDebugFile are assumed to be module or global
// variables defined elsewhere in the project.
FUNCTION I0_Trace(STRING sPrompt)
   INT hDev;
   INT hTime;
   STRING sText;

   IF hTraceOn THEN
      // Only log messages whose prefix matches the configured mask.
      IF (StrLeft(sPrompt, StrLength(sMask)) = sMask) THEN
         TraceMsg(sPrompt);
         hTime = TimeCurrent();
         sText = TimeToStr(hTime, 2)+" "+TimeToStr(hTime, 1)+" "+sPrompt;
         // Serialise access to the log file with a semaphore before writing.
         SemWait(hDebugSem, 10);
         FileWriteLn(hDebugFile, sText);
         SemSignal(hDebugSem);
      END
   END
END
CitectSCADA
CitectSCADA is an HMI/SCADA software package made by Citect. It supports an extremely wide range of Schneider Electric and third-party PLCs (using either the vendor's OPC driver or its own native drivers) and includes a large collection of industrial equipment symbols for drawing application scenes. Applications are built with a design-time HMI/GUI construction tool (the Citect Graphics Builder), and run-time application logic is expressed in the Cicode programming language.
Citect for DOS
Martin Roberts wrote Citect for DOS, released in 1987, as a response to the limited range of PC-based operator interface software available at the time. Citect for DOS consisted of a configuration database (in dBase format), a bitmap (256 colour raw format) and an animation file. The user would draw a representation of a facility using the readily available Dr Halo graphical package and placing "Animation Points" in the desired location. "Tags" were assigned in the configuration databases, equating to addresses within the programmable electronic devices Citect was communicating with. By referencing these tags at animation points using other configuration databases, the user could show the state of equipment such as running, stopped or faulted in real-time.
Citect for DOS could communicate with various programmable electronic devices via the various serial links offered by the device; some through direct PC serial port connections, others through 3rd party PC based cards designed to communicate with the target programmable electronic device. Software drivers were written for many protocols; its ability to communicate with a variety of devices - and to have new drivers written when required - became a primary selling point for Citect.
The runtime software ran on a DSI card, a 32-bit co-processor inserted into an available ISA slot in the PC. This was necessary because the 286 and 386 PCs available at the time lacked sufficient processing power.
Citect for Windows
Version 1
During the early 1990s PC computational power had caught up and Microsoft Windows based software was becoming popular, so Citect for Windows was developed and released in 1992. It no longer needed the DSI card to run on a PC. The configuration methodology remained similar to Citect for DOS but became more intuitive under MS Windows. Citect for Windows was written as a direct response to a request by Argyle Diamonds. The company had originally intended to use a Honeywell system until a number of Argyle's site engineers talked Argyle around to Citect after highlighting the existing problems they were having with Honeywell systems on site. Argyle contributed $1 million to the development of Citect for Windows. To this day the "ArgDig" alarm database (i.e. Argyle Digital) is still part of Citect.
Version 2
In 1993 BHP Iron Ore upgraded its Port Hedland operator interface to Citect for Windows. Being the largest installation attempted by Citect at the time, Version 1 was showing many limitations. Version 2 was developed to improve on these limitations. Key changes were made to the graphics configuration by Andrew Allan, including a move away from Dr Halo/Animation Point to the new "CTG" (Citect Graphics) system. A CTG combined the old BMP/AN files into a single object based file that gave the user a WYSIWYG look when using the new drawing package. The Port Hedland scope of work required additional functionality not inherent in Citect for Windows, but due to the versatile nature of the software (in particular by the use of Cicode) many additional features were programmed.
Version 3 and 4
Version 3 of Citect for Windows was developed to build in much of the functionality that previously had to be programmed, such as indication of a communications failure to any programmable electronic device displaying real-time data. While version 2 tended to be somewhat unstable, version 3 was quite robust. Version 4 was the same as version 3 but ported to the 32-bit platform of Windows NT.
Version 5 and 6
At this time Citect for Windows had the dominant market share (in Australia) of PC based operator interface software but new competitor software was catching up to the features and functionality of Citect and gaining in popularity. Citect began to focus more on remaining competitive; version 5 was released containing mainly features aimed at keeping the software at the leading edge of the market. Version 6 continued this trend and included more SCADA-like functionality in addition to the poll-based real-time control system that still remains the core of the Citect software today.
Version 7
Version 7 was released in August 2007. It was the first version to support the Windows Vista operating system. Support for Windows 7, along with notable features such as Pelco camera integration, was added in 2010 with the release of version 7.20.
CitectSCADA 2015 was released on 2 July 2015.
Version 8
Version 8 was released in 2016, with an overhauled UI and support for Windows 10.
References
Production and manufacturing software
SCADA
|
13016444
|
https://en.wikipedia.org/wiki/Mount%20Washington%20College
|
Mount Washington College
|
Mount Washington College was a for-profit college in New Hampshire, United States. Until 2013 it was known as Hesser College. It was owned by Kaplan, Inc., offered associate's and bachelor's degrees focused on business and information technology, and claimed to offer a flexible class scheduling system tailored to a diverse group of students. It was accredited by the New England Association of Schools and Colleges (NEASC).
It closed in May 2016.
Campus locations
Mount Washington College had five campus locations — in Manchester, Nashua, Portsmouth, Salem and Concord, where the school offered associate's and bachelor's degrees in a range of programs including business, information technology, digital media, criminal justice, liberal studies, healthcare management, psychology, and paralegal studies. The school also had an expanded national presence via its online education programs where students could obtain an associate's or bachelor's degree in business or information technology.
Academics
The Manchester campus's location in the Sundial Center of Commerce & Education gave students exposure to local businesses and potential internships not available at other locations or at other small colleges. The college incorporated an advisory program to ensure that students chose appropriate fields of study based on their specific strengths.
Mount Washington College also had what was known as the "Mount Washington Commitment", an innovative program that allowed prospective students to attend the college in their selected field without any tuition obligation. When the introductory period was over, students had the option to continue their studies or drop out without having to pay for classes.
Academically, Mount Washington maintained a relatively low student-faculty ratio for a college. The school encouraged students to continue their education to improve their professions. Student life incorporated various student organizations. The urban setting provided numerous opportunities for student life and professional opportunities.
The eight-week semester format of many courses was popular amongst students at the college, allowing for flexibility and a solid foundation in career skills.
The college offered multiple scholarships and financial aid, as well as federal work study programs that helped students pay for tuition and fees.
The college had resources for students with disabilities and had a good success rate for students with learning disabilities. In 2000 the college began offering information technology programs, and was recognized as a Microsoft Authorized Academic Training Program Institution.
The K.W.G. Memorial Library was an innovative media center that offered resources in digital format. The library was accessible via online accounts for students and had content-specific categories available for research. Academic support services were provided to students via that center.
Community
The college was involved in various charities and non-profits across the state. Mount Washington College provided school supplies to needy students. The school also partnered with Southern New Hampshire University to assist with providing food to needy populations in the state.
History
Founded as Hesser Business College in Manchester in 1900, Mount Washington College followed the principle of providing individual encouragement and assistance to all students instilled by its founder, Joel H. Hesser. Hesser was a New Hampshire educator and businessman who believed in providing educational opportunities for every citizen.
Hesser College began to expand its educational services beyond the city of Manchester in 1975 when the first extension campus was opened in Nashua, renting space at Bishop Guertin High School. In the decades that followed, additional campuses were opened in Portsmouth, Salem, and Concord.
In 2000 Hesser College celebrated its 100-year anniversary. In 2001 the college was approved by the New Hampshire Division of Higher Education to grant additional baccalaureate degrees in business administration. In 2005 the school offered bachelor of science degrees in psychology.
In July 2013, it was announced that Hesser College had changed its name to Mount Washington College. The college's new name also came with new online degree options.
On July 10, 2014, the college announced it was closing its Nashua and Salem campuses by September 2014 and laying off 50 employees. The closures were attributed to a 30% decline in student enrollment and came approximately one year after the college closed its Concord and Portsmouth campuses.
On August 4, 2015, the college's board of trustees announced it was closing the college. The college taught out its programs and closed its last remaining campus, in Manchester.
References
External links
Official website
Defunct private universities and colleges in New Hampshire
Graphic design schools in the United States
Educational institutions established in 1900
Educational institutions disestablished in 2016
1900 establishments in New Hampshire
|
58634334
|
https://en.wikipedia.org/wiki/Cegedim
|
Cegedim
|
Cegedim SA is a health technology company based in Boulogne-Billancourt, founded in 1969. It employs more than 4,200 people in more than 10 countries. Revenue in 2017 was €457 million.
The American subsidiary, Cegedim Inc., is based in Bedminster, New Jersey and has 2500 employees. It was formerly known as Dendrite International Inc. Founded in 1986, it changed its name in 2007.
Several Cegedim companies have been established in Romania since 2001: Cegedim Customer Information, a research and services company for the pharmaceutical industry; Cegedim Rx, a software and services company for ambulatory medical care; and the Cegedim Service Center.
It owns Cegedim Rx, which supplies pharmacy software (from the takeover of John Richardson Computers and NDC Health in 2004) and In Practice Systems Limited, which supplies primary care software.
Cegedim Rx supplies its Pharmacy Manager software to around 2,500 independent and regional and chain pharmacies in the UK. The Electronic Prescribing System crashed several times in June 2016, said to be the result of flooding in the London area where its data centre is based.
Cegedim Insurance Solutions provides software and services across healthcare systems.
References
Electronic health record software companies
Companies based in Paris
Healthcare companies of France
|
16474
|
https://en.wikipedia.org/wiki/Joint%20Interoperability%20of%20Tactical%20Command%20and%20Control%20Systems
|
Joint Interoperability of Tactical Command and Control Systems
|
Joint Interoperability of Tactical Command and Control Systems or JINTACCS is a United States military program for the development and maintenance of tactical information exchange configuration items (CIs) and operational procedures. It originated in 1977 to ensure that the command and control (C2 and C3) and weapons systems of all US military services and NATO forces would be compatible.
It is made up of standard Message Text Formats (MTF) for man-readable and machine-processable information, a core set of common warfighting symbols, and data link standards called Tactical Data Links (TDLs).
JINTACCS was initiated by the US Joint Chiefs of Staff in 1977 as a successor to the Joint Interoperability of Tactical Command and Control Systems in Support of Ground and Amphibious Military Operations (1971–1977). As of 1982 the program was hosted at Fort Monmouth in Monmouth County, New Jersey, and employed 39 military personnel and 23 civilians.
References
Command and control
Interoperability
Command and control systems of the United States military
|
52796478
|
https://en.wikipedia.org/wiki/Sage%20Manifolds
|
Sage Manifolds
|
SageManifolds (styled after SageMath) is an extension of SageMath, fully integrated into it, that provides differential geometry and tensor calculus. The official page for the project is sagemanifolds.obspm.fr. It can be used on CoCalc.
SageManifolds deals with differentiable manifolds of arbitrary dimension. The basic objects are tensor fields and not tensor components in a given vector frame or coordinate chart. In other words, various charts and frames can be introduced on the manifold and a given tensor field can have representations in each of them.
An important class of treated manifolds is that of pseudo-Riemannian manifolds, among which are Riemannian manifolds and Lorentzian manifolds, with applications to general relativity. In particular, SageManifolds implements the computation of the Riemann curvature tensor and associated objects (Ricci tensor, Weyl tensor). SageManifolds can also deal with generic affine connections, not necessarily Levi-Civita ones.
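The curvature machinery can be illustrated with a short session. The following is a minimal sketch, assuming it is run under a recent SageMath installation with the built-in manifolds package; the object names (M, X, g) are arbitrary. It declares the unit 2-sphere as a Riemannian manifold, enters the round metric in spherical coordinates, and computes the Ricci tensor and scalar curvature, the latter evaluating to the constant 2 expected for the unit sphere.

from sage.all import Manifold, sin

M = Manifold(2, 'S^2', structure='Riemannian')      # the 2-sphere as a Riemannian manifold
X = M.chart(r'th:(0,pi):\theta ph:(0,2*pi):\phi')   # spherical coordinate chart (theta, phi)
th, ph = X[:]

g = M.metric()                   # the metric tensor field, declared abstractly ...
g[0, 0] = 1                      # ... then given components in the chart X:
g[1, 1] = sin(th)**2             # ds^2 = dth^2 + sin(th)^2 dph^2  (round metric)

Ric = g.ricci()                  # Ricci tensor, computed from the Levi-Civita connection
print(Ric.display())             # Ric(g) = dth*dth + sin(th)^2 dph*dph
print(g.ricci_scalar().expr())   # 2, the constant scalar curvature of the unit sphere

The full Riemann curvature tensor is obtained in the same way via g.riemann().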
Functionalities
More documentation is on doc.sagemath.org/html/en/reference/manifolds/.
Free & Open Software
Like SageMath, SageManifolds is free and open-source software based on the Python programming language. It is released under the GNU General Public License, more specifically GPL v2+ (meaning that a user may elect to use a licence version higher than GPL version 2). Download and installation instructions are available on the project website.
Development
Much of the source is on tickets at trac.sagemath.org.
There are GitHub repositories at github.com/sagemanifolds/SageManifolds.
Other links are provided at sagemanifolds.obspm.fr/contact.html.
Free mathematics software
Python (programming language) scientific libraries
Free software programmed in Python
Free educational software
Mathematical software
|
653520
|
https://en.wikipedia.org/wiki/Medieval%3A%20Total%20War
|
Medieval: Total War
|
Medieval: Total War is a turn-based strategy and real-time tactics computer game developed by Creative Assembly and published by Activision. Set in the Middle Ages, it is the second game in the Total War series, following on from the 2000 title Shogun: Total War. Originally announced in August 2001, the game was released in North America on 19 August 2002 and in Europe on 30 August for Microsoft Windows.
Following a similar form of play to Shogun: Total War, the player builds a dynastic empire in Europe, North Africa and the Middle East, spanning the period of 1087 to 1453. Gameplay is both strategic and tactical, with strategy played out in turn-based fashion on a province-by-province level, while military units of varying types and capabilities fight against each other in real time on a 3D battlefield.
Medieval: Total War received acclaim from reviewers; several critics commending it as a milestone in gaming. The real-time battles were praised for their realism and the new feature of siege battles but also received some criticism for unit management. The depth and complexity of the strategy portion was also received well by reviewers, together with well integrated historical accuracy. The game was a commercial success, topping the British video game chart upon release.
Gameplay
Medieval: Total War is based upon the building of an empire across medieval Europe, North Africa and the Middle East. It focuses on the warfare, religion and politics of the time to ultimately lead the player in conquest of the known world. As with the preceding Total War game, Shogun: Total War, the game consists of two broad areas of gameplay: a turn-based campaign map that allows the user to move armies across provinces, control agents, diplomacy, religion, and other tasks needed to run their faction, and a real-time battlefield, where the player directs the land battles and sieges that occur.
The strategic portion of the game divides the campaign map among twenty factions from the period, with a total of twelve being playable. The initial extent of each major faction's territory, and the factions available, depends on the starting period of the game, Early (1087), High (1205) or Late (1321), reflecting the historical state of these factions over time. The factions themselves represent many of the major nations at the time, including the Byzantine Empire, France, England, the Holy Roman Empire and the Turks. Several factions, such as the Golden Horde, emerge during the course of play at their historical time. These factions, together with several other factions appearing at the start of the campaign, are unavailable to the player in the main campaign. Each faction varies in territory, religion and units; however, factions of the same culture share many of their core units.
In addition to the main campaign, Medieval: Total War also features a game mode where the player can undertake various historical campaigns and battles. Historical campaigns allow the player to control a series of famous battles from a war of the medieval period, such as the Hundred Years War and the Crusades, playing as historic commanders like Richard the Lionheart. Individual historic battles have the player controlling a historical figure in an isolated battle that occurred in the era, such as controlling William Wallace through the Battle of Stirling Bridge.
Campaign
The main campaign of Medieval: Total War involves the player choosing one of the fourteen playable factions and eventually leading them in conquest on the strategy map. Each of the factions controls a number of historical provinces, which on the map contain a castle and, if located by the sea, a port as well. In the campaign, the player controls construction, unit recruitment and the movement of armies, fleets and agents in each of these provinces, using these means to acquire and defend the provinces. Diplomacy and economics are two other aspects the player can use to advance their aims, as well as having access to more clandestine means such as espionage and assassination. Religion is very important in the game, with the player able to convert provinces to their own religions to cement the people's loyalty. Another campaign mode is available, called "Glorious Achievements", in which each faction has several historically-based goals to achieve, which score points; the faction with the most achievement points wins the game. The campaign mode is turn-based, with each turn representing one year, allowing the player to attend to all needs of the faction before allowing the artificial intelligence to carry out the other factions' moves and decisions.
The campaign is carried out in a similar fashion to Shogun: Total War, but features many enhancements. The game is set mainly in Europe, but also features the Middle East and North Africa. Production can occur in every province, with the player building from one of the hundreds of connected buildings and units in the game's technology tree. Income to develop provinces and armies comes from taxation of the provinces and trade with neighbouring provinces. There is no specific technology research, but several advances, such as gunpowder, do become available over time. Castles provide the basis for more developed construction in the game, with players having to upgrade to the next castle level to be able to build more advanced buildings; upgrades such as a curtain wall and guard towers can be added to individual castles. Many buildings have economic functions, such as trading posts that generate money, while others are military buildings and allow the training of more advanced unit types. Whilst there are many common unit types, several unique units are available. These units are either restricted to a single faction or are dependent on the control of a particular province. Each unit possesses different strengths and weaknesses.
Each faction has a variety of different generals, some related to the royal family and in line to the throne, and the rest members of the nobility, who command units in the field and can assume offices of the state. Each of these characters has a base ranking for several attributes, such as command ability and piety, which affects how they carry out duties on the battlefield and governing the provinces. These attributes, and other factors such as health, are influenced by "Vices and Virtues", defining the character's personality and actions. These traits can be acquired seemingly randomly, or may be given to the character through actions in the game. Non-military units, collectively referred to as "agents", may be trained. The types of agent a faction is able to produce depends on its religion, but all factions have emissaries, spies and assassins available to them. Emissaries conduct diplomatic tasks such as start alliances between two factions, or bribe foreign armies; spies allow detailed information to be collected from foreign provinces or characters, while assassins can attempt to kill both foreign and domestic units. Factions also have access to various religious agents to spread their religion, and Christian factions can marry their princesses to domestic generals or other factions for political reasons. Occasionally in the game, a character will be trained bearing the name of a famous historical figure, with better than normal starting abilities. A general such as Richard the Lionheart, El Cid or Saladin will be a capable military commander, while a bishop such as Thomas Becket will have higher piety than normal.
Rebellions can occur if the loyalty of a particular province falls too low, with a rebel army appearing in the province to attempt to assume control from the owners. Civil wars may also take place if several generals commanding large armies have sufficiently low loyalty. In the event of a civil war, the player is given the choice to back either the current rulers or the rebels. It had been planned to allow other factions who had established a prior claim to the throne by marriage to princesses to join in a civil war to claim the throne for themselves; however, this was never implemented. Naval warfare is carried out upon the campaign map, where ships can be built and organised into fleets. These fleets can be used to control the game's sea regions and form sea lanes, allowing trade and troop movement between provinces that have constructed a port. Fleets can engage in sea battles with foreign fleets, although unlike land battles these are resolved by the computer. Religion plays an important aspect in Medieval: Total War, with religious differences between the Catholic, Orthodox and Muslim factions affecting diplomacy and population loyalty. Catholic factions must also respond to the wishes of the Papal States; factions gain favour by refraining from hostilities with other Catholic nations and responding to Crusades, else they run the risk of excommunication. The option to launch a holy war in the form of a Crusade or Jihad is open to both Catholic and Muslim factions.
Warfare
The battle system takes place on a 3D battlefield in real-time, instead of the turn-based system of the campaign. Battles are similar to those in Shogun: Total War, where two armies from opposing factions engage in combat until one side is defeated or withdraws. Warfare in Medieval: Total War occurs when the player or the artificial intelligence moves their armies into a province held by a hostile faction. The player is then presented with the option of fighting the battle on the battle map, or allowing the computer to automatically resolve it. Alongside the campaign battles, players have the option of both historical and custom battles, where the player controls what climate, units and terrain will be present on the battlefield.
During battles, players take control of a medieval army containing various units, such as knights and longbowmen, each of which has various advantages, disadvantages and overall effectiveness. Players must use medieval tactics in order to defeat their enemy, using historical formations to give units advantages in different situations. All units in the game gain experience points, known as "valour", which improves unit effectiveness in combat as it increases. Every battle map contains various terrain based upon that of the province on the campaign map, with separate maps for each of the borders between provinces – four hundred unique maps are available for the game. The climate, surroundings and building style for every map varies depending on the part of the world it is located in; for example, a map based in the Middle East will have a hot, sunny climate, sandy terrain and Islamic architecture. Sieges are an important aspect of the game introduced to the Total War series, occurring when the invading army elects to attack the defending army which has retreated inside the province's castle. Upon starting the engagement, the attacker has to fight their way through the castle's defences, winning the battle once the enemy units have been defeated. Each unit in the game has morale, which can increase if a battle is going well for their faction, or decrease in situations such as sustained heavy casualties. Morale can drop low enough to eventually force a unit to rout off the battlefield, with the player having the option to attempt to rally the men back into the battle through their general. Each side's army can capture routing enemy units and ransom them back to the owning faction, with important generals having greater ransom values.
Multiplayer
Medieval: Total War features a multiplayer game mode similar to that in Shogun: Total War, where players can engage in real-time battles with up to seven other players. Players create and control armies from the factions available in the game, where players can use them to compete in online tournaments or casual battles. The campaign mode cannot be played multiplayer; this feature was later added to the Total War series in Empire: Total War – but only at the beta stage, before being later removed.
Development
Medieval: Total War was originally announced by The Creative Assembly on 3 August 2001, with the working title of Crusader: Total War. Development of the game started shortly after the release of Shogun: Total War. Early in development it was decided to change the name to Medieval: Total War; this was to have a name that better reflected the scope of the game. In a press release, The Creative Assembly announced that the game would be published by Activision instead of Electronic Arts, the publisher of the previous games. The Creative Assembly also outlined the features of the game, including the game covering the medieval era from the 11th to 15th century, with players being able to participate in various historical scenarios of the time, such as the Hundred Years' War. Media releases over the subsequent months gave screenshots of the game, with more information on Medieval: Total War's features. The game uses an updated version of the game engine used in Shogun: Total War, allowing larger battles than previously possible with an increased troop limit of ten thousand. The improved game engine also allowed more battle maps than previously possible, now based upon where the conflicting armies are located on the strategy map. Other new battlefield enhancements included terrain detailed with villages and vegetation and improved castle siege mechanics, with players now having to focus on destroying the walls before assaulting and capturing the castle. The game features improved artificial intelligence from Shogun: Total War, with the individual unit AI and the tactical AI—which controls the overall army tactics—separated to more effectively control the opposing forces.
The Creative Assembly's creative director, Michael de Plater, stated in an interview that "We were never 100 percent satisfied with the name Crusader...it didn't cover the full scope or the rich diversity of the game". The focus on the medieval period was chosen because "it was perfectly suited to the direction in which we wanted to take the gameplay....we wanted to have great castles and spectacular sieges." Designer Mike Brunton wrote before the game's release that sieges were one of the most important features to be added to the Total War series, explaining how it led to increasing the troop limit from twenty in Shogun: Total War to over a hundred in Medieval: Total War. For increased authenticity, research was carried out into the medieval period aspects such as assassinations and historical figures. Leaders from the period were included in the game; to represent their personalities and actions the "vices and virtues" system was incorporated into the game, designed to make characters more realistic in their actions.
A demonstration of the game was released on 26 June 2002, featuring tutorial missions and a full single-player mission. The game was released on 19 August in North America and on 30 August in Europe. The Creative Assembly released a patch on 5 November 2002, which was targeted to fix the several bugs that were still present in the game. A new historical battle based on the Battle of Stamford Bridge was later released by The Creative Assembly, made available through Wargamer.
Reception
Medieval: Total War received "favorable" reviews according to the review aggregation website Metacritic. In the United Kingdom, the game went straight to the top of the video game chart after its release, staying at the top for two weeks. It ultimately received a "Silver" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), indicating sales of at least 100,000 copies in the United Kingdom. The United States charts saw Medieval: Total War reach fourth in its second week after release, behind Warcraft III, The Sims and its expansion The Sims: Vacation. It sold over 100,000 copies in the region by August 2006, but was beaten by its successor Rome: Total War's 390,000 sales there.
Reviewers praised the many different factors adding to the complexity of the campaign, with ActionTrip noting that "Medieval adds a new strategic balance to the game, which teaches that great empires come with an even greater responsibility". Eurogamer praised the way the player had to manage production queues, guard the loyalty of important generals and make use of spies and assassins, calling the level of control "far ahead of anything seen in the previous game"; many other reviews shared a similar view. The number of factions in the game, each with their own historically accurate units and territories, was commended, with IGN and Game Informer stating it "gives the game huge replay value" and GameSpot adding "the strategic portion now has a lot more options". Many reviewers praised how well the historical setting of the Middle Ages was integrated into the game; PC Zone acknowledged that the "brutality and instability" of the era is well captured, and GameSpot praised the treatment of religion: "religion played an enormous role in shaping history, and so it is in the game". GameSpy stated that the different historical starting positions made the game's attention to detail "impressive and noteworthy"; the historical battle system was also well received by Computer Gaming World, which stated that the battles successfully "provide an authentic glimpse of the past". However, GameSpot commented on a lack of information ("you'll have a tough time keeping track of all the goings-on in your provinces"), suggesting this could be solved through a more informative interface. Overall, reviewers highly complimented the strategic gameplay, many saying it was similar to a Civilization-style game.
The battle system in Medieval: Total War was considered by many reviewers to be the highlight of the game. In their review, Eurogamer felt "The sensation of scale and drama in these conflicts is incredible", praising the visual effects and combat. The different battlefields and their environments were praised by IGN, with ActionTrip agreeing that "Medieval: Total War looks better than Shogun" and adding that the terrain and units are more detailed than those in Shogun: Total War. GameSpot praised the realistic battles, mentioning that the inclusion of real-world battlefield considerations like fatigue, ammunition, facing, and morale was a "welcome change". GameSpy also stated the "chaotic" battles were appropriate to the era, but criticised the siege aspect, claiming it to be "too plain and underwhelming", with a lack of detail compared to the other 3D elements. The soundtrack was well received by IGN ("The soundtrack is full of rousing context-sensitive orchestra moments which get you in the mood for bloody slaughter"), which commented that it is fitting for a game that "delivers body counts like no other". ActionTrip also admired how the game's music changed pace as the battle commenced, praising The Creative Assembly for its "masterfully placed audio and visual effects". The artificial intelligence for Medieval: Total War was thought to be much improved over its predecessor, with CGW mentioning the AI was intelligent enough to prevent brute force alone from winning fights. Criticism was directed at the larger battles, which suffered from low frame rates and poor performance, while ActionTrip also noted several unit management issues with path-finding and unit facing, stating "it's demoralising to see archers facing the wrong way".
Medieval: Total War received very favourable reviews, despite a few criticisms, gaining high distinction from the industry. GameSpot summarised by saying that while the game "isn't well suited for the casual gamer", most strategy gamers will "find a lot to like in it, for a very long time." Although GameSpy described Medieval: Total War as "hit[ting] a few bumps in the road", they mentioned the game has enough to keep players interested for many months. Eurogamer was enthusiastic in pronouncing Medieval: Total War "a milestone in gaming". IGN closed by stating that the game "delivers an encompassing experience", while CGW finished by proclaiming "there simply isn't enough room in this magazine to extol its virtues."
Awards
Medieval: Total War was the recipient of a number of industry awards. PC Gamer UK named it the top game of 2002, replacing the previous entry, Valve's Half-Life. In awarding the distinction, PC Gamer stated: "It was the only contender." The game received an EMMA award in Technical Excellence for its audio by Jeff van Dyck, commended for having a "game soundtrack and score that is lush, well-mixed, and adds dynamically to the gameplay. The extensive diverse musical tracks sound authentic and fully engage the user." The game received a number of distinctions from game publications, such as the "Best Strategy Game of 2002" award from GameSpy, mentioning "It's not that Medieval is just two great games in one. It's two games that feed off of one another for the ultimate rush." The Creative Assembly itself was also awarded the European Computer Trade Show PC Game Developer of the Year award, for the production of Medieval: Total War.
GameSpot selected Medieval as the best computer game of August 2002, and later presented the game with its annual "Best Single-Player Strategy Game on PC" award. The editors of Computer Games Magazine named it the eighth-best computer game of 2002, and called it "rich in atmosphere and compelling for long hours." It was nominated for PC Gamer US's "2002 Best Turn-Based Strategy Game" and Computer Gaming World's "Strategy Game of the Year" awards, which ultimately went to Combat Mission: Barbarossa to Berlin and Freedom Force, respectively. The latter magazine's editors highlighted Medieval's "grandeur and flourish in simulating European history".
Expansions and versions
The Creative Assembly announced the development of an expansion pack, Medieval: Total War – Viking Invasion, on 7 January 2003. The Viking Invasion expansion pack adds a Viking campaign taking place from 793 to 1066, set upon an expanded map of the British Isles and western Scandinavia. The campaign replaces the original factions with earlier Anglo-Saxon and Celtic kingdoms such as Wessex, Mercia, Wales and Scotland, as well as the Vikings. The Viking faction is designed to raid the British Isles; to achieve this the faction has access to faster ships and gains money for every building destroyed upon the battle map. The Anglo-Saxon and Celtic factions have the goal of repelling the Vikings and ultimately controlling the British Isles. New historical units were included with the expansion pack, such as the huskarls. Medieval: Total War: Viking Invasion brought several enhancements that were also added to the original campaign: flaming ammunition giving the player an option to set alight enemy castles, and a pre-battle deployment screen, allowing the player to organise their forces and view the terrain and opposing forces before the battle begins. In addition, three new factions were added to the main Medieval: Total War campaign, along with ribauldequin artillery and the game's patch. The expansion pack was released on 7 May 2003 in the United States and on 9 May in the United Kingdom.
Activision, the game's publisher, produced a combination of Medieval: Total War and Medieval: Total War: Viking Invasion, called the Medieval: Total War Battle Collection, released on 7 January 2004. Medieval: Total War Battle Collection contained both games, patched to the latest version, and their manuals. On 30 June 2006, Sega, the company that took over the publishing of the series, released a collector's edition version of the Total War series, called Total War: Eras. The edition included patched versions of Shogun: Total War, Medieval: Total War and Rome: Total War, together with their expansion packs, a documentary detailing the creation of the game series, and Total War memorabilia.
Reception
Viking Invasion received "favorable" reviews, albeit slightly less favorable than those for the original Medieval: Total War, according to Metacritic. Reviewers felt the new gameplay features for the Vikings were the most important enhancement of the expansion pack, with Eurogamer commending the Vikings' raiding system as something that fixes what "the original Medieval lacked". ActionTrip praised the new campaign as being a challenge for players: "even on the normal difficulty setting, Viking Invasion is a very challenging game", a view shared by other critics. The pre-battle screen was commended by GameSpot, which called it a "handy new feature." GameSpot also praised the new additions to the original campaign, mentioning they had "made castle sieges more interesting". The main criticism of Medieval: Total War – Viking Invasion was the graphics, with both ActionTrip and Eurogamer stating that they were "starting to feel a little bit creaky". A lack of new multiplayer options was considered by GameSpot to be "unfortunate", mentioning that "a multiplayer campaign option would have been a great new feature". Overall, the expansion was received well by critics in the industry. IGN concluded by saying fans "won't be disappointed with the Viking Invasion", while ActionTrip finished by stating that "the graphics are beginning to look old" but that the challenge made the expansion "worth it". The review by GameSpot finished by saying "overall, the expansion is a great addition to Medieval", and Eurogamer concluded by praising the addition it made to Medieval: Total War: "It's a worthy expansion pack to a truly excellent game".
The editors of Computer Gaming World nominated Viking Invasion for their 2003 "Expansion Pack of the Year" award, but it lost to Battlefield 1942: Secret Weapons of WWII. It was also a runner-up for Computer Games Magazine's "Expansion of the Year" award, which ultimately went to EverQuest: Lost Dungeons of Norrath.
References
External links
Total War official site
The Creative Assembly official website
2002 video games
Activision games
Video games set in the Middle Ages
Real-time tactics video games
Creative Assembly games
Total War (video game series)
Turn-based strategy video games
Video games with expansion packs
Windows games
Windows-only games
Video games scored by Jeff van Dyck
Video games scored by Saki Kaskas
Video games set in Africa
Video games set in Europe
Video games set in the Middle East
Video games developed in the United Kingdom
Multiplayer and single-player video games
Video games set in the Viking Age
Historical simulation games
Grand strategy video games
|
21829900
|
https://en.wikipedia.org/wiki/Roosevelt%20College%20Marikina
|
Roosevelt College Marikina
|
FEU Roosevelt Marikina is a private non-sectarian college named in honor of the American president Franklin D. Roosevelt. Its former name was Roosevelt Memorial High School. It was founded in 1933 as Marikina Academy. It is considered the oldest academic institution in eastern Metro Manila. The college offers courses from pre-school to postgraduate studies. Roosevelt College primarily serves the educational needs of the province of Rizal and eastern Metro Manila. Aside from the flagship Cainta campus, Roosevelt College has campuses in Marikina (the original school location), Cubao, San Mateo, and Rodriguez.
Notable alumni
Francisco Tatad – former Senator – Roosevelt Homesite
Mario Parial – painter, printmaker, sculptor and photographer. – Roosevelt Homesite
Novelita "Nova Villa" Villanueva – actress – Roosevelt Homesite
Edgardo Angara – Senator
General Delfin N. Bangit – Armed Forces of the Philippines Chief of Staff – Roosevelt San Mateo
Julian Marcus Trono – child actor and television personality – Roosevelt College Cubao
Del De Guzman – Mayor – Roosevelt College Marikina
Cristine Reyes – actress – Roosevelt College Cainta
Allan Caidic – basketball player – Roosevelt College Cainta 1976–1980
History
Foundation as Marikina Academy
To understand how Roosevelt College metamorphosed from what was then the Marikina Academy of 1933 to what it is today, we need to bring to light some significant precedent events that took place within a span of six decades. Roosevelt College was born in 1945 as Roosevelt Memorial High School along the busy J.P. Rizal Street in Barrio San Roque, then a part of Rizal province.
Marikina used to be an agricultural town. While the older folk had much to do in the rice fields and vegetable farms, the majority of the middle-aged and the young would readily find themselves busy with, and enjoying, a century-old shoemaking industry. It was almost a common sight to see families working together under their respective thatched roofs from early dawn to late evening, busy attending to their handcrafted pairs of shoes.
Because of the nature of this cottage industry, families grew to be well-knit and clannish. Popular education, however, was limited to the "Katon Kristiyano" and to the primary and elementary grades available in a few barrio schools. A few affluent families could easily send their children to high school and college in Manila, while those who had hardly enough would still need a lot of time and money to leave shoemaking and take trips to and from Rizal. It was natural that a good number of boys and girls missed their secondary and tertiary schooling.
Such was the life of the people of Marikina then. With a meager income from shoemaking and farming and no public secondary school to go to, many children were left without the benefits of formal education. Mayor Wenceslao C. dela Paz, pre-war town executive of the late 1920s, foresaw that the town would not remain agricultural for long. It was his obsession to let the young go beyond grade school and learn something more than shoemaking and farming.
Realizing the futility of seeking public financial support for a high school in town, he mustered his own family resources and established in 1933 a secondary school he called Marikina Academy, a self-sustaining, non-sectarian private institution of learning.
During the first year of operations, only 23 students enrolled, all children of shoemakers. The school was housed at the old residence of the then Congressman Emilio dela Paz. Head of the school was Engr. Quiterio Q. Marcos who, because of his work as an engineer in the government, was replaced by Mr. Rosendo de Guzman a year later. The first teachers who were handpicked from the community were Mr. Ireneo M. Cruz, Miss Teofista O. Cruz, Mr. Leon Florencio, Miss Aurora Joseph and Mr. Hilario G. de Asis. The following year, enrollment increased to 54. The first graduation exercises took place in 1936 with only 10 "pioneer" graduates. In an examination given by the government for recognition purposes that year, the Marikina Academy placed 19th among the 70 private schools all over the country. By 1941, enrollment swelled to 272. This necessitated the transfer of the school from its original site in Barrio Sta. Elena to a rented old "Hacienda-type" house in Barrio San Roque.
The school was closed on December 8, 1941 when World War II broke out. In mid-1942, the school was reopened for those who wanted to carry on their education under the Japanese regime. In September 1944, when the American battle for the liberation of the Philippines had almost reached the Greater Manila area, the school was again closed by mandate of the Japanese military authorities.
Later that same year, amidst turmoil and destruction of lives and properties, sad news broke that Mayor Wenceslao C. dela Paz, founder of the Marikina Academy, had been arrested and incarcerated in Fort Santiago on charges of treason and guerrilla activities. No one can say for certain where he was imprisoned or how he died. His body was never found.
The Marikina Academy ceased operations after the death of its founder. Meanwhile, a group of prominent citizens and educators, anticipating the enormous task of postwar reconstruction and rehabilitation, felt that, more than ever before, the town of Marikina needed a secondary school. They moved to revive the Academy and invited all concerned to join them in the noble effort.
Roosevelt Memorial High School
In 1945, and in keeping with the trend of the times, a corporation was organized to operate a secondary school in the same old hacienda house. It absorbed all the records and resources of the old academy and brought in new ones. It likewise retained the services of the teachers. In a gesture of loyalty and gratitude to the Americans, the school was named Roosevelt Memorial High School in honor of the late wartime US President, Franklin D. Roosevelt.
Engineer Deogracias F. dela Paz, one of the incorporators, was elected first President of the Board of Trustees, a position he held for 33 years. Other members were the widow of the founder of the old academy, Mrs. Felisa Mallari-dela Paz, Mr. Leonardo P. Santos, Sr., Mrs. Miguela Gonzales and Mr. Rosendo de Guzman. Mr. Ireneo M. Cruz was designated as its School Director.
The school became the rallying point of almost all elementary graduates of the public schools in Marikina, San Mateo and Montalban. Young boys and girls whose parents were willing to pay a minimal fee for the education of their children found themselves studying at Roosevelt. Other students whose studies were disrupted by the war went back to school and took advantage of the abridged curriculum offered by the Bureau of Education. The school was forced to operate in two shifts to accommodate as many entrants as possible.
In pursuance of this mission, the school started to branch out in the peripheral areas of Marikina. As many natives of Marikina migrated to San Juan and settled along N. Domingo Street, RMHS San Juan was established in this town in 1946.
The following year, a second branch was established at the corner of the then Highway 54 (now Epifanio de los Santos Avenue, EDSA) and the Marikina-San Juan Road (now Aurora Blvd.). This became known as RMHS Cubao. To cater to the educational needs of military personnel, a branch was established at the former site of the PA Ordnance Center in 1948. This unit was operational for only one year, as the Armed Forces of the Philippines (AFP) opened the AFP School for Enlisted Men; records were then integrated with those of the Cubao Unit.
In 1949, in response to the clamor of the townsfolk of San Mateo and Montalban, RMHS San Mateo Branch was established and started to operate in an old house along the main road. This unit was later renamed Doña Aurora High School, in honor of Doña Aurora Quezon who died in an ambush that same year.
The early 1950s saw the rapid growth and development of the school in terms of population and facilities. The main school in Marikina gave up the old hacienda and moved to a new building just a stone's throw away. From then on, the main school became RMHS Administration. RMHS Murphy Branch opened in 1951 and continued to operate as a separate unit until it was fused with the Cubao Unit in 1972.
In 1953, RMHS Quirino opened its doors to serve the residents of the newly opened government housing projects 2, 3 and 4. In 1954, DAHS in San Mateo moved to a new campus just behind the Catholic Church. The Cubao Branch likewise moved to a two-storey building at a new site along Aurora Blvd., a few meters away from its former site.
The rest of the 1950s were growth years for the RMHS. Enrollment increased by the hundreds every year, and the school's visibility was evident. For the rest of the decade, the school provided the community with education-oriented activities; the intramural games, the Christmas lantern parade and program, and the week-long foundation day celebration, highlighted by the coronation of Miss Roosevelt and the parade, were subjects of much anticipation among the townsfolk.
College Establishment
The 1960s were characterized by radical transformation. The main school in Marikina, now housed in a three-storey building, started to offer collegiate courses in June 1962. From then on, RMHS became Roosevelt College. In the same year, the much-awaited school unit in Montalban (now Rodriguez) also opened its doors to serve the educational needs of the town.
In 1964, the Elementary School Division was restored at the main school, thus making Roosevelt College a three-level school. In the same year the need to expand became even more evident as new housing areas started to mushroom in the northern part of Marikina. Initial steps were taken to establish a branch school in the area.
In 1965, a six-room, two-storey building, situated on a reclaimed swamp along J.P. Rizal St. in Lamuan, started operation as an annex to the Administration unit in Marikina. Inspired by the overwhelming public response, a 42-room, three-storey pre-stressed concrete edifice was completed in 1967. This building was to house the Administration offices and the four institutes of the College Division, which were transferred from the main school in San Roque, Marikina, in addition to the complete secondary and elementary schools. From then on, this became the Roosevelt College Lamuan unit while the mother school in San Roque, Marikina became Roosevelt College, San Roque Unit. In a few years, the combined population of the two schools came close to 10,000 students. Thus, the two schools, instead of competing, actually complemented each other in serving the educational needs of the people of Marikina.
In 1969, RMHS Cubao moved to a new 20-room, two-storey concrete building on a sprawling site along 10th Avenue in Cubao, while a 6-room, two-storey building of the same type was constructed at Doña Aurora High School.
The 1970s were marked by an enrichment of educational offerings. The Institute of Graduate Studies opened in the early part of the decade to cater to teachers who wished to complete their master of arts degrees but were inconvenienced by long trips to schools in Manila. In a positive move to upgrade the classroom delivery system, the Research, Publication and Supervision (RPS) Office was created, followed a little later by the English Training Center for teachers.
In anticipation of the termination of the lease of the campus site of RC San Roque, a 20-room single-storey building was constructed in 1974 on a 1.5-hectare parcel of land along Sumulong Highway in Cainta. Initially, this was called the Marikina School and commenced operation after some high school and elementary classes were transferred from RC San Roque. In 1976, upon the completion of a 66-room, three-storey concrete structure, the entire RC San Roque, the mother school of the Roosevelt School System, was transferred to this site, thus necessitating the change of name to Roosevelt College, Sumulong or Cainta unit. College classes were also offered, starting with the Institutes of Arts and Sciences, Education and Nursing. The Central Administration Offices, which had been transferred to RC Lamuan in 1967, were again transferred to the new campus.
In the early 1980s, in a move to rationalize college operations, the Institutes of Education, Commerce, and Arts and Sciences were confined to the Marikina Campus while the Cainta Campus specialized in Engineering. The Cainta Campus likewise became the center of the Graduate School Agro-Forestry Extension Program, a non-traditional approach to graduate education offered to teachers within Region IV of the Department of Education, Culture and Sports (DECS).
But the most significant developments of the 1980s focused on management systems and processes and on academic content and delivery systems. Massive reforms were effected in the planning system, organizational set-up, staffing patterns and practices, financial controls and unit-level management procedures. A 5-year academic development program was launched in 1980. The program aimed to improve the academic standard of the school by improving the general internal environment and systems, upgrading the qualifications of its people, improving school facilities and instructional materials, and adopting a more discriminating student acceptance policy. In 1988, as if to test the effects of the program, initial steps were taken towards accreditation with the Philippine Accrediting Association of Schools, Colleges and Universities (PAASCU), and a school universal standard, which raised the passing score from the usual 50% to 65% and later 70%, was put into effect; if instructional systems had indeed improved, the students would be ready for a higher level of performance expectations.
Since the 1990s
The decade of the 1990s was greeted by bold steps taken by the Board of Trustees. The Roosevelt College Foundation Center for Teacher Education was established with a substantial initial outlay and no return on investment expected, except for the chance to hire teacher education graduates who are highly trained and qualified for the job. It was to become Roosevelt College's contribution to the improvement of Philippine education. The College Division responded to the country's demand for manpower at the technical level by opening two-year courses in Computer Programming, Computer Secretarial and Computer Technician. Meanwhile, training for the professions was maintained at competitive levels by upgrading the engineering curricula and facilities to conform with the Technical Panel for Engineering and Architecture Education (TPEAE) standards. New courses like BS in Computer Science and BS in Commerce major in Management Information Systems were introduced.
The latter half of the decade ushered in another degree course, BS in Computer Engineering aimed to produce professionals for the Information Technology industries. To handle the growing demand for an effective middle-level work force in the construction industry, Construction Technology was opened in SY 1998–1999. Through the joint efforts of the school and the Technical Education and Skills Development Authority (TESDA), the course was strengthened through a consortium with the Philippine Constructors Association, Marikina Valley Chapter (PCA-Marivalley), a group which will provide the required industry training for the students.
Capping the decade was the opening of another major under BS Commerce, Business Management and Public Administration. The elevation of the Computer Education Department to the Institute of Computer Education was another milestone of the period.
An amount of not less than PhP 12 million was appropriated for the renovation of the RMHS San Juan Campus and the Administration Building of DAHS San Mateo. This was followed by PhP 27 million for the new Administration and College Building at the Cainta Campus. Another marked improvement is the newly renovated Roosevelt College Marikina Gymnasium (1998–1999).
As an added measure to make educational offering more relevant, the high school and elementary curricula were made quantitative. Pupils from Grade 2 to 6 were given Basic Computer Skills Lessons. Computer literacy was integrated in the THE 1 and 2, while Educational Computing was offered as a specialized course in the THE 3 and 4; likewise, Computer-Aided Instructions in English, Science and Mathematics were introduced. Advanced Computer courses were also offered as enrichment subjects for high school students while Values Education 3 and 4 were replaced by Trigonometry and Calculus.
Keeping up with recent developments in early childhood education, SY 1998–1999 saw the expansion of the pre-school program with the inclusion of a Development Kindergarten room in Roosevelt College Marikina, similar to the one in Cainta. It was formally opened in SY 1999–2000.
Activities for 3- to 4-year-old children are made meaningful and open-ended, with lessons enhanced by an illuminated keyboard and multi-kid materials. The total program takes into account learning principles that are based on knowledge of child growth and development.
External links
Official website
References
Universities and colleges in Metro Manila
Educational institutions established in 1933
1933 establishments in the Philippines
Schools in Marikina
Buildings and structures in Marikina
GB 18030
https://en.wikipedia.org/wiki/GB%2018030
GB 18030 is a Chinese government standard, described as Information Technology — Chinese coded character set, that defines the required language and character support necessary for software in China. GB18030 is the registered Internet name for the official character set of the People's Republic of China (PRC), superseding GB2312. As a Unicode Transformation Format (i.e. an encoding of all Unicode code points), GB18030 supports both simplified and traditional Chinese characters. It is also compatible with legacy encodings including GB2312, CP936, and GBK 1.0.
In addition to the "GB18030 character encoding", this standard contains requirements about which scripts must be supported, font support, etc.
History
The GB18030 character set is formally called "Chinese National Standard GB 18030-2005: Information Technology—Chinese coded character set". GB abbreviates Guójiā Biāozhǔn (国家标准), which means national standard in Chinese. The standard was published by the China Standard Press, Beijing, 8 November 2005. Only a portion of the standard is mandatory. Since 1 May 2006, support for the mandatory subset is officially required for all software products sold in the PRC.
An older version of the standard, known as "Chinese National Standard GB 18030-2000: Information Technology—Chinese ideograms coded character set for information interchange—Extension for the basic set", was published on March 17, 2000. The encoding scheme stays the same in the new version, and the only difference in GB-to-Unicode mapping is that GB 18030-2000 mapped one character (ḿ) to the private use code point U+E7C7 and another character (without specifying any glyph) to U+1E3F (ḿ), whereas GB 18030-2005 swaps these two mapping assignments. More code points are now associated with characters due to updates to Unicode, especially the appearance of CJK Unified Ideographs Extension B. Some characters used by ethnic minorities in China, such as Mongolian characters and Tibetan characters (GB 16959-1997 and GB/T 20542-2006), have been added as well, which accounts for the renaming of the standard.
Compared with its ancestors, GB 18030's mapping to Unicode has been modified for the 81 characters that were provisionally assigned a Unicode Private Use Area code point (U+E000–F8FF) in GBK 1.0 and that have later been encoded in Unicode. This is specified in Appendix E of GB 18030. There are 24 characters in GB 18030-2005 that are still mapped to Unicode PUA. According to Ken Lunde, the 2018 Draft of a new revision of GB 18030 will finally eliminate these mappings.
As a national standard
The mandatory part of GB 18030-2005 consists of the one- and two-byte encodings, together with the four-byte encoding for CJK Unified Ideographs Extension A. The corresponding Unicode code points of this subset, including provisional private assignments, lie entirely in the BMP. These parts correspond to the fully mandatory GB 18030-2000.
Most major computer companies had already standardised on some version of Unicode as the primary format for use in their binary formats and OS calls. However, they had mostly supported only the code points in the BMP originally defined in Unicode 1.0, which allowed only 65,536 code points and was often encoded in 16 bits as UCS-2.
In a move of historic significance for software supporting Unicode, the PRC decided to mandate support of certain code points outside the BMP. This means that software can no longer get away with treating characters as 16-bit fixed-width entities (UCS-2). Therefore, it must either process the data in a variable-width format (such as UTF-8 or UTF-16), which are the most common choices, or move to a larger fixed-width format (such as UCS-4 or UTF-32). Microsoft made the change from UCS-2 to UTF-16 with Windows 2000.
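As a rough illustration of the width issue, a minimal Python sketch follows (the specific character is simply one code point from CJK Unified Ideographs Extension B, chosen here for illustration); it shows that such a character cannot fit in a single 16-bit unit:

# U+20000 lies outside the BMP, so fixed-width UCS-2 cannot represent it at all.
ext_b_char = "\U00020000"                         # a CJK Unified Ideographs Extension B character
print(len(ext_b_char.encode("utf-16-le")) // 2)   # 2 UTF-16 code units (a surrogate pair)
print(len(ext_b_char.encode("utf-8")))            # 4 bytes in UTF-8
print(len(ext_b_char.encode("gb18030")))          # 4 bytes in GB 18030 as well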
Mapping
GB 18030 defines a one (ASCII), two (extended GBK), or four-byte (UTF) encoding. The two-byte codes are defined in a lookup table, while the four-byte codes are defined sequentially (hence algorithmically) to fill otherwise unencoded parts in UCS. GB 18030 inherits the bad aspects of GBK, most notably needing special code to safely find ASCII characters in a GB18030 sequence.
The one- and two-byte code points are essentially GBK with the addition of the euro sign, PUA mappings for unassigned/user-defined points, and vertical punctuation marks. The four-byte scheme can be thought of as consisting of two units, each of two bytes. Each unit has a similar format to a GBK two-byte character but with a range of values for the second byte of 0x30–0x39 (the ASCII codes for decimal digits). The first byte has the range 0x81 to 0xFE, as before. This means that a string search routine that is safe for GBK should also be reasonably safe for GB18030 (in much the same way that a basic byte-oriented search routine is reasonably safe for EUC).
This gives a total of 1,587,600 (126×10×126×10) possible 4 byte sequences, which is easily sufficient to cover Unicode's 1,112,064 (17×65536 − 2048 surrogates) assigned, reserved, and noncharacter code points.
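To make the byte ranges and the arithmetic above concrete, here is a minimal Python sketch (the helper name is ours, not part of the standard):

# First and third bytes range over 0x81-0xFE (126 values); second and fourth
# bytes range over 0x30-0x39, the ASCII digits (10 values).
def is_valid_four_byte(b1, b2, b3, b4):
    return (0x81 <= b1 <= 0xFE and 0x30 <= b2 <= 0x39 and
            0x81 <= b3 <= 0xFE and 0x30 <= b4 <= 0x39)

lead_values = 0xFE - 0x81 + 1    # 126
digit_values = 0x39 - 0x30 + 1   # 10
print(lead_values * digit_values * lead_values * digit_values)   # 1587600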
Unfortunately, to further complicate matters there are no simple rules to translate between a 4 byte sequence and its corresponding code point. Instead, codes are allocated sequentially (with the first byte containing the most significant part and the last the least significant part) only to Unicode code points that are not mapped in any other manner. For example:
U+00DE (Þ) → 81 30 89 37
U+00DF (ß) → 81 30 89 38
U+00E0 (à) → A8 A4
U+00E1 (á) → A8 A2
U+00E2 (â) → 81 30 89 39
U+00E3 (ã) → 81 30 8A 30
An offset table is used in the WHATWG and W3C version of GB 18030 to efficiently translate code points. ICU and glibc use similar range definitions to avoid wasting space on large sequential blocks.
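The sketch below (Python) illustrates how such a range/offset table works, using only the two ranges implied by the examples above; the real tables in the standard and in the WHATWG definition are far longer, and the helper names here are illustrative rather than taken from any of those sources.

# Linear index ("pointer") of a four-byte sequence: the first byte carries the
# most significant part and the last byte the least significant part.
def four_byte_pointer(b1, b2, b3, b4):
    return (((b1 - 0x81) * 10 + (b2 - 0x30)) * 126 + (b3 - 0x81)) * 10 + (b4 - 0x30)

# Excerpt of a range/offset table: each entry gives a starting pointer and the
# code point it maps to; pointers in between follow sequentially. A new range
# starts at pointer 89 because U+00E0 and U+00E1 already have two-byte codes.
RANGES_EXCERPT = [
    (87, 0x00DE),
    (89, 0x00E2),
]

def pointer_to_code_point(pointer, ranges=RANGES_EXCERPT):
    code_point = None
    for start_pointer, start_code_point in ranges:
        if pointer >= start_pointer:
            code_point = start_code_point + (pointer - start_pointer)
    return code_point

# The mappings listed above are reproduced by this excerpt:
assert pointer_to_code_point(four_byte_pointer(0x81, 0x30, 0x89, 0x37)) == 0x00DE
assert pointer_to_code_point(four_byte_pointer(0x81, 0x30, 0x89, 0x38)) == 0x00DF
assert pointer_to_code_point(four_byte_pointer(0x81, 0x30, 0x8A, 0x30)) == 0x00E3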
Support
Encoding
Windows 2000 can support the GB18030 encoding if the GB18030 Support Package is installed; Windows XP supports it natively. The open source PostgreSQL database supports GB18030 through its full support for UTF-8, i.e. by converting it to and from UTF-8. Similarly, Microsoft SQL Server supports GB18030 by conversion to and from UTF-16.
More specifically, supporting the GB18030 encoding on Windows means that Code Page 54936 is supported by MultiByteToWideChar and WideCharToMultiByte. Due to the backward compatibility of the mapping, many files in GB18030 can actually be opened successfully as the legacy Code Page 936, that is, GBK, even if Code Page 54936 is not supported. However, that is only true if the file in question contains only GBK characters; loading will fail or produce corrupted results if the file contains characters that do not exist in GBK.
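A small illustration of that caveat, assuming a Python runtime whose codec registry includes 'gbk' and 'gb18030' (as CPython's does); the sample strings are our own choices:

# A string made only of GBK characters encodes identically under GBK and
# GB 18030, so such a file opens fine when treated as Code Page 936 / GBK.
gbk_only = "汉字"
print(gbk_only.encode("gbk") == gbk_only.encode("gb18030"))   # True

# A character that needs a four-byte GB 18030 sequence (here U+00DF, which maps
# to 81 30 89 38 as shown in the Mapping section) is not valid GBK, so reading
# the bytes as GBK fails.
data = "ß".encode("gb18030")
try:
    data.decode("gbk")
except UnicodeDecodeError as exc:
    print("cannot be read as GBK:", exc)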
GNU glibc's gconv, the character codec library used on most Linux distributions, supports GB 18030-2000 since 2.2, and GB 18030-2005 since 2.14; glibc notably includes non-PUA mappings for GB 18030-2005 in order to achieve round-trip conversion. GNU libiconv, an alternative iconv implementation frequently used on non-glibc UNIX-like environments like Cygwin, supports GB 18030 since version 1.4.
Glyphs
The GB18030 Support Package for Windows contains SimSun18030.ttc, a TrueType font collection file which combines two Chinese fonts, SimSun-18030 and NSimSun-18030. The SimSun 18030 font includes all the characters in Unicode 2.1 plus new characters found in the Unicode CJK Unified Ideographs Extension A block, although, despite its name, it does not contain glyphs for all characters encoded by GB 18030, since all (about a million) Unicode code points up to U+10FFFF can be encoded as GB 18030. GB 18030 compliance certification only requires correct handling and recognition of glyphs in the mandatory (two-byte, and CJK Ext. A) Chinese part. Nevertheless, the requirement of PUA characters in the standard has hampered this implementation.
Other CJK font families like HAN NOM and Hanazono Mincho provide wider coverage for Unicode CJK Extension blocks than SimSun-18030 or even Simsun (Founder Extended), but they don't support all code points defined in Unicode 5.0.0 either.
See also
Guobiao code
CJK
Chinese character encoding
Comparison of Unicode encodings
Notes
References
External links
IANA Charset Registration for GB18030
Introduction to GB18030 including evolution from GB2312 and GBK (Sun/Internet Archive)
ICU data
GB18030: A mega-codepage (IBM DeveloperWorks)
Authoritative mapping table between GB18030-2000 and Unicode
ICU Converter Explorer: GB18030
Unicode charts
Unicode CJK Unified Ideographs Extension A (PDF, 1.5 MB)
Unicode CJK Unified Ideographs Extension B (PDF, 13 MB)
GB18030 Support Package for Windows 2000/XP, including Chinese, Tibetan, Yi, Mongolian and Thai font by Microsoft (Internet Archive)
SIL's freeware fonts, editors and documentation
Character sets
18030
Encodings of Asian languages
Unicode Transformation Formats
Chinese-language computing
Product lifecycle
https://en.wikipedia.org/wiki/Product%20lifecycle
In industry, product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its inception through the engineering, design and manufacture, as well as the service and disposal of manufactured products. PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprises.
History
The inspiration for the burgeoning business process now known as PLM came from American Motors Corporation (AMC). In 1985, the automaker was looking for a way to speed up its product development process to compete better against its larger competitors, according to François Castaing, Vice President for Product Engineering and Development. Lacking the "massive budgets of General Motors, Ford, and foreign competitors … AMC placed R&D emphasis on bolstering the product lifecycle of its prime products (particularly Jeeps)." After introducing its compact Jeep Cherokee (XJ), the vehicle that launched the modern sport utility vehicle (SUV) market, AMC began development of a new model that later came out as the Jeep Grand Cherokee. The first part of its quest for faster product development was a computer-aided design (CAD) software system that made engineers more productive. The second part of this effort was a new communication system that allowed conflicts to be resolved faster and reduced costly engineering changes, because all drawings and documents were in a central database. Product data management was so effective that, after AMC was purchased by Chrysler, the system was expanded throughout the enterprise, connecting everyone involved in designing and building products. As an early adopter of PLM technology, Chrysler was able to become the auto industry's lowest-cost producer, recording development costs that were half of the industry average by the mid-1990s.
During 1982–83, Rockwell International developed initial concepts of Product Data Management (PDM) and PLM for the B-1B bomber program. The system, called the Engineering Data System (EDS), was augmented to interface with Computervision and CADAM systems to track part configurations and the lifecycle of components and assemblies. Computervision later released a version implementing only the PDM aspects, as the lifecycle model was specific to Rockwell and aerospace needs.
Forms
PLM systems help organizations in coping with the increasing complexity and engineering challenges of developing new products for the global competitive markets.
Product lifecycle management (PLM) should be distinguished from 'product life-cycle management (marketing)' (PLCM). PLM describes the engineering aspect of a product, from managing descriptions and properties of a product through its development and useful life; whereas, PLCM refers to the commercial management of the life of a product in the business market with respect to costs and sales measures.
Product lifecycle management can be considered one of the four cornerstones of a manufacturing corporation's information technology structure. All companies need to manage communications and information with their customers (CRM-customer relationship management), their suppliers and fulfillment (SCM-supply chain management), their resources within the enterprise (ERP-enterprise resource planning) and their product planning and development (PLM).
One form of PLM is called people-centric PLM. While traditional PLM tools have been deployed only at release or during the release phase, people-centric PLM targets the design phase.
As of 2009, ICT development (EU-funded PROMISE project 2004–2008) has allowed PLM to extend beyond traditional PLM and integrate sensor data and real-time 'lifecycle event data' into PLM, as well as allowing this information to be made available to different players in the total lifecycle of an individual product (closing the information loop). This has resulted in the extension of PLM into closed-loop lifecycle management (CL2M).
Benefits
Documented benefits of product lifecycle management include:
Reduced time to market
Increased full-price sales
Improved product quality and reliability
Reduced prototyping costs
More accurate and timely request for quote generation
Ability to quickly identify potential sales opportunities and revenue contributions
Savings through the re-use of original data
A framework for product optimization
Reduced waste
Savings through the complete integration of engineering workflows
Documentation that can assist in proving compliance for RoHS or Title 21 CFR Part 11
Ability to provide contract manufacturers with access to a centralized product record
Seasonal fluctuation management
Improved forecasting to reduce material costs
Maximized supply chain collaboration
Overview of product lifecycle management
Within PLM there are five primary areas:
Systems engineering (SE) is focused on meeting all requirements, primarily meeting customer needs, and coordinating the systems design process by involving all relevant disciplines. An important aspect for lifecycle management is a subset within Systems Engineering called Reliability Engineering.
Product and portfolio management (PPM) is focused on managing resource allocation, tracking progress, and planning for new product development projects that are in process (or in a holding status). Portfolio management is a tool that assists management in tracking progress on new products and making trade-off decisions when allocating scarce resources.
Product design (CAx) is the process of creating a new product to be sold by a business to its customers.
Manufacturing process management (MPM) is a collection of technologies and methods used to define how products are to be manufactured.
Product data management (PDM) is focused on capturing and maintaining information on products and/or services through their development and useful life. Change management is an important part of PDM/PLM.
Note: While application software is not required for PLM processes, the business complexity and rate of change require that organizations execute as rapidly as possible.
Introduction to development process
The core of PLM (product lifecycle management) is the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product's life. It is not just about software technology but is also a business strategy.
For simplicity the stages described are shown in a traditional sequential engineering workflow.
The exact order of events and tasks will vary according to the product and industry in question, but the main processes are:
Conceive
Specification
Concept design
Design
Detailed design
Validation and analysis (simulation)
Tool design
Realise
Plan manufacturing
Manufacture
Build/Assemble
Test (quality control)
Service
Sell and deliver
Use
Maintain and support
Dispose
The major key point events are:
Order
Idea
Kickoff
Design freeze
Launch
The reality is, however, more complex: people and departments cannot perform their tasks in isolation, and one activity cannot simply finish before the next activity starts. Design is an iterative process; designs often need to be modified due to manufacturing constraints or conflicting requirements. Whether a customer order fits into the timeline depends on the industry type and whether the products are, for example, built to order, engineered to order, or assembled to order.
Phases of product lifecycle and corresponding technologies
Many software solutions have been developed to organize and integrate the different phases of a product's lifecycle. PLM should not be seen as a single software product but a collection of software tools and working methods integrated together to address either single stages of the lifecycle or connect different tasks or manage the whole process. Some software providers cover the whole PLM range while others have a single niche application. Some applications can span many fields of PLM with different modules within the same data model. An overview of the fields within PLM is covered here. The simple classifications do not always fit exactly; many areas overlap and many software products cover more than one area or do not fit easily into one category. It should also not be forgotten that one of the main goals of PLM is to collect knowledge that can be reused for other projects and to coordinate simultaneous concurrent development of many products. It is about business processes, people and methods as much as software application solutions. Although PLM is mainly associated with engineering tasks it also involves marketing activities such as product portfolio management (PPM), particularly with regards to new product development (NPD). There are several life-cycle models in each industry to consider, but most are rather similar. What follows below is one possible life-cycle model; while it emphasizes hardware-oriented products, similar phases would describe any form of product or service, including non-technical or software-based products:
Phase 1: Conceive
Imagine, specify, plan, innovate
The first stage is the definition of the product requirements based on customer, company, market, and regulatory bodies’ viewpoints. From this specification, the product's major technical parameters can be defined.
In parallel, the initial concept design work is performed defining the aesthetics of the product together with its main functional aspects. Many different media are used for these processes, from pencil and paper to clay models to 3D CAID computer-aided industrial design software.
In some concepts, the investment of resources into research or analysis-of-options may be included in the conception phase – e.g. bringing the technology to a level of maturity sufficient to move to the next phase. However, life-cycle engineering is iterative. It is always possible that something in a given phase does not work well enough, requiring a step back into a prior phase – perhaps all the way back to conception or research. There are many examples to draw from.
In the new product development process, this phase also collects and evaluates market risks and technical risks by measuring KPIs and using scoring models.
Phase 2: Design
Describe, define, develop, test, analyze and validate
This is where the detailed design and development of the product's form starts, progressing to prototype testing, through pilot release to full product launch. It can also involve redesign and ramp-up for improvement of existing products, as well as planned obsolescence.
The main tool used for design and development is CAD. This can be simple 2D drawing/drafting or 3D parametric feature-based solid/surface modeling. Such software includes technology such as Hybrid Modeling, Reverse Engineering, KBE (knowledge-based engineering), NDT (Nondestructive testing), and Assembly construction.
This step covers many engineering disciplines including: mechanical, electrical, electronic, software (embedded), and domain-specific, such as architectural, aerospace, automotive, ... Along with the actual creation of geometry, there is the analysis of the components and product assemblies. Simulation, validation, and optimization tasks are carried out using CAE (computer-aided engineering) software either integrated into the CAD package or stand-alone. These are used to perform tasks such as: Stress analysis, FEA (finite element analysis); kinematics; computational fluid dynamics (CFD); and mechanical event simulation (MES). CAQ (computer-aided quality) is used for tasks such as Dimensional tolerance (engineering) analysis.
Another task performed at this stage is the sourcing of bought-out components, possibly with the aid of procurement systems.
Phase 3: Realize
Manufacture, make, build, procure, produce, sell and deliver
Once the design of the product's components is complete, the method of manufacturing is defined. This includes CAD tasks such as tool design; including the creation of CNC machining instructions for the product's parts as well as the creation of specific tools to manufacture those parts, using integrated or separate CAM (computer-aided manufacturing) software. This will also involve analysis tools for process simulation of operations such as casting, molding, and die-press forming.
Once the manufacturing method has been identified, CPM comes into play. This involves CAPE (computer-aided production engineering) or CAP/CAPP (computer-aided production planning) tools for carrying out factory, plant and facility layout, and production simulation (e.g. press-line simulation, industrial ergonomics), as well as tool selection management.
Once components are manufactured, their geometrical form and size can be checked against the original CAD data with the use of computer-aided inspection equipment and software.
Parallel to the engineering tasks, sales product configuration and marketing documentation work takes place. This could include transferring engineering data (geometry and part list data) to a web-based sales configurator and other desktop publishing systems.
Phase 4: Service
Use, operate, maintain, support, sustain, phase-out, retire, recycle and disposal
Another phase of the lifecycle involves managing "in-service" information. This can include providing customers and service engineers with the support and information required for repair and maintenance, as well as waste management or recycling. This can involve the use of tools such as Maintenance, Repair and Operations Management (MRO) software.
An effective service consideration begins during and even prior to product design as an integral part of the product lifecycle management. Service Lifecycle Management (SLM) has critical touchpoints at all phases of the product lifecycle that must be considered. Connecting and enriching a common digital thread will provide enhanced visibility across functions, improve data quality, and minimize costly delays and rework.
There is an end-of-life to every product. Whether it be disposal or destruction of material objects or information, this needs to be carefully considered since it may be legislated and hence not free from ramifications.
Operational upgrades
During the operational phase, a product owner may discover components and consumables which have reached their individual end of life and for which there are Diminishing Manufacturing Sources or Material Shortages (DMSMS), or may find that the existing product can be enhanced for a wider or emerging user market more easily or at less cost than with a full redesign. This modernization approach often extends the product lifecycle and delays end-of-life disposal.
All phases: product lifecycle
Communicate, manage and collaborate
None of the above phases should be considered as isolated. In reality, a project does not run sequentially or in isolation from other product development projects; information flows between different people and systems.
A major part of PLM is the coordination and management of product definition data. This includes managing engineering changes and release status of components; configuration product variations; document management; planning project resources as well as timescale and risk assessment.
For these tasks, data of a graphical, textual, and metadata nature — such as product bills of materials (BOMs) — needs to be managed. At the engineering department level this is the domain of Product Data Management (PDM) software, and at the corporate level that of Enterprise Data Management (EDM) software; such rigid level distinctions are not used consistently, however, and it is typical to see two or more data management systems within an organization. These systems may also be linked to other corporate systems such as SCM, CRM, and ERP. Associated with these systems are project management systems for project/program planning.
This central role is covered by numerous collaborative product development tools that run throughout the whole lifecycle and across organizations. This requires many technology tools in the areas of conferencing, data sharing, and data translation. This specialized field is referred to as product visualization which includes technologies such as DMU (digital mock-up), immersive virtual digital prototyping (virtual reality), and photo-realistic imaging.
User skills
The broad array of solutions that make up the tools used within a PLM solution set (e.g., CAD, CAM, CAx) was initially used by dedicated practitioners who invested time and effort to gain the required skills. Designers and engineers produced excellent results with CAD systems, manufacturing engineers became highly skilled CAM users, while analysts, administrators, and managers fully mastered their support technologies. However, achieving the full advantages of PLM requires the participation of many people of various skills from throughout an extended enterprise, each requiring the ability to access and operate on the inputs and output of other participants.
Despite the increased ease of use of PLM tools, cross-training all personnel on the entire PLM tool-set has not proven to be practical. Now, however, advances are being made to address ease of use for all participants within the PLM arena. One such advance is the availability of "role" specific user interfaces. Through tailorable user interfaces (UIs), the commands that are presented to users are appropriate to their function and expertise.
These techniques include:
Concurrent engineering workflow
Industrial design
Bottom–up design
Top–down design
Both-ends-against-the-middle design
Front-loading design workflow
Design in context
Modular design
NPD new product development
DFSS design for Six Sigma
DFMA design for manufacture / assembly
Digital simulation engineering
Requirement-driven design
Specification-managed validation
Configuration management
Concurrent engineering workflow
Concurrent engineering (British English: simultaneous engineering) is a workflow that, instead of working sequentially through stages, carries out a number of tasks in parallel. For example: starting tool design as soon as the detailed design has started, and before the detailed designs of the product are finished; or starting on detail design solid models before the concept design surfaces models are complete. Although this does not necessarily reduce the amount of manpower required for a project, as more changes are required due to the incomplete and changing information, it does drastically reduce lead times and thus time to market.
Feature-based CAD systems have allowed simultaneous work on the 3D solid model and the 2D drawing by means of two separate files, with the drawing looking at the data in the model; when the model changes the drawing will associatively update. Some CAD packages also allow associative copying of geometry between files. This allows, for example, the copying of a part design into the files used by the tooling designer. The manufacturing engineer can then start work on tools before the final design freeze; when a design changes size or shape the tool geometry will then update.
Concurrent engineering also has the added benefit of providing better and more immediate communication between departments, reducing the chance of costly, late design changes. It adopts a problem prevention method as compared to the problem solving and re-designing method of traditional sequential engineering.
Bottom–up design
Bottom–up design (CAD-centric) occurs where the definition of 3D models of a product starts with the construction of individual components. These are then virtually brought together in sub-assemblies of more than one level until the full product is digitally defined. This is sometimes known as the "review structure" which shows what the product will look like. The BOM contains all of the physical (solid) components of a product from a CAD system; it may also (but not always) contain other 'bulk items' required for the final product but which (in spite of having definite physical mass and volume) are not usually associated with CAD geometry such as paint, glue, oil, adhesive tape, and other materials.
Bottom–up design tends to focus on the capabilities of available real-world physical technology, implementing those solutions to which this technology is most suited. When these bottom–up solutions have real-world value, bottom–up design can be much more efficient than top–down design. The risk of bottom–up design is that it very efficiently provides solutions to low-value problems. The focus of bottom–up design is "what can we most efficiently do with this technology?" rather than the focus of top–down which is "What is the most valuable thing to do?"
Top–down design
Top–down design is focused on high-level functional requirements, with relatively less focus on existing implementation technology. A top-level spec is repeatedly decomposed into lower-level structures and specifications until the physical implementation layer is reached. The risk of top–down design is that it may not take advantage of more efficient applications of current physical technology, because following an abstraction path that does not efficiently fit available components can add excessive layers of lower-level abstraction, e.g. separately specifying sensing, processing, and wireless communication elements even though a suitable component that combines these may be available. The positive value of top–down design is that it preserves a focus on the optimum solution requirements.
A part-centric top–down design may eliminate some of the risks of top–down design. This starts with a layout model, often a simple 2D sketch defining basic sizes and some major defining parameters, which may include some Industrial design elements. Geometry from this is associatively copied down to the next level, which represents different subsystems of the product. The geometry in the sub-systems is then used to define more detail in the levels below. Depending on the complexity of the product, a number of levels of this assembly are created until the basic definition of components can be identified, such as position and principal dimensions. This information is then associatively copied to component files. In these files the components are detailed; this is where the classic bottom–up assembly starts.
The top–down assembly is sometimes known as a "control structure". If a single file is used to define the layout and parameters for the review structure it is often known as a skeleton file.
Defense engineering traditionally develops the product structure from the top down. The system engineering process prescribes a functional decomposition of requirements and then physical allocation of product structure to the functions. This top down approach would normally have lower levels of the product structure developed from CAD data as a bottom–up structure or design.
Both-ends-against-the-middle design
Both-ends-against-the-middle (BEATM) design is a design process that endeavors to combine the best features of top–down design, and bottom–up design into one process. A BEATM design process flow may begin with an emergent technology that suggests solutions which may have value, or it may begin with a top–down view of an important problem that needs a solution. In either case, the key attribute of BEATM design methodology is to immediately focus at both ends of the design process flow: a top–down view of the solution requirements, and a bottom–up view of the available technology which may offer the promise of an efficient solution. The BEATM design process proceeds from both ends in search of an optimum merging somewhere between the top–down requirements, and bottom–up efficient implementation. In this fashion, BEATM has been shown to genuinely offer the best of both methodologies. Indeed, some of the best success stories from either top–down or bottom–up have been successful because of an intuitive, yet unconscious use of the BEATM methodology. When employed consciously, BEATM offers even more powerful advantages.
Front loading design and workflow
Front loading is taking top–down design to the next stage. The complete control structure and review structure, as well as downstream data such as drawings, tooling development, and CAM models, are constructed before the product has been defined or a project kick-off has been authorized. These assemblies of files constitute a template from which a family of products can be constructed. When the decision has been made to go with a new product, the parameters of the product are entered into the template model and all the associated data is updated. Obviously predefined associative models will not be able to predict all possibilities and will require additional work. The main principle is that a lot of the experimental/investigative work has already been completed. A lot of knowledge is built into these templates to be reused on new products. This does require additional resources "up front" but can drastically reduce the time between project kick-off and launch. Such methods do however require organizational changes, as considerable engineering efforts are moved into "offline" development departments. It can be seen as an analogy to creating a concept car to test new technology for future products, but in this case, the work is directly used for the next product generation.
Design in context
Individual components cannot be constructed in isolation. CAD and CAID models of components are created within the context of some or all of the other components within the product being developed. This is achieved using assembly modelling techniques. The geometry of other components can be seen and referenced within the CAD tool being used. The other referenced components may or may not have been created using the same CAD tool, with their geometry being translated from other collaborative product development (CPD) formats. Some assembly checking such as DMU is also carried out using product visualization software.
Product and process lifecycle management (PPLM)
Product and process lifecycle management (PPLM) is an alternate genre of PLM in which the process by which the product is made is just as important as the product itself. Typically, this is the life sciences and advanced specialty chemicals markets. The process behind the manufacture of a given compound is a key element of the regulatory filing for a new drug application. As such, PPLM seeks to manage information around the development of the process in a similar fashion that baseline PLM talks about managing information around the development of the product.
One variant of PPLM implementations are Process Development Execution Systems (PDES). They typically implement the whole development cycle of high-tech manufacturing technology developments, from initial conception, through development, and into manufacture. PDES integrates people with different backgrounds from potentially different legal entities, data, information and knowledge, and business processes.
Market size
After the Great Recession, PLM investments from 2010 onwards showed a higher growth rate than most general IT spending.
Total spending on PLM software and services was estimated in 2020 to be $26 billion a year, with an estimated compound annual growth rate of 7.2% from 2021-2028. This was expected to be driven by a demand for software solutions for management functions, such as change, cost, compliance, data, and governance management.
Pyramid of production systems
According to Malakooti (2013), there are five long-term objectives that should be considered in production systems:
Cost: Which can be measured in terms of monetary units and usually consists of fixed and variable cost.
Productivity: Which can be measured in terms of the number of products produced during a period of time.
Quality: Which can be measured in terms of customer satisfaction levels for example.
Flexibility: Which can be considered the ability of the system to produce a variety of products for example.
Sustainability: Which can be measured in terms of ecological soundness, i.e. the biological and environmental impacts of a production system.
The relation between these five objectives can be presented as a pyramid with its tip associated with the lowest Cost, highest Productivity, highest Quality, most Flexibility, and greatest Sustainability. The points inside this pyramid are associated with different combinations of the five criteria. The tip of the pyramid represents an ideal (but likely highly unfeasible) system, whereas the base of the pyramid represents the worst system possible.
See also
Application lifecycle management
Building lifecycle management
Cradle-to-cradle design
Hype cycle
ISO 10303 – Standard for the Exchange of Product model data
Kondratiev wave
Life cycle thinking
Life-cycle assessment
Product data record
Product management
Sustainable materials management
System lifecycle
Technology roadmap
User-centered design
References
Further reading
External links
Brand management
History of tablet computers
https://en.wikipedia.org/wiki/History%20of%20tablet%20computers
The history of tablet computers and the associated special operating software is an example of pen computing technology, and thus the development of tablets has deep historical roots.
The first patent for a system that recognized handwritten characters by analyzing the handwriting motion was granted in 1914.
The first publicly demonstrated system using a tablet and handwriting recognition instead of a keyboard for working with a modern digital computer dates to 1956.
Early tablets
In addition to many academic and research systems, there were several companies with commercial products in the 1980s: Pencept and Communications Intelligence Corporation were among the best known of a crowded field.
The development of the tablet computer was enabled by several key technological advances. The rapid scaling and miniaturization of MOSFET transistor technology (Moore's law), the basic building block of mobile devices and computing devices, made it possible to build portable smart devices such as tablet computers. Another important enabling factor was the lithium-ion battery, an indispensable energy source for tablets, commercialized by Sony and Asahi Kasei in 1991.
Fictional and prototype tablets
Tablet computers appeared in a number of works of science fiction in the second half of the 20th century, with the depiction of Arthur C. Clarke's NewsPad appearing in Stanley Kubrick's 1968 film 2001: A Space Odyssey, the description of the Calculator Pad in the 1951 novel Foundation by Isaac Asimov, the Opton in the 1961 novel Return from the Stars by Stanislaw Lem, and The Hitchhiker's Guide to the Galaxy in Douglas Adams's 1978 comedy of the same name, all helping to promote and disseminate the concept to a wider audience.
In 1968, Alan Kay envisioned a KiddiComp; while a PhD candidate he developed and described the concept as a Dynabook in his 1972 proposal "A Personal Computer for Children of All Ages". The paper outlines the requirements for a conceptual portable educational device that would offer functionality similar to that of a laptop computer or (in some of its other incarnations) a tablet or slate computer, with the added requirement that any Dynabook device offer near-eternal battery life. Adults could also use a Dynabook, but the target audience was children.
Steve Jobs of Apple envisioned in a 1983 speech an "incredibly great computer in a book that you can carry around with you and learn how to use in 20 minutes". In 1985, as the home-computer market significantly declined after several years of strong growth, Dan Bricklin said that a successful home computer needed to be the size of and as convenient to carry as a spiral notebook. He and others urged the industry to research the Dynabook concept.
Star Trek: The Next Generation featured extensive use of tablet computers.
Early devices
In 1986, Hindsight, a startup in Enfield CT, developed the Letterbug, an 8086-based tablet computer for the educational market. Prototypes were shown at trade shows in New England in 1987, but no production models ever came out.
In 1987 Linus Technologies released the Write-top, the first tablet computer with pen input and handwriting recognition. It weighed 9 pounds and was based on MS-DOS, with an electroluminescent backlit CGA display and a "resistive type touch screen in which a voltage is applied to the screen edges, and a stylus detects the voltage at the touched location." The handwriting recognition had to be trained individually for each user. Around 1500 units were sold.
In 1989, GRiD Systems released the GRiDPad 1900, the first commercially successful tablet computer. It weighed 4.5 pounds and had a tethered pen and a resistive screen. The engineer who led the GRiDPad's development later created the PalmPilot. Its GRiDPen software ran on MS-DOS and was later licensed as PenRight.
In 1991, the Atari ST-PAD Stylus was demonstrated but did not enter production.
In 1991, AT&T released their first EO Personal Communicator, one of the first commercially available tablets; it ran the GO Corporation's PenPoint OS on AT&T's own hardware, including their own AT&T Hobbit CPU.
In 1992, Samsung introduced the PenMaster. It was based around the Intel i386SL CPU. As the OS, it used the newly released Windows for Pen Computing from Microsoft. The touchscreen relied on a chipset by Wacom and used a battery-powered pen. GRiD Systems licensed the design from Samsung, and it was also sold as the better-known GRiDPad SL.
In 1993, Apple Computer released the Apple Newton, with a 6-inch screen and a weight of 800 grams. It utilized Apple's own new Newton OS, initially running on hardware manufactured by Motorola and incorporating an ARM CPU that Apple had specifically co-developed with Acorn Computers. The operating system and platform design were later licensed to Sharp and Digital Ocean, who went on to manufacture their own variants.
The Compaq Concerto was released in 1993 with a Compaq-modified version of MS-DOS 6.2 and Windows 3.1, a.k.a. Windows for PEN, with pen-entry and Wacom compatibility. Functionally the Concerto was a full featured laptop that could operate in pen-mode when the keyboard was removed.
In 1994, the media company Knight Ridder made a concept video of a tablet device with a color display and a focus on media consumption. The company did not create it as a commercial product because display technology of the time was too heavy and consumed too much energy.
In 1994, the European Union initiated the 'OMI-NewsPAD' project (EP9252), requiring a consumer device be developed for the receipt and consumption of electronically delivered news / newspapers and associated multi-media. The NewsPad name and project goals were borrowed from and inspired by Arthur C. Clarke's 1965 screenplay and Stanley Kubrick's 1968 film 2001: A Space Odyssey. Acorn Computers developed and delivered an ARM based touch screen tablet computer for this program, branded the NewsPad. The device was supplied for the duration of the Barcelona-based trial, which ended in 1997.
In 1996, The Webbook Company announced the first Internet-based tablet, then referred to as a Web Surfboard, that would run Java and utilize a RISC processor. However, it never went into production.
Also in 1996, Palm, Inc. released the first of the Palm OS-based PalmPilot touch- and stylus-based PDAs, the touch-based devices initially incorporating a Motorola DragonBall (68000) CPU.
Again in 1996, Fujitsu released the Stylistic 1000 tablet-format PC, running Microsoft Windows 95 on a 100 MHz AMD 486 DX4 CPU with 8 MB RAM, offering stylus input with the option of connecting a conventional keyboard and mouse.
In 1999, Intel announced a StrongARM based touch screen tablet computer under the name WebPAD, the tablet was later re-branded as the "Intel Web Tablet".
In April 2000, Microsoft launched the Pocket PC 2000, utilising their touch capable Windows CE 3.0 operating system. The devices were manufactured by several manufacturers, based on a mix of: x86, MIPS, ARM, and SuperH hardware.
One early implementation of a Linux tablet was the ProGear by FrontPath. The ProGear used a Transmeta chip and a resistive digitizer. The ProGear initially came with a version of Slackware Linux, but could later be bought with Windows 98.
Microsoft Tablet PC
In 1999, Microsoft attempted to re-institute the then decades-old tablet concept by assigning two well-known experts in the field, from Xerox Palo Alto Research Center, to the project.
In 2000, Microsoft coined the term "Microsoft Tablet PC" for tablet computers built to Microsoft's specification and running a licensed, tablet-enhanced version of its Microsoft Windows OS, popularizing the term tablet PC for this class of devices. Microsoft Tablet PCs were targeted at business needs, mainly as note-taking devices and as rugged devices for field work. In the health care sector, tablet computers were intended for data capture, such as registering feedback on the patient experience at the bedside, as well as supporting data collection through digital survey instruments.
In 2002, original equipment manufacturers released the first tablet PCs designed to the Microsoft Tablet PC specification. This generation of Microsoft Tablet PCs were designed to run Windows XP Tablet PC Edition, the Tablet PC version of Windows XP. This version of Microsoft Windows superseded Microsoft's earlier pen computing operating environment, Windows for Pen Computing 2.0. After releasing Windows XP Tablet PC Edition, Microsoft designed the successive desktop computer versions of Windows, Windows Vista and Windows 7, to support pen computing intrinsically.
Tablet PCs failed to gain popularity in the consumer space because of unresolved problems. The existing devices were too heavy to be held with one hand for extended periods, the software features designed to support usage as a tablet (such as finger and virtual keyboard support) were not present in all contexts, and there were not enough applications specific to the platform – legacy applications created for desktop interfaces were not well adapted to the slate format.
Linux
Because tablets such as the FrontPath ProGear, described above, are general-purpose IBM PC compatible machines, they can run many different operating systems. However, the ProGear is no longer for sale and FrontPath has ceased operations. Many touch screen sub-notebook computers can run any of several Linux distributions with little customization.
X.org supports screen rotation and tablet input through Wacom drivers, and handwriting recognition software from both the Qt-based Qtopia and GTK+-based Internet Tablet OS provide promising free and open source systems for future development.
Open source note taking software in Linux includes applications such as Xournal (which supports PDF file annotation), Gournal (a Gnome-based note taking application), and the Java-based Jarnal (which supports handwriting recognition as a built-in function). Before the advent of the aforementioned software, many users had to rely on on-screen keyboards and alternative text input methods like Dasher. There is a stand-alone handwriting recognition program available, CellWriter, in which users must write letters separately in a grid.
A number of Linux-based OS projects are dedicated to tablet PCs. Since all these are open source, they are freely available and can be run or ported to devices that conform to the tablet PC design. In 2003, Hitachi introduced the VisionPlate rugged tablet that was used as a point-of-sale device. Maemo (rebranded MeeGo in 2010), a Debian GNU/Linux based graphical user environment, was developed for the Nokia Internet Tablet devices (770, N800, N810 & N900). The Ubuntu Netbook Remix edition, as well as the Intel sponsored Moblin project, both have touchscreen support integrated into their user interfaces. Canonical Ltd has started a program for better supporting tablets with the Unity UI for Ubuntu 10.10.
TabletKiosk offered a hybrid digitizer / touch device running openSUSE.
webOS
webOS, initially developed by Palm, Inc. and introduced in January 2009 as the successor to Palm OS, was purchased by HP to be their proprietary operating system running on the Linux kernel. Versions 1.0 to 2.1 of webOS use the patched Linux 2.6.24 kernel. HP continued to develop the webOS platform for use in multiple products, including smartphones, tablet PCs, and printers. In March 2011, HP announced plans for a version of webOS to run within the Microsoft Windows operating system by the end of 2011 and to be used in HP desktop and notebook computers in 2012.
The HP TouchPad, the first addition to HP's tablet family, shipped with version 3.0.2, which gave the tablet support for multitasking, applications, and HP Synergy. HP also claimed in its web catalog to support over 200 apps at its release.
On 18 August 2011, HP announced that it would discontinue production of all webOS devices.
MeeGo
Nokia entered the tablet space with the Nokia 770 running Maemo, a Debian-based Linux distribution custom-made for their Nokia Internet Tablet line. The product line continued with the N900 which is the first to add phone capabilities. Intel, following the launch of the UMPC, started the Mobile Internet Device initiative, which took the same hardware and combined it with a Linux operating system custom-built for portable tablets. Intel co-developed the lightweight Moblin operating system following the successful launch of the Atom CPU series on netbooks.
MeeGo is an operating system developed by Intel and Nokia to support Netbooks, Smartphones and tablet PCs. In 2010, Nokia and Intel combined the Maemo and Moblin projects to form MeeGo. The first MeeGo powered tablet PC is the Neofonie WeTab. The WeTab uses an extended version of the MeeGo operating system called WeTab OS. WeTab OS adds runtimes for Android and Adobe AIR and provides a proprietary user interface optimized for the WeTab device.
Mac OS X Modbook
Apple has never sold a tablet computer running Mac OS X, although OS X does have support for handwriting recognition via Inkwell. However, Apple sells the iOS-based iPad tablet computer, introduced in 2010.
Before the introduction of the iPad, Axiotron introduced the Modbook, a heavily modified Apple MacBook, Mac OS X-based tablet computer at Macworld in 2007. The Modbook used Apple's Inkwell handwriting and gesture recognition, and used digitization hardware from Wacom. To support the digitizer on the integrated tablet, the Modbook was supplied with a third-party driver called TabletMagic. Wacom does not provide drivers for this device.
Apple's iPad
The tablet computer market was reinvigorated by Apple through the introduction of the iPad in 2010. While the iPad restricts the software the owner can install, deviating from the PC tradition, its attention to detail in the touch interface is considered a milestone in the development of the tablet computer, one that defined the tablet computer as a new class of portable device, different from a laptop PC or netbook. A WiFi-only model of the tablet was released in April 2010, and a WiFi+3G model was introduced about a month later, using a no-contract data plan from AT&T. Since then, the iPad 2 has launched, bringing 3G support from both AT&T and Verizon Wireless. The iPad has been characterized by some as a tablet computer that mainly focuses on media consumption such as web browsing, email, photos, videos, and e-reading, even though full-featured, Microsoft Office-compatible software for word processing (Pages), spreadsheets (Numbers), and presentations (Keynote) was released alongside the initial model. One month after the iPad's release, Apple subsidiary FileMaker Inc. released a version of the Bento database software for it. With the introduction of the iPad 2, Apple also released full-featured first-party software for multi-track music composition (GarageBand) and video editing (iMovie). As of the release of iOS 5 in October 2011, iPads no longer need to be plugged into a separate personal computer for initial activation and backups, eliminating one of the drawbacks of using a tablet computer not based on the PC architecture.
On 20 May 2010, IDC published a press release defining the term media tablet as personal devices with screens from 7 to 12 inches, lightweight operating systems "currently based on ARM processors" which "provide a broad range of applications and connectivity, differentiating them from primarily single-function devices such as ereaders". IDC also predicted a market growth for tablets from 7.6 million units in 2010, to more than 46 million units in 2014. More recent reports show predictions from various analysts in the range from 26 to 64 million units in 2013. On 2 March 2011, Apple announced that 15 million iPads had been sold in three fiscal quarters of 2010, double the number that IDC then predicted.
Other post-PC tablet computers
Early competitors to Apple's iPad in the market for tablet computers not based on the traditional PC architecture were the 5-inch Dell Streak, released in June 2010, and the original 7-inch Samsung Galaxy Tab, released in September 2010.
At the Consumer Electronics Show in January 2011, over 80 new tablets were announced to compete with the iPad. Companies that announced tablets included Dell with the Streak tablet, Acer with the new Acer Tab, Motorola with its Xoom tablet (Android 3.0), Samsung with a new Samsung Galaxy Tab (Android 2.2), Research In Motion demonstrating their BlackBerry PlayBook, Vizio with the Via Tablet, Toshiba with the Android 3.0-running Toshiba Thrive, and others including Asus and the startup company Notion Ink. Many of these tablets were designed to run Android 3.0 Honeycomb, Google's mobile operating system for tablets, while others ran older versions of Android such as 2.3, or a completely different OS such as the BlackBerry PlayBook's QNX. Other than the Motorola Xoom, by the time most competitors had released devices of comparable size and price to the original iPad, Apple had already released its second-generation iPad 2 in March 2011.
Hewlett-Packard announced its TouchPad based on the WebOS system in June 2011. HP released it a month later in July, only to discontinue it after less than 49 days of sales, becoming the first casualty in the post-PC tablet computer market. The fire sale on TouchPad tablets when its price was dropped from US$499 to as low as $99 after it was discontinued resulted in a surge of interest. This dramatic increase in its popularity potentially raised its market share above all other non-Apple tablets, at least temporarily.
In September 2011, Amazon.com announced the Kindle Fire, a 7-inch tablet deeply tied into their Kindle ebook service, Amazon Appstore, and other Amazon services for digital music, video, and other content. The Kindle Fire runs on Amazon's custom fork of v2.3 of the Android operating system. Using Amazon's cloud services for accelerated web browsing and remote storage, Amazon has set it up to have very little other connection back to Google, aside from supporting Gmail as one of the several webmail services it can access. At a cost of only US$199 for the Kindle Fire it has been suggested that Amazon's business strategy is to make their money on selling content through it, as well as the device acting as a storefront for physical goods sold through Amazon. Besides the Kindle Fire's low price, reviewers have also noted that it is polished on its initial release, in comparison to other tablets that often needed software updates.
Despite the large number of competing tablets released in 2011, so far none of them has managed to gain considerable traction, and the market continued to be dominated by the iPad and iPad 2. Several manufacturers had to resort to deep discounts to move excess inventory, as happened with the HP TouchPad (after its announced discontinuation) and the BlackBerry PlayBook. It has been suggested that many companies, in their rush to jump on the "tablet bandwagon", released products that might have had decent hardware but lacked refinement and came with software bugs that needed updates.
Post-PC tablet market share
According to IDC, Android had 63% of all "media tablet" sales in 2013 and was still rising, with Windows also gaining market share. Apple's iPad had 83% of all "media tablet" sales in 2010 and a 28% share in 2013. At the unveiling of the iPad 2 in March 2011, Steve Jobs claimed that the iPad held more than 90% market share, but the difference between the figures could be explained by the difference between the amount of hardware shipped into the channel and the number of units actually sold.
In August 2011, the iPad and iPad 2 dominated sales, outselling Android and other rival OS tablets by a ratio of eight to one. Apple's iPad held 66 percent of the global tablet market in Q1 of 2011, but the share was predicted to drop to 58 percent by the end of the year due to the influx of new products, mostly Android tablets. Technology experts suggest that Apple has been seeking court injunctions to stop the slide, although these injunctions are only preliminary measures: Apple has to provide more substantial evidence in subsequent court proceedings that the design of competing products infringed its patents or copied its designs in order to make any bans permanent. These cases take months or even years to come to court unless there is a settlement, and if Apple loses it will be liable for the business a competitor lost due to the injunction. Although risky, experts say that this kind of strategy buys Apple time to hold off rivals and grab even greater market share with the iPad, since it is a fast-developing market in which Apple leads, regardless of any damages it has to pay if it loses a case. Google's David Drummond complained: "They (Apple) want to make it harder for manufacturers to sell Android devices. Instead of competing by building new features or devices, they are fighting through litigation."
On 14 September 2011, IDC announced that in the second calendar quarter of 2011, the market share of the iPad increased to 68.3% from 65.7% in the previous quarter, while market share for Android-based tablets decreased from 34.0% the previous quarter down to 26.8% in the second quarter. Besides being affected by the introduction of the iPad 2 in March 2011, this can also be partially attributed to the introduction of RIM's PlayBook tablet, which took 4.9% share of the market in the quarter.
On 22 September 2011, Gartner lowered their forecast for sales of tablet computers based on the Android OS by 28 percent from the previous quarter's projection, explaining that "Android’s appeal in the tablet market has been constrained by high prices, weak user interface and limited tablet applications." Further, they state that they expect the iPad to have a "free run" through the 2011 holiday season and that Apple will "maintain a market share lead throughout our forecast period by commanding more than 50 percent of the market until 2014." Gartner revised their projection of Apple's worldwide tablet market share at the end of 2011, up to 73.4% after their previous projection of 68.7% for the year.
In October 2011, at the Launch Pad conference Ryan Block from gadget site gdgt showed slides identifying the makeup of the site's users who bought tablets in 2011 consisting of 76% iPad (39% iPad 2, 37% original iPad), 6% HP TouchPad, and no other tablet at over 4%. He noted that the numbers did not include previous purchases of the iPad or other tablets in 2010. In a breakdown by platform he showed a chart indicating Apple's iOS at 76%, Google's Android at 17%, HP's webOS at 6%, and RIM's PlayBook OS at 2%.
A report by Strategy Analytics showed that the share of Android tablet computers had risen sharply at the expense of Apple's iOS in the fourth quarter of 2011. According to Strategy Analytics, Android accounted for 39% of the global tablet market in the final three months of 2011, up from 29% a year earlier. Apple's share fell to 58% from 68%. A total of 26.8 million tablet computers were sold in the quarter, up from 10.7 million a year earlier, the report said.
In China, according to an AlphaWise survey of 1,553 Chinese consumers across 16 cities over the summer of 2011, Apple's iPad currently holds a 65% share of that nation's tablet market. When asked about future purchases, 68% of those surveyed indicated an intent to buy an iPad, versus other brands' shares of 10% for Asus, 8% for Lenovo, 6% for Samsung, and 3% or less for any other brand.
According to eMarketer & Forbes, advertisers will spend nearly $1.23 billion on mobile advertising in 2011 in the US, up from $743 million last year. By 2015, the US mobile advertising market is set to reach almost $4.4 billion. This includes spending on display ads (such as banners, rich media and video), search and messaging-based advertising, and covers ads viewed on both mobile phones and tablets.
Timeline
Before 1950
1888: U.S. Patent granted to Elisha Gray on electrical stylus device for capturing handwriting.
1914: U.S. Patent on handwriting recognition user interface with a stylus.
1942: U.S. Patent on touchscreen for handwriting input.
1945: Vannevar Bush proposes the Memex, a data archiving device including handwriting input, in an essay As We May Think.
1950s
Tom Dimond demonstrates the Stylator electronic tablet with pen for computer input and software for recognition of handwritten text in real-time.
1960s
Early 1960s
RAND Tablet invented. The RAND Tablet is better known than the Stylator, but was invented later.
1961
Stanislaw Lem describes an Opton, a portable device with a screen "linked directly, through electronic catalogs, to templates of every book on earth" in the 1961 novel "Return from the Stars".
1966
In the science fiction television series Star Trek, crew members carry large, wedge-shaped electronic clipboards, operated through the use of a stylus.
1968
Filmmaker Stanley Kubrick imagines a flatscreen tablet device wirelessly playing a video broadcast in the movie 2001: A Space Odyssey.
1970s
1971
Touchscreen interface developed at SLAC.
1972
Alan Kay of Xerox PARC publishes: "A personal computer for children of all ages" describing and detailing possible uses for his Dynabook concept. However, the device was never built.
1978
The Hitchhiker's Guide to the Galaxy is broadcast as a radio comedy on BBC Radio 4. The series was named after a fictional touch-screen electronic tablet featured in the story.
1980s
1982
Pencept of Waltham, Massachusetts markets a general-purpose computer terminal using a tablet and handwriting recognition instead of a keyboard and mouse.
Cadre System markets the Inforite point-of-sale terminal using handwriting recognition and a small electronic tablet and pen.
1985
Pencept and CIC both offer PC computers for the consumer market using a tablet with handwriting recognition instead of a keyboard and mouse. Operating system is MS-DOS.
1986
Hindsight develops and tests the Letterbug, an educational tablet computer before making the trade show tour in 1987.
1987
The Knowledge Navigator concept piece by Apple Computer.
Linus Technologies releases the Linus Write-top.
1989
The first commercially successful tablet-type portable MS-DOS computer was the GRiDPad from GRiD Systems.
Wang Laboratories introduces Freestyle, an application that captured a screen from a MS-DOS application, and let users add voice and handwriting annotations. It was a sophisticated predecessor to later note-taking applications for systems like tablet computers.
1990s
1991
The Momenta Pentop was released.
GO Corporation announced a dedicated operating system, called PenPoint OS, with control of the operating system desktop via handwritten gesture shapes.
NCR released model 3125 pen computer running MS-DOS, Penpoint OS or Pen Windows.
The Apple Newton entered development; although it ultimately became a PDA, its original concept (which called for a larger screen and greater sketching abilities) resembled the hardware of a tablet computer.
1992
GO Corporation shipped the PenPoint OS for general availability and IBM announced IBM 2125 pen computer (the first IBM model named "ThinkPad") in April.
Microsoft releases Windows for Pen Computing as a response to the PenPoint OS by GO Corporation.
Samsung introduces the PenMaster, which used Windows for Pen Computing from Microsoft.
1993
Apple Computer announces the Newton PDA, also known as the Apple MessagePad, which includes handwriting recognition with a stylus.
IBM releases the ThinkPad, IBM's first commercialized portable tablet computer product available to the consumer market, as the IBM ThinkPad 750P and 360P.
BellSouth released the IBM Simon Personal Communicator, an analog cellphone using a touchscreen and display. It did not include handwriting recognition, but did permit users to write messages and send them as faxes on the analog cellphone network, and included PDA and email features.
AT&T introduced the EO Personal Communicator combining PenPoint with wireless communications.
1994
Knight Ridder concept video of a tablet device with focus on media consumption.
Sony introduces Magic Link PDA based on Magic Cap operating system.
1995
Hewlett Packard releases the MS-DOS and PEN/GEOS based OmniGo 100 and OmniGo 120 handheld organizers with flip-around clamshell display with pen support and Graffiti handwriting recognition.
1996
The Digital Equipment Corporation releases the DEC Lectrice.
Acorn Computers supply ARM-based touch screen tablets for the NewsPad pilot in Barcelona, Spain.
1997
The first Palm Pilot is introduced.
1998
Cyrix-NatSemi announce and demonstrate the WebPad touch screen tablet computer at COMDEX.
1999
The "QBE" pen computer created by Aqcess Technologies wins COMDEX Best of Show.
Intel announces a StrongARM-based, wireless touch screen tablet computer called the WebPad, the device was later renamed the "Intel Web Tablet".
2000s
2000
PaceBlade develops the first device that meets Microsoft's Tablet PC standard and receives the "Best Hardware" award at VAR Vision 2000.
The "QBE Vivo" pen computer created by Aqcess Technologies ties for COMDEX Best of Show.
Bill Gates of Microsoft demonstrates the first public prototype of a Tablet PC (defined by Microsoft as a pen-enabled computer conforming to hardware specifications devised by Microsoft and running a licensed copy of the "Windows XP Tablet PC Edition" operating system) at COMDEX.
2002
Microsoft releases the Microsoft Tablet PC, designed and built by HP.
Motion Computing releases its first slate Tablet PC, the M1200.
2003
PaceBlade receives the "Innovation des Jahres 2002/2003" award for the PaceBook Tablet PC from PC Professional Magazine at the CeBIT.
Fingerworks develops the touch technology and touch gestures later used in the Apple iPhone.
Motion Computing releases its second slate Tablet PC, the M1300.
2005
Nokia launches the Nokia 770 Internet Tablet.
Motion Computing releases the LE1600 and paperback sized LS800 Tablet PC.
2006
Windows Vista released for general availability. Vista included the functionality of the special Tablet PC edition of Windows XP.
In the Disney Channel Original Movie Read It and Weep, Jamie uses a Tablet PC for her journal.
MTVs "Pimp My Ride" features multiple Motion Computing tablets PCs in customized automobiles
2007
Axiotron introduces Modbook, the first (and only) tablet computer based on Mac hardware and Mac OS X at Macworld.
Archos launches the Archos 605 WiFi, a portable media player with WiFi that is virtually a tablet PC.
Apple launches iPod touch, an MP3 player with WiFi. It took Apple two years to turn this concept into a tablet PC.
2008
In April 2008, as part of a larger federal court case, the gesture features of the Windows/Tablet PC operating system and hardware were found to infringe on a patent by GO Corp. concerning user interfaces for pen computer operating systems. Microsoft's acquisition of the technology is the subject of a separate lawsuit.
HP releases the second multi-touch capable tablet: the HP TouchSmart tx2 series.
2009
Asus announces a tablet netbook, the Eee PC T91 and T91MT, the latter with a multi-touch screen.
Always Innovating announced a new tablet netbook with an ARM CPU.
Motion Computing launched the J3400.
2010s
2010
Apple Inc. unveils the iPad, running Apple iOS in March.
Fusion Garage releases the JooJoo, running Linux.
Samsung unveils the Galaxy Tab, running Google Android.
Neofonie releases the WeTab, a MeeGo-based slate tablet PC, featuring an 11.6 inch multi-touch screen at 1366×768 pixels resolution.
Dixons Retail unveils the Advent Vega, a 10-inch tablet PC running Android 2.2, having a micro SD card slot, a USB port and a 16h battery life for audio playback and 6.5h for 1080p video.
HP releases the Slate 500, running a full-version of Windows 7.
2011
Motorola releases the Xoom, a 10-inch tablet running Android 3.0 (Honeycomb).
BlackBerry releases BlackBerry Playbook running BlackBerry Tablet OS, based on QNX Neutrino.
Asus releases the Asus Eee Pad Transformer TF101, one of the first 2-in-1 detachable tablets.
Dell showcases the Streak 7 tablet at CES 2011 in January.
ZTE announces the ZTE V11 and the Z-pad that both run Android 3.0 (Honeycomb).
Apple released the iPad 2.
Toshiba announces the Toshiba Tablet, a 10-inch tablet powered by a Tegra 2 processor and Android 3.0 (Honeycomb).
HP releases the HP TouchPad with webOS & withdraws it in August 2011 (a month later).
Amazon announced an Android-based tablet, the Kindle Fire, in September.
Barnes & Noble introduces Nook Tablet in November.
2012
Apple releases the iPad 3, and then later in the year iPad 4 and the iPad Mini.
Google unveiled the Nexus 7, a 7-inch tablet developed with Asus and the Nexus 10, a 10-inch tablet developed with Samsung.
Samsung releases Samsung Galaxy Note 10.1, with stylus apps, running Android 4.0 (Ice Cream Sandwich) with 1.4 GHz quad-core CPU.
Microsoft releases Microsoft Surface RT with an ARM microprocessor and kickstand.
2013
Sony releases the Sony Xperia Tablet Z, which carries Ingress Protection Ratings of IP55 and IP57, making it dust-resistant, water-jet resistant, and waterproof.
Apple releases the iPad Air and the iPad Mini 2 in November, the first 64-bit tablets (the iPhone 5S smartphone, released the month before, was the first 64-bit mobile device).
Microsoft releases the Surface 2 with an ARM microprocessor and a two-step kickstand. Alongside it, the Surface Pro 2 is released with an Intel Core i5 processor.
2014
Samsung releases a 2014 version of the Samsung Galaxy Note 10.1.
Microsoft releases the Surface Pro 3.
Nvidia releases the Shield Tablet, an Android tablet focused on gaming.
Google releases the Nexus 9 (the first 64-bit Android tablet).
Apple releases the iPad Air 2.
HP ships the first 64-bit Windows 8.1 tablets with Intel Atom processors.
2015
Android and Windows tablets (and smartphones) are up to 4 GB RAM, using 64-bit processors.
Microsoft released the fourth generation of the Surface Pro, the Surface Pro 4, and a 2-in-1 convertible tablet that could be folded like a laptop, called the Surface Book; both came with sixth-generation Skylake Intel processors.
Apple released the iPad Pro, one of the largest tablet devices ever made, featuring a 12.9-inch display. It also released accessories at the same time, such as its first tablet pointing device, the Apple Pencil.
2016
Apple released the iPad Pro in a 9.7-inch size with a 256 GB option, at the time the largest amount of storage available on a consumer tablet.
2017
Apple released the iPad, its lowest cost 9.7-inch tablet. One reviewer said the tablet is "perfect for first-time tablet buyers".
See also
Comparison of tablet computers
Graphics tablet
Pen computing
Personal digital assistant
Smartbook
Tablet computer
Ultra-Mobile PC
Microsoft Tablet PC
External links
Microsoft Center for Research on Pen-Centric Computing
Notes on the History of Tablet- and Pen-based Computing (YouTube)
Annotated bibliography of references to handwriting recognition and tablet and touch computers
References
History
Personal computers
History of computing
|
1980968
|
https://en.wikipedia.org/wiki/V-Ray
|
V-Ray
|
V-Ray is a biased computer-generated imagery rendering software application developed by Chaos Group (Bulgarian: Хаос Груп), which was established in Sofia, Bulgaria, in 1997. V-Ray is a commercial plug-in for third-party 3D computer graphics software applications and is used for visualizations and computer graphics in industries such as media, entertainment, film and video game production, industrial design, product design and architecture. The company's chief architects are Peter Mitev and Vladimir Koylazov.
Overview
V-Ray is a rendering engine that uses global illumination algorithms, including path tracing, photon mapping, irradiance maps and directly computed global illumination.
The desktop 3D applications that are supported by V-Ray are:
Autodesk 3ds Max
Autodesk Revit
Cinema 4D
Maya
Modo
Nuke
Rhinoceros
SketchUp
Katana
Unreal
Houdini
Blender
Academic and stand-alone versions of V-Ray are also available.
Modo support will be discontinued at the end of 2021.
Studios using V-Ray
North America
United States
Method Studios
Digital Domain
Blur Studio
Zoic Studios
Canada
Method Studios
Digital Domain
Bardel Entertainment
Europe
PostOffice Amsterdam
Germany
Pixomondo
Scanline VFX
References
Further reading
Francesco Legrenzi, V-Ray - The Complete Guide, 2008
Markus Kuhlo and Enrico Eggert, Architectural Rendering with 3ds Max and V-Ray: Photorealistic Visualization, Focal Press, 2010
Ciro Sannino, Photography and Rendering with V-Ray, GC Edizioni, 2012
Luca Deriu, V-Ray e Progettazione 3D, EPC Editore, 2013
Ciro Sannino, Chiaroscuro with V-Ray, GC Edizioni, 2019
External links
Chaos Group
V-Ray at rhino3d.com
A Closer Look At VRAY Architectural Review of V-Ray
VRay Material Downloads and Resource Library
VRAYforC4D - the website of V-Ray for Cinema4d, made by LAUBlab KG
Free Material Library
3D graphics software
Rendering systems
Global illumination software
3D rendering software for Linux
Proprietary commercial software for Linux
|
14664078
|
https://en.wikipedia.org/wiki/Harris%20affine%20region%20detector
|
Harris affine region detector
|
In the fields of computer vision and image analysis, the Harris affine region detector belongs to the category of feature detection. Feature detection is a preprocessing step of several algorithms that rely on identifying characteristic points or interest points in order to make correspondences between images, recognize textures, categorize objects or build panoramas.
Overview
The Harris affine detector can identify similar regions between images that are related through affine transformations and have different illuminations. These affine-invariant detectors should be capable of identifying similar regions in images taken from different viewpoints that are related by a simple geometric transformation: scaling, rotation and shearing. These detected regions have been called both invariant and covariant: the regions are detected invariantly of the image transformation, but the regions themselves change covariantly with the transformation. Whichever naming convention is used, the important point is that these interest points are designed to be compatible across images taken from several viewpoints. Other detectors that are affine-invariant include the Hessian affine region detector, maximally stable extremal regions, the Kadir–Brady saliency detector, edge-based regions (EBR) and intensity-extrema-based regions (IBR).
Mikolajczyk and Schmid (2002) first described the Harris affine detector as it is used today in An Affine Invariant Interest Point Detector. Earlier works in this direction include the use of affine shape adaptation by Lindeberg and Garding for computing affine-invariant image descriptors and thereby reducing the influence of perspective image deformations, the use of affine-adapted feature points for wide-baseline matching by Baumberg, and the first use of scale-invariant feature points by Lindeberg; see the references for an overview of the theoretical background. The Harris affine detector relies on the combination of corner points detected through Harris corner detection, multi-scale analysis through Gaussian scale space, and affine normalization using an iterative affine shape adaptation algorithm. The algorithm follows an iterative approach to detecting these regions:
Identify initial region points using scale-invariant Harris–Laplace detector.
For each initial point, normalize the region to be affine invariant using affine shape adaptation.
Iteratively estimate the affine region: select the proper integration scale and differentiation scale, and spatially localize the interest points.
Update the affine region using these scales and spatial localizations.
Repeat step 3 if the stopping criterion is not met.
Algorithm description
Harris–Laplace detector (initial region points)
The Harris affine detector relies heavily on both the Harris measure and a Gaussian scale space representation. Therefore, a brief examination of both follows. For more exhaustive derivations, see corner detection and Gaussian scale space or their associated papers.
Harris corner measure
The Harris corner detector algorithm relies on a central principle: at a corner, the image intensity will change largely in multiple directions. This can alternatively be formulated by examining the changes of intensity due to shifts in a local window. Around a corner point, the image intensity will change greatly when the window is shifted in an arbitrary direction. Following this intuition and through a clever decomposition, the Harris detector uses the second moment matrix as the basis of its corner decisions. (See corner detection for a more complete derivation.) The matrix $A$, which has also been called the autocorrelation matrix, has values closely related to the derivatives of image intensity:
$A = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$
where $I_x$ and $I_y$ are the respective derivatives (of pixel intensity) in the $x$ and $y$ direction at point $(x, y)$; $x$ and $y$ are the position parameters of the weighting function $w$. The off-diagonal entries are the product of $I_x$ and $I_y$, while the diagonal entries are squares of the respective derivatives. The weighting function can be uniform, but is more typically an isotropic, circular Gaussian,
$w(x,y) = g(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}},$
that acts to average in a local region while weighting those values near the center more heavily.
As it turns out, this matrix describes the shape of the autocorrelation measure due to shifts in window location. Thus, if we let $\lambda_1$ and $\lambda_2$ be the eigenvalues of $A$, then these values will provide a quantitative description of how the autocorrelation measure changes in space: its principal curvatures. As Harris and Stephens (1988) point out, the matrix $A$ centered on corner points will have two large, positive eigenvalues. Rather than extracting these eigenvalues using methods like singular value decomposition, the Harris measure based on the trace and determinant is used:
$R = \det(A) - \alpha \operatorname{trace}^2(A) = \lambda_1 \lambda_2 - \alpha (\lambda_1 + \lambda_2)^2$
where $\alpha$ is a constant. Corner points have large, positive eigenvalues and would thus have a large Harris measure. Thus, corner points are identified as local maxima of the Harris measure that are above a specified threshold:
$\{x_c\} = \{ x_c \mid R(x_c) > R(x_i)\ \forall x_i \in W(x_c),\ R(x_c) > t_{\mathrm{threshold}} \}$
where $\{x_c\}$ is the set of all corner points, $R(x)$ is the Harris measure calculated at $x$, $W(x_c)$ is an 8-neighbor set centered on $x_c$, and $t_{\mathrm{threshold}}$ is a specified threshold.
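To make the construction concrete, the following is a minimal NumPy/SciPy sketch of the Harris measure and the corner-selection rule. The function names, the smoothing scales, the constant alpha = 0.04 and the relative threshold are illustrative assumptions for this sketch, not values prescribed by the detector described here.
import numpy as np
from scipy import ndimage

def harris_measure(image, sigma=1.0, alpha=0.04):
    # Gaussian derivative filters give the intensity derivatives I_x and I_y.
    Ix = ndimage.gaussian_filter(image, sigma, order=(0, 1))
    Iy = ndimage.gaussian_filter(image, sigma, order=(1, 0))
    # Entries of the second-moment (autocorrelation) matrix A, averaged with a
    # Gaussian weighting window w.
    Axx = ndimage.gaussian_filter(Ix * Ix, 2 * sigma)
    Ayy = ndimage.gaussian_filter(Iy * Iy, 2 * sigma)
    Axy = ndimage.gaussian_filter(Ix * Iy, 2 * sigma)
    # R = det(A) - alpha * trace(A)^2, evaluated at every pixel.
    return (Axx * Ayy - Axy ** 2) - alpha * (Axx + Ayy) ** 2

def harris_corners(image, rel_threshold=0.01):
    # Corner points are local maxima of R over an 8-neighborhood, above a threshold.
    R = harris_measure(image.astype(float))
    local_max = (R == ndimage.maximum_filter(R, size=3))
    return np.argwhere(local_max & (R > rel_threshold * R.max()))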
Gaussian scale-space
A Gaussian scale space representation of an image is the set of images that result from convolving a Gaussian kernel of various sizes with the original image. In general, the representation can be formulated as:
$L(\mathbf{x}, s) = G(s) * I(\mathbf{x})$
where $G(s)$ is an isotropic, circular Gaussian kernel of scale $s$ as defined above. The convolution with a Gaussian kernel smooths the image using a window the size of the kernel. A larger scale, $s$, corresponds to a smoother resultant image. Mikolajczyk and Schmid (2001) point out that derivatives and other measurements must be normalized across scales. A derivative of order $m$, $L_{i_1 \dots i_m}(\mathbf{x}, s)$, must be normalized by a factor $s^m$ in the following manner:
$D_{i_1 \dots i_m}(\mathbf{x}, s) = s^m L_{i_1 \dots i_m}(\mathbf{x}, s)$
These derivatives, or any arbitrary measure, can be adapted to a scale space representation by calculating this measure using a set of scales recursively, where the $n$-th scale is $s_n = k^n s_0$. See scale space for a more complete description.
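The scale-space construction and the derivative normalization can be sketched in a few lines of Python. The scale progression (base scale 1.2, factor 1.4) and the helper names are only assumed examples for illustration.
from scipy import ndimage

def gaussian_scale_space(image, s0=1.2, k=1.4, levels=8):
    # L(x, s) = G(s) * I(x) for a geometric progression of scales.
    scales = [s0 * k ** n for n in range(levels)]
    return scales, [ndimage.gaussian_filter(image, s) for s in scales]

def normalized_derivative(image, s, order):
    # A derivative of order m is multiplied by s**m so that responses taken
    # at different scales remain comparable.
    m = sum(order)
    return (s ** m) * ndimage.gaussian_filter(image, s, order=order)

# Example: scale-normalized first derivative in x at scale 2.0:
# Dx = normalized_derivative(img, 2.0, order=(0, 1))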
Combining Harris detector across Gaussian scale-space
The Harris-Laplace detector combines the traditional 2D Harris corner detector with the idea of a Gaussian scale space representation in order to create a scale-invariant detector. Harris corner points are good starting points because they have been shown to have good rotational and illumination invariance in addition to identifying the interesting points of the image. However, the points are not scale invariant and thus the second-moment matrix must be modified to reflect a scale-invariant property. Let us denote by $\mu(\mathbf{x}, \sigma_I, \sigma_D)$ the scale-adapted second-moment matrix used in the Harris-Laplace detector:
$\mu(\mathbf{x}, \sigma_I, \sigma_D) = \sigma_D^2 \, g(\sigma_I) * \begin{bmatrix} L_x^2(\mathbf{x}, \sigma_D) & L_x L_y(\mathbf{x}, \sigma_D) \\ L_x L_y(\mathbf{x}, \sigma_D) & L_y^2(\mathbf{x}, \sigma_D) \end{bmatrix}$
where $g(\sigma_I)$ is the Gaussian kernel of scale $\sigma_I$ and $\mathbf{x} = (x, y)$. Similar to the Gaussian scale space, $L(\mathbf{x})$ is the Gaussian-smoothed image. The $*$ operator denotes convolution. $L_x(\mathbf{x}, \sigma_D)$ and $L_y(\mathbf{x}, \sigma_D)$ are the derivatives in their respective directions applied to the smoothed image and calculated using a Gaussian kernel with scale $\sigma_D$. In terms of our Gaussian scale-space framework, the parameter $\sigma_I$ determines the current scale at which the Harris corner points are detected.
Building upon this scale-adapted second-moment matrix, the Harris-Laplace detector is a twofold process: applying the Harris corner detector at multiple scales and automatically choosing the characteristic scale.
Multi-scale Harris corner points
The algorithm searches over a fixed number of predefined scales. This set of scales is defined as:
$\sigma_I = \sigma_0, k\sigma_0, k^2\sigma_0, \dots, k^n\sigma_0$
Mikolajczyk and Schmid (2004) use $k = 1.4$. For each integration scale, $\sigma_I$, chosen from this set, the appropriate differentiation scale is chosen to be a constant factor of the integration scale: $\sigma_D = s\sigma_I$. Mikolajczyk and Schmid (2004) used $s = 0.7$. Using these scales, the interest points are detected using a Harris measure on the $\mu(\mathbf{x}, \sigma_I, \sigma_D)$ matrix. The cornerness, like the typical Harris measure, is defined as:
$\mathrm{cornerness} = \det(\mu(\mathbf{x}, \sigma_I, \sigma_D)) - \alpha \operatorname{trace}^2(\mu(\mathbf{x}, \sigma_I, \sigma_D))$
Like the traditional Harris detector, corner points are those local (8 point neighborhood) maxima of the cornerness that are above a specified threshold.
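A hedged sketch of this multi-scale step follows; it assumes SciPy's Gaussian filtering for the derivatives and the integration window, and the parameter values (s = 0.7, alpha = 0.04, the relative threshold) are illustrative defaults rather than an authoritative implementation of the detector.
import numpy as np
from scipy import ndimage

def scale_adapted_cornerness(image, sigma_i, s=0.7, alpha=0.04):
    # mu(x, sigma_I, sigma_D) with sigma_D = s * sigma_I; the sigma_D**2 factor
    # normalizes the derivative responses across scales.
    sigma_d = s * sigma_i
    Lx = ndimage.gaussian_filter(image, sigma_d, order=(0, 1))
    Ly = ndimage.gaussian_filter(image, sigma_d, order=(1, 0))
    mu_xx = sigma_d ** 2 * ndimage.gaussian_filter(Lx * Lx, sigma_i)
    mu_yy = sigma_d ** 2 * ndimage.gaussian_filter(Ly * Ly, sigma_i)
    mu_xy = sigma_d ** 2 * ndimage.gaussian_filter(Lx * Ly, sigma_i)
    # cornerness = det(mu) - alpha * trace(mu)^2
    return (mu_xx * mu_yy - mu_xy ** 2) - alpha * (mu_xx + mu_yy) ** 2

def multiscale_harris(image, sigma0=1.2, k=1.4, levels=8, rel_threshold=0.01):
    # Run the Harris measure independently at each predefined integration scale
    # and keep the local 8-neighborhood maxima above a threshold.
    points = []
    for n in range(levels):
        sigma_i = sigma0 * k ** n
        c = scale_adapted_cornerness(image.astype(float), sigma_i)
        mask = (c == ndimage.maximum_filter(c, size=3)) & (c > rel_threshold * c.max())
        points.extend((int(y), int(x), sigma_i) for y, x in np.argwhere(mask))
    return points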
Characteristic scale identification
An iterative algorithm based on Lindeberg (1998) both spatially localizes the corner points and selects the characteristic scale. The iterative search has three key steps, which are carried out for each point $\mathbf{x}^{(k)}$ that was initially detected at scale $\sigma_I^{(k)}$ by the multi-scale Harris detector ($k$ indicates the iteration):
Choose the scale $\sigma_I^{(k+1)}$ that maximizes the Laplacian-of-Gaussians (LoG) over a predefined range of neighboring scales. The neighboring scales are typically chosen from a range that is within a two scale-space neighborhood. That is, if the original points were detected using a scaling factor of 1.4 between successive scales, a two scale-space neighborhood is the range $t \in [0.7, \dots, 1.4]$, and the Gaussian scales examined are $\sigma_I^{(k+1)} = t\,\sigma_I^{(k)}$. The LoG measurement is defined as:
$|\mathrm{LoG}(\mathbf{x}, \sigma_I)| = \sigma_I^2 \, |L_{xx}(\mathbf{x}, \sigma_I) + L_{yy}(\mathbf{x}, \sigma_I)|$
where $L_{xx}$ and $L_{yy}$ are the second derivatives in their respective directions. The factor $\sigma_I^2$ (as discussed above in Gaussian scale-space) is used to normalize the LoG across scales and make these measures comparable, thus making a maximum relevant. Mikolajczyk and Schmid (2001) demonstrate that the LoG measure attains the highest percentage of correctly detected corner points in comparison to other scale-selection measures. The scale which maximizes this LoG measure in the two scale-space neighborhood is deemed the characteristic scale, $\sigma_I^{(k+1)}$, and is used in subsequent iterations. If no extremum of the LoG is found, this point is discarded from future searches.
Using the characteristic scale, the points are spatially localized. That is to say, the point is chosen such that it maximizes the Harris corner measure (cornerness as defined above) within an 8×8 local neighborhood.
Stopping criterion: $\sigma_I^{(k+1)} = \sigma_I^{(k)}$ and $\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)}$.
If the stopping criterion is not met, then the algorithm repeats from step 1 using the new points and scale. When the stopping criterion is met, the found points represent those that maximize the LoG across scales (scale selection) and maximize the Harris corner measure in a local neighborhood (spatial selection).
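The scale-selection step can be illustrated with the following sketch, which evaluates the scale-normalized LoG over an assumed two scale-space neighborhood of candidate factors. The factor list and function names are illustrative assumptions, and a practical implementation would restrict the computation to a patch around the point rather than filtering the whole image.
import numpy as np
from scipy import ndimage

def normalized_log(image, sigma):
    # Scale-normalized Laplacian of Gaussian: sigma**2 * |L_xx + L_yy|.
    Lxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Lyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    return np.abs(sigma ** 2 * (Lxx + Lyy))

def characteristic_scale(image, y, x, sigma,
                         factors=(0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4)):
    # Evaluate the normalized LoG at the point over the scale neighborhood
    # and return the candidate scale that maximizes it.
    candidates = [sigma * t for t in factors]
    responses = [normalized_log(image, s)[y, x] for s in candidates]
    best = int(np.argmax(responses))
    return candidates[best], responses[best]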
Affine-invariant points
Mathematical theory
The Harris-Laplace detected points are scale invariant and work well for isotropic regions that are viewed from the same viewing angle. In order to be invariant to arbitrary affine transformations (and viewpoints), the mathematical framework must be revisited. The second-moment matrix is defined more generally for anisotropic regions:
$\mu(\mathbf{x}, \Sigma_I, \Sigma_D) = \det(\Sigma_D) \, g(\Sigma_I) * \left( (\nabla L)(\mathbf{x}, \Sigma_D) \, (\nabla L)(\mathbf{x}, \Sigma_D)^T \right)$
where $\Sigma_D$ and $\Sigma_I$ are covariance matrices defining the differentiation and the integration Gaussian kernel scales. Although this may look significantly different from the second-moment matrix in the Harris-Laplace detector, it is in fact identical. The earlier matrix was the 2D-isotropic version in which the covariance matrices $\Sigma_D$ and $\Sigma_I$ were 2×2 identity matrices multiplied by the factors $\sigma_D^2$ and $\sigma_I^2$, respectively. In the new formulation, one can think of Gaussian kernels as multivariate Gaussian distributions as opposed to a uniform Gaussian kernel. A uniform Gaussian kernel can be thought of as an isotropic, circular region. Similarly, a more general Gaussian kernel defines an ellipsoid. In fact, the eigenvectors and eigenvalues of the covariance matrix define the rotation and size of the ellipsoid. Thus this representation allows us to completely define an arbitrary elliptical affine region over which we want to integrate or differentiate.
The goal of the affine invariant detector is to identify regions in images that are related through affine transformations. We thus consider a point $\mathbf{x}_L$ and the transformed point $\mathbf{x}_R = A\mathbf{x}_L$, where $A$ is an affine transformation. In the case of images, both $\mathbf{x}_L$ and $\mathbf{x}_R$ live in $\mathbb{R}^2$ space. The second-moment matrices are related in the following manner:
$M_L = \mu(\mathbf{x}_L, \Sigma_{I,L}, \Sigma_{D,L}), \quad M_R = \mu(\mathbf{x}_R, \Sigma_{I,R}, \Sigma_{D,R}), \quad M_L = A^T M_R A, \quad \Sigma_{I,R} = A\,\Sigma_{I,L}\,A^T, \quad \Sigma_{D,R} = A\,\Sigma_{D,L}\,A^T$
where $\Sigma_{I,b}$ and $\Sigma_{D,b}$ are the covariance matrices for the reference frame $b \in \{L, R\}$. If we continue with this formulation and enforce that
$\Sigma_{I,L} = \sigma_I\, M_L^{-1}, \quad \Sigma_{D,L} = \sigma_D\, M_L^{-1}$
where $\sigma_I$ and $\sigma_D$ are scalar factors, one can show that the covariance matrices for the related point are similarly related:
$\Sigma_{I,R} = \sigma_I\, M_R^{-1}, \quad \Sigma_{D,R} = \sigma_D\, M_R^{-1}$
By requiring the covariance matrices to satisfy these conditions, several nice properties arise. One of these properties is that the square root of the second-moment matrix, $M^{1/2}$, will transform the original anisotropic region into isotropic regions that are related simply through a pure rotation matrix $R$. These new isotropic regions can be thought of as a normalized reference frame. The following equations formulate the relation between the normalized points $\mathbf{x}_L'$ and $\mathbf{x}_R'$:
$\mathbf{x}_L' = M_L^{1/2}\,\mathbf{x}_L, \quad \mathbf{x}_R' = M_R^{1/2}\,\mathbf{x}_R, \quad \mathbf{x}_R' = R\,\mathbf{x}_L'$
The rotation matrix can be recovered using gradient methods like those in the SIFT descriptor. As discussed with the Harris detector, the eigenvalues and eigenvectors of the second-moment matrix characterize the curvature and shape of the pixel intensities. That is, the eigenvector associated with the largest eigenvalue indicates the direction of largest change and the eigenvector associated with the smallest eigenvalue defines the direction of least change. In the 2D case, the eigenvectors and eigenvalues define an ellipse. For an isotropic region, the region should be circular in shape and not elliptical. This is the case when the eigenvalues have the same magnitude. Thus a measure of the isotropy around a local region is defined as the following:
$\mathcal{Q} = \frac{\lambda_{\min}(\mu)}{\lambda_{\max}(\mu)}$
where $\lambda_{\min}$ and $\lambda_{\max}$ denote the minimum and maximum eigenvalues. This measure has the range $[0, 1]$. A value of $1$ corresponds to perfect isotropy.
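As a small illustration, the isotropy measure can be computed directly from the eigenvalues of a 2×2 second-moment matrix; the example matrix below is an arbitrary assumption chosen only to show a strongly anisotropic case.
import numpy as np

def isotropy_measure(mu):
    # Q = lambda_min / lambda_max of the 2x2 second-moment matrix;
    # Q = 1 corresponds to perfect isotropy.
    lam = np.linalg.eigvalsh(mu)   # eigenvalues in ascending order
    return lam[0] / lam[-1]

# A strongly anisotropic region gives a value well below 1:
print(isotropy_measure(np.array([[4.0, 0.5], [0.5, 1.0]])))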
Iterative algorithm
Using this mathematical framework, the Harris affine detector algorithm iteratively discovers the second-moment matrix that transforms the anisotropic region into a normalized region in which the isotropic measure is sufficiently close to one. The algorithm uses this shape adaptation matrix, $U$, to transform the image into a normalized reference frame. In this normalized space, the interest points' parameters (spatial location, integration scale and differentiation scale) are refined using methods similar to the Harris-Laplace detector. The second-moment matrix is computed in this normalized reference frame and should have an isotropic measure close to one at the final iteration. At every $k$-th iteration, each interest region is defined by several parameters that the algorithm must discover: the matrix $U^{(k)}$, position $\mathbf{x}^{(k)}$, integration scale $\sigma_I^{(k)}$ and differentiation scale $\sigma_D^{(k)}$. Because the detector computes the second-moment matrix in the transformed domain, it is convenient to denote this transformed position as $\mathbf{x}_w^{(k)}$, where $\mathbf{x}^{(k)} = U^{(k)}\mathbf{x}_w^{(k)}$.
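The shape-adaptation loop can be sketched as follows. This is only an outline under simplifying assumptions: the caller is assumed to supply a hypothetical sample_patch(U) callback that warps the image around the interest point with the current transformation U, the stopping threshold of 0.95 on the isotropy measure is an illustrative choice, and the refinement of position and scales performed by the full algorithm is omitted.
import numpy as np
from scipy import ndimage

def second_moment_matrix(patch, sigma_i=2.0, sigma_d=1.4):
    # 2x2 second-moment matrix of a patch, read off at the patch center.
    Lx = ndimage.gaussian_filter(patch, sigma_d, order=(0, 1))
    Ly = ndimage.gaussian_filter(patch, sigma_d, order=(1, 0))
    w = lambda a: ndimage.gaussian_filter(a, sigma_i)
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    return sigma_d ** 2 * np.array([[w(Lx * Lx)[cy, cx], w(Lx * Ly)[cy, cx]],
                                    [w(Lx * Ly)[cy, cx], w(Ly * Ly)[cy, cx]]])

def shape_adaptation(sample_patch, max_iter=10, q_target=0.95):
    # Accumulate U <- mu^(-1/2) U until the normalized region is nearly isotropic.
    # sample_patch(U) must return the image patch warped by U around the point.
    U = np.eye(2)
    for _ in range(max_iter):
        mu = second_moment_matrix(sample_patch(U))
        lam, vec = np.linalg.eigh(mu)
        if lam[0] / lam[-1] > q_target:       # isotropy measure close to 1
            break
        mu_inv_sqrt = vec @ np.diag(lam ** -0.5) @ vec.T
        U = mu_inv_sqrt @ U
        U /= np.sqrt(abs(np.linalg.det(U)))   # keep the overall scale of U fixed
    return U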
Computation and implementation
The computational complexity of the Harris-Affine detector is broken into two parts: initial point detection and affine region normalization. The initial point detection algorithm, Harris-Laplace, has complexity $\mathcal{O}(n)$, where $n$ is the number of pixels in the image. The affine region normalization algorithm automatically detects the scale and estimates the shape adaptation matrix, $U$. This process has complexity $\mathcal{O}((m + k)p)$, where $p$ is the number of initial points, $m$ is the size of the search space for the automatic scale selection and $k$ is the number of iterations required to compute the $U$ matrix.
Some methods exist to reduce the complexity of the algorithm at the expense of accuracy. One method is to eliminate the search in the differentiation scale step. Rather than choosing a factor from a set of factors, the sped-up algorithm chooses the scale to be constant across iterations and points: $\sigma_D = 0.7\,\sigma_I$. Although this reduction in search space might decrease the complexity, this change can severely affect the convergence of the $U$ matrix.
Analysis
Convergence
One can imagine that this algorithm might identify duplicate interest points at multiple scales. Because the Harris affine algorithm looks at each initial point given by the Harris-Laplace detector independently, there is no discrimination between identical points. In practice, it has been shown that these points will ultimately all converge to the same interest point. After finishing identifying all interest points, the algorithm accounts for duplicates by comparing the spatial coordinates ($\mathbf{x}$), the integration scale $\sigma_I$, the isotropic measure $\mathcal{Q}$ and the skew. If these interest point parameters are similar within a specified threshold, then they are labeled duplicates. The algorithm discards all these duplicate points except for the interest point that is closest to the average of the duplicates. Typically 30% of the Harris affine points are distinct and dissimilar enough not to be discarded.
Mikolajczyk and Schmid (2004) showed that often the initial points (40%) do not converge. The algorithm detects this divergence by stopping the iterative algorithm if the inverse of the isotropic measure is larger than a specified threshold: $\frac{\lambda_{\max}(\mu)}{\lambda_{\min}(\mu)} > t_{\mathrm{diverge}}$. Mikolajczyk and Schmid (2004) use $t_{\mathrm{diverge}} = 6$. Of those that did converge, the typical number of required iterations was 10.
Quantitative measure
Quantitative analysis of affine region detectors takes into account both the accuracy of point locations and the overlap of regions across two images. Mikolajczyk and Schmid (2004) extend the repeatability measure of Schmid et al. (1998) as the ratio of point correspondences to the minimum number of detected points of the two images:
$r_{1,2} = \frac{C(I_1, I_2)}{\min(n_1, n_2)}$
where $C(I_1, I_2)$ is the number of corresponding points in images $I_1$ and $I_2$, and $n_1$ and $n_2$ are the number of detected points in the respective images. Because each image represents 3D space, it might be the case that one image contains objects that are not in the second image, so their interest points have no chance of corresponding. In order to make the repeatability measure valid, one must remove these points and only consider points that lie in the part of the scene visible in both images; $n_1$ and $n_2$ count only such points. For a pair of images related through a homography matrix $H$, two points $\mathbf{x}_a$ and $\mathbf{x}_b$ are said to correspond if the overlap error of their detected regions, with one region projected into the other image via $H$, is smaller than a specified threshold.
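As a trivial numerical illustration of the repeatability ratio itself (not of the overlap test), assuming the correspondence counting has already been done; the numbers below are invented for the example.
def repeatability(num_correspondences, n1, n2):
    # Ratio of region correspondences to the smaller of the two detection
    # counts taken over the part of the scene visible in both images.
    return num_correspondences / min(n1, n2)

# Example: 320 corresponding regions out of 800 and 650 detections -> ~0.49.
print(repeatability(320, 800, 650))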
Robustness to affine and other transformations
Mikolajczyk et al. (2005) have done a thorough analysis of several state-of-the-art affine region detectors: Harris affine, Hessian affine, MSER, IBR & EBR and salient detectors. Mikolajczyk et al. analyzed both structured images and textured images in their evaluation. Linux binaries of the detectors and their test images are freely available at their webpage. A brief summary of the results of Mikolajczyk et al. (2005) follow; see A comparison of affine region detectors for a more quantitative analysis.
Viewpoint Angle Change: The Harris affine detector has reasonable (average) robustness to these types of changes. The detector maintains a repeatability score of above 50% up until a viewpoint angle of above 40 degrees. The detector tends to detect a high number of repeatable and matchable regions even under a large viewpoint change.
Scale Change: The Harris affine detector remains very consistent under scale changes. Although the number of points declines considerably at large scale changes (above 2.8), the repeatability (50-60%) and matching scores (25-30%) remain very constant especially with textured images. This is consistent with the high-performance of the automatic scale selection iterative algorithm.
Blurred Images: The Harris affine detector remains very stable under image blurring. Because the detector does not rely on image segmentation or region boundaries, the repeatability and matching scores remain constant.
JPEG Artifacts: The Harris affine detector degrades similarly to other affine detectors: repeatability and matching scores drop significantly above 80% compression.
Illumination Changes: The Harris affine detector, like other affine detectors, is very robust to illumination changes: repeatability and matching scores remain constant under decreasing light. This should be expected because the detectors rely heavily on relative intensities (derivatives) and not absolute intensities.
General trends
Harris affine region points tend to be small and numerous. Both the Harris-Affine detector and the Hessian-Affine detector consistently identify double the number of repeatable points as other affine detectors: ~1000 regions for an 800×640 image. Small regions are less likely to be occluded but have a smaller chance of overlapping neighboring regions.
The Harris affine detector responds well to textured scenes in which there are a lot of corner-like parts. However, for some structured scenes, like buildings, the Harris-Affine detector performs very well. This is complementary to MSER that tends to do better with well structured (segmentable) scenes.
Overall the Harris affine detector performs very well, but still behind MSER and Hessian-Affine in all cases but blurred images.
Harris-Affine and Hessian-Affine detectors are less accurate than others: their repeatability score increases as the overlap threshold is increased.
The detected affine-invariant regions may still differ in their rotation and illumination. Any descriptor that uses these regions must account for the invariance when using the regions for matching or other comparisons.
Applications
Content-based image retrieval
Model-based recognition
Object retrieval in video
Visual data mining: identifying important objects, characters and scenes in videos
Object recognition and categorization
Remotely sensed image analysis: Object detection from remotely sensed images
Software packages
Affine Covariant Features: K. Mikolajczyk maintains a web page that contains Linux binaries of the Harris-Affine detector in addition to other detectors and descriptors. Matlab code is also available that can be used to illustrate and compute the repeatability of various detectors. Code and images are also available to duplicate the results found in the Mikolajczyk et al. (2005) paper.
lip-vireo - binary code for Linux, Windows and SunOS from VIREO research group. See more from the homepage
External links
- Presentation slides from Mikolajczyk et al. on their 2005 paper.
- Cordelia Schmid's Computer Vision Lab
- Code, test Images, bibliography of Affine Covariant Features maintained by Krystian Mikolajczyk and the Visual Geometry Group from the Robotics group at the University of Oxford.
- Bibliography of feature (and blob) detectors maintained by USC Institute for Robotics and Intelligent Systems
- Digital implementation of Laplacian of Gaussian
See also
Hessian-affine
MSER
Kadir brady saliency detector
Scale space
Isotropy
Corner detection
Interest point detection
Affine shape adaptation
Image derivative
Computer vision
ASIFT -> Affine-Sift (A fully affine invariant image matching algorithm)
References
Feature detection (computer vision)
|
22152533
|
https://en.wikipedia.org/wiki/Agrochola
|
Agrochola
|
Agrochola is a genus of moths of the family Noctuidae. The genus was erected by Jacob Hübner in 1821.
Species
Agrochola agnorista Boursin, 1955
Agrochola albimacula Kononenko, 1978
Agrochola albirena Boursin, 1956
Agrochola antiqua (Hacker, 1993)
Agrochola approximata (Hampson, 1906)
Agrochola attila Hreblay & Ronkay, 1999
Agrochola azerica Ronkay & Gyulai, 1997
Agrochola blidaensis (Stertz, 1915)
Agrochola circellaris (Hufnagel, 1766) – the brick
Agrochola deleta (Staudinger, 1881)
Agrochola disrupta Wiltshire, 1952
Agrochola dubatolovi Varga & Ronkay, 1991
Agrochola egorovi (Bang-Haas, 1934)
Agrochola evelina (Butler, 1879)
Agrochola fibigeri Hacker & Moberg, 1989
Agrochola flavirena (Moore, 1881)
Agrochola gorza Hreblay & Ronkay, 1999
Agrochola gratiosa (Staudinger, 1881)
Agrochola haematidea (Duponchel, 1827) – southern chestnut
Agrochola helvola (Linnaeus, 1758)
Agrochola humilis (Denis & Schiffermüller, 1775)
Agrochola hypotaenia (Bytinski-Salz, 1936)
Agrochola imitana Ronkay, 1984
Agrochola insularis (Walker, 1875)
Agrochola janhillmanni (Hacker & Moberg, 1989)
Agrochola karma Hreblay, Peregovits & Ronkay, 1999
Agrochola kindermanni (Fischer von Röslerstamm, [1841])
Agrochola kosagezai Hreblay, Peregovits & Ronkay, 1999
Agrochola kunandrasi Hreblay & Ronkay, 1999
Agrochola lactiflora (Draudt, 1934)
Agrochola laevis (Hübner, [1803])
Agrochola leptographa Hacker & Ronkay, 1990
Agrochola litura (Linnaeus, 1761)
Agrochola lota (Clerck, 1759) – red-line Quaker
Agrochola luteogrisea (Warren, 1911)
Agrochola lychnidis (Denis & Schiffermüller, 1775) – beaded chestnut
Agrochola macilenta (Hübner, [1809]) – yellow-line Quaker
Agrochola mansueta (Herrich-Schäffer, [1850])
Agrochola meridionalis (Staudinger, 1871)
Agrochola minorata Hreblay & Ronkay, 1999
Agrochola naumanni Hacker & Ronkay, 1990
Agrochola nekrasovi Hacker & Ronkay, 1992
Agrochola nigriclava Boursin, 1957
Agrochola nitida (Denis & Schiffermüller, 1775)
Agrochola occulta Hacker, [1997]
Agrochola orejoni Agenjo, 1951
Agrochola orientalis Fibiger, 1997
Agrochola oropotamica (Wiltshire, 1941)
Agrochola osthelderi Boursin, 1951
Agrochola pallidilinea Hreblay, Peregovits & Ronkay, 1999
Agrochola pamiricola Hacker & Ronkay, 1992
Agrochola phaeosoma (Hampson, 1906)
Agrochola pistacinoides (d'Aubuisson, 1867)
Agrochola plumbea (Wiltshire, 1941)
Agrochola plumbitincta Hreblay, Peregovits & Ronkay, 1999
Agrochola prolai Berio, 1976
Agrochola pulchella (Smith, 1900)
Agrochola pulvis (Guenée, 1852)
Agrochola punctilinea Hreblay & Ronkay, 1999
Agrochola purpurea (Grote, 1874)
Agrochola rufescentior (Rothschild, 1914)
Agrochola rupicapra (Staudinger, 1878)
Agrochola sairtana Derra, 1990
Agrochola sakabei Sugi, 1980
Agrochola scabra (Staudinger, 1891)
Agrochola schreieri (Hacker & Weigert, 1984)
Agrochola semirena (Draudt, 1950)
Agrochola siamica Hreblay & Ronkay, 1999
Agrochola spectabilis Hacker & Ronkay, 1990
Agrochola statira Boursin, 1960
Agrochola staudingeri Ronkay, 1984
Agrochola telortoides Hreblay & Ronkay, 1999
Agrochola thurneri Boursin, 1953
Agrochola trapezoides (Staudinger, 1882)
Agrochola tripolensis (Hampson, 1914)
Agrochola turcomanica Ronkay, Varga & Hreblay, 1998
Agrochola turneri Boursin, 1953
Agrochola vulpecula (Lederer, 1853)
Agrochola wolfsclaegeri Boursin, 1953
Agrochola zita Hreblay & Ronkay, 1999
The following species are sometimes placed in the genus Sunira, while other authors consider Sunira to be a subgenus of Agrochola:
Agrochola bicolorago (Guenée, 1852)
Agrochola decipiens (Grote, 1881)
Agrochola verberata (Smith, 1904)
References
Noctuoidea genera
|
7828921
|
https://en.wikipedia.org/wiki/University%20of%20the%20Philippines%20Mindanao
|
University of the Philippines Mindanao
|
The University of the Philippines Mindanao (also referred to as UPMin or UP Mindanao) is a public research university, serving as the sixth constituent unit of the University of the Philippines System. UP Mindanao is the only constituent university of the UP System that was created through legislative action. Republic Act 7889 formally created UP Mindanao on February 20, 1995. The university was later formally recognized as an independent constituent university by the Board of Regents of the UP System on March 23, 1995. Its main focus of education is Mindanao studies, pursued through an affirmative action program in the Autonomous Region in Muslim Mindanao to attract Muslim and Lumad students, alongside marginalized and deserving students. The university was first named University of the Philippines in Mindanao to declare the arrival of the state university in Mindanao after a long period of waiting by eager alumni and students. The name was later changed to its present form at the request of the UP System for formality's sake.
The university offers nine undergraduate degree programs and five graduate programs heavily inclined toward research through its two colleges and one school; the university is the only one of its kind in the Philippines to have a discipline in Agribusiness Economics (ABE), and is one of only three worldwide to have such a degree program.
As of 2017, the Philippines' Commission on Higher Education (CHED) has designated UPMin a National Center of Development (COD) in Information Technology and in Biology education (College of Science and Mathematics). The university is currently developing itself into a Center of Culture and Languages, engaging in activities that enrich the Filipino language and the ethnic languages of Mindanao through literature and translation; the university is also a co-founder and active member of the Davao Colleges and Universities Network (DACUN) in the field of cultural integration and development.
The university also aims to become the best science university in Southern Philippines with the UP Mindanao Science and Technology Park Consortium; the university's long-term plan is to transform UPMin into a "green university town," the only one of its kind in the UP System and in the Philippines.
History
A vision of “UP in Mindanao"
There were already stirrings for the establishment of a UP in Mindanao as early as the late 1950s or early 1960s. The UP Alumni Association-Davao Chapter, which was established on December 3, 1949, clamored for the creation of a "UP in Mindanao" for more than two decades; as early as 1961, the UP Summer School already offered extension courses in Law, Business Administration and Education, with the old location of the Davao Central Elementary School as venue. This was then replaced by the short-lived UP Extension Division Davao in 1970. However, a feasibility study conducted by Hannah et al. dismissed the idea of creating a permanent constituent campus of UP in Mindanao due to financial concerns.
On November 24–25, 1989, the UPAA Davao Chapter hosted the 12th UP Alumni Institute with Senator Vicente Paterno as keynote speaker. Then UP President Jose Abueva, who attended the conference, was confronted with the strong and united voice of about 630 alumni in attendance as well as media and the business sector, all clamoring for a resolution that would promote the establishment of a "UP in Mindanao" located in Davao City. The resolution was unanimously approved and adopted by the Institute on November 25, 1989. This was submitted to the UPAA national and endorsed by the UP Board of Regents.
The vision of a UP in Mindanao began to materialize when, during the third regular session of the House of Representatives on April 30, 1990, 1st District Representative Prospero Nograles introduced House Bill 13382, also known as the "Act to Establish the University of the Philippines in Mindanao". A public hearing on House Bill 13382 was conducted by Senator Edgardo Angara (then Chairman of the Senate Committee on Education) on October 12, 1990 at the Davao Chamber of Commerce. The bill was sponsored by Representatives Prospero Nograles and Rodolfo del Rosario of Davao del Norte. A strong sentiment for the establishment of a UP in Mindanao pervaded the public hearing.
To appease the strong clamor for a UP Mindanao, Abueva instead created the UP Consortium System, much like the UP Open University. When Abueva's term ended, President Fidel V. Ramos appointed Emil Q. Javier as UP President. It turned out that Ramos had already instructed Javier to create a fact-finding committee, composed of Regents Oscar Alfonso, Vice-President for Planning Fortunato dela Peña, Atty. Carmelita Yadao-Guno, and Rogelio V. Cuyno, to conduct another study concerning the proposal to create UP Mindanao.
The committee came to Davao for an ocular inspection of the University of Southeastern Philippines (USEP) and Bago Oshiro, Mintal. After receiving the committee report, Javier did not pursue the plan to convert USEP into UP Mindanao due to demonstrations conducted by USEP faculty and staff. Instead, he opted for congressional action. At the same time, the committee also talked with Bureau of Plant Industry director Nerios Roperos about the segregation of 204 hectares of the BPI area in Bago Oshiro for the UP Mindanao campus. The task of pursuing UP Mindanao was given to Representative Elias B. Lopez, as he was the only UP alumnus among the Davao Representatives.
Establishment
Republic Act 7889, entitled, "An Act Creating the University of the Philippines Mindanao," was finally enacted into law on February 20, 1995, by President Fidel V. Ramos. On March 22 of the same year, the Board of Regents passed a resolution officially creating the University of the Philippines Mindanao. To emphasize the importance of RA 7889 to the Mindanaoans, a re-enactment of the signing was held at the Bangko Sentral ng Pilipinas, with President Ramos himself in attendance. In that same year, Rogelio V. Cuyno was appointed as UP Mindanao's first dean.
The Lee Business Center in Juan Luna Street corner J. de la Cruz Street, and the Casa Mercado Building in Matina served as UP Mindanao's home from March to September 1995 and from September 1995 to January 1996 respectively, until it finally found its home at Ladislawa Avenue.
Because the university then had no land of its own on which to construct its facilities, President Ramos signed Proclamation No. 822 reserving 204 hectares of government-sequestered land in Mintal for UP Mindanao. This land was originally owned by the Japanese Ohta Development Corporation and was later returned to the Philippine Government as part of war reparations after World War II.
In June 1996, the College of Arts and Sciences and the School of Management were created. After a year, the CAS was split into the College of Humanities and Social Sciences and the College of Science and Mathematics; the School of Management was retained.
In 1997, the Elias B. Lopez Residential Hall was constructed. It was initially envisioned to be a dormitory for female students, as one of the existing structures used by UP Mindanao inside the Philippine Coconut Authority (PCA) building was a male-only dormitory.
A period of changes (1998–2000)
On February 20, 1998, the UP Oblation was installed at the Bago Oshiro campus during the university's 3rd Foundation Anniversary. The statue was made by Napoleon Abueva, and is a replica of the original UP Oblation by Guillermo Tolentino. It was accompanied by the UP Madrigal Singers upon its arrival in Davao and was brought to Mintal after a citywide motorcade from Sasa Wharf. It was temporarily installed at the CSM grounds, then later moved to its permanent location at the Oblation Circle, in front of the Administration Building.
The Board of Regents granted full autonomy to UP Mindanao on February 26, 1998, making it the sixth and latest constituent unit of the UP System. Dean Rogelio V. Cuyno was elevated as its first Chancellor on December 11 of the same year. President Ramos also signed Proclamation Nos. 1252 and 1253, segregating land reservations in Laak, Compostela Valley (2,800 hectares), and Marilog District, Davao City (4,100 hectares), for research, extension, and instruction purposes.
The respective colleges of UP Mindanao found their homes during this period: the School of Management began its occupancy of the Terraza Milesa Building at F. Iñigo Street (Anda Street), while the College of Science and Mathematics and the College of Humanities and Social Sciences used the Academic Building I left by the University of Southeastern Philippines. Some classes were still held in the PCA Building, as the Administration Building was being constructed during this period. It was eventually used and occupied in 2000 by the university administration, then later by the CHSS in 2009.
The pioneering batch of graduates celebrated their commencement exercises later that year.
The first decade (2000–2010)
The university's second Chancellor was Ricardo M. de Ungria, who was installed on September 21, 2001. Under his chancellorship, UP Mindanao co-founded the Davao Colleges and Universities Network (DACUN), the Mindanao Science and Technology Park Consortium (MSTPC) and the Mindanao Studies Consortium Foundation Inc. (MSCFI). In addition, the Board of Regents approved in 2002 the creation of two offices that delivered UP Mindanao's mandate, the Office of Extension and Community Services (OECS) and the Office of Research (OR). The construction of the Sitio Basak Road, which connects UP Mindanao and neighboring Sitio Basak to Mintal and Davao City, was completed in 2006. Initially, this unpaved road, known to the UP Mindanao community as the Abortion Road, was one of the hindrances that students, teachers, and the community had to endure every day, especially during rainy days, when the road was impassable to almost all kinds of vehicles.
The first edition of the university's refereed journal, Banwa, was published in 2004. It was the first major academic journal to be printed in Mindanao. This was followed in 2006 by the Commission on Higher Education (CHED) naming the School of Management's Agribusiness Supply Chain program the "Best Higher Education Research Program". The same program won the award again in 2010.
Gilda C. Rivero succeeded De Ungria as UP Mindanao's third chancellor on July 30, 2007. Her two vice chancellors were Emma Ruth V. Bayogan, Vice-Chancellor for Academic Affairs, and Miguel D. Soledad, appointed Officer-in-Charge of the Office of the Vice-Chancellor for Administration starting June 2, 2008, with Soledad's appointment eventually approved by the Board of Regents at its 1233rd meeting. It was during this year that CHED recognized UP Mindanao, specifically its Computer Science program, as a Center of Development in IT Education. The CHED Zonal Research Center was established in UP Mindanao in 2008, focusing on biodiversity and biotechnology research and development.
However, during the first year of Chancellor Rivero's administration, the general tuition fee was increased following a revision of tuition and other fees (TOFI), resulting in matriculation of PhP600 per unit, which angered many regular, outgoing, and incoming students because of the high fees coupled with miscellaneous charges such as cultural and internet fees. Due to increased protests by students and some faculty members, the university was closely guarded by the Philippine National Police, the Army, and the ROTC corps during the visit of former President Gloria Macapagal-Arroyo for the laying of the capsule for the Biotechnology Complex at CSM in 2009; many protesters, mostly from UPMin and some from other universities within the region, rallied at the Administration Building and barricaded Kanluran Road to block the passage of the presidential convoy. Only former UP President Emerlinda R. Roman and DOST Secretary Estrella F. Alabastro were present during the event.
The Centennial Year of the University of the Philippines was celebrated in 2008, with UP Mindanao holding a kick-off ceremony and testimonial dinner as part of its celebrations. In 2009, the Administration Building was completed and its right wing became the residence of the College of Humanities and Social Sciences. In that year, Banwa's issue on sago (Metroxylon sagu) was named "Most Outstanding Monograph" by the National Academy of Science and Technology. The CHED Higher Education Regional Research Center for the Davao Region was hosted by UP Mindanao in 2012 for its research program "Sustainable Development of the Philippine Tuna Value Chain."
The House of Representatives, through its Committee on Higher and Technical Education, approved on February 10, 2010, House Bill 3076 (An act providing for the Development Fund for the Medium-Term Development Plan of UP Mindanao), authored by First District Congressman Karlo Alexei B. Nograles, which provides UPMin with PhP1.2 billion, spread over four years, for the construction of vital facilities set out in the UP Mindanao Land Use and Master Development Plan of 2009, including the CHSS and SOM Buildings, the Student Dormitory, and the DOST-SEI-CSM Biotechnology Facility, among others.
During the Kasadya celebrations on December 17, 2010, Bt eggplant samples planted by the university as part of its scientific research program were uprooted by personnel from the City Agriculturist's Office while the community was at the Peoples' Park for the said event. At issue were concerns over the biological safety of the genetically modified samples to the environment, which had been deliberated three months before the uprooting.
The last year of Chancellor Rivero's term saw the rapid improvement of the road system within UPMin, with the construction of Mindanao Avenue, which directly connects the Administration Building to Mintal via the Sitio Basak Road and to downtown Davao via the Davao-Bukidnon Highway, and the initial repair and construction of the Oblation Circle, due in part to HB 3076. The School of Management's Administration and Corporate Center (SOM Building Phase 1) also reached its final phase of construction in January 2011 and was inaugurated in May of that year. The completion of the SOM Building meant that all three of UPMin's academic colleges were located within its campus at Mintal, Tugbok District, Davao City; the graduate programs of SOM also moved to Mintal.
Twenty years and beyond (2010–present)
During the 1285th Board of Regents meeting on January 24, 2013, the UP Board of Regents selected Sylvia B. Concepcion as the fourth Chancellor of UP Mindanao. Her term started on March 1, 2013, and ended on February 28, 2016. The turn-over ceremony (and also the testimonial ceremony for Chancellor Rivero) was held on February 22, 2013, and the Investiture Ceremony for the new chancellor was conducted on April 19, 2013, simultaneous with the 16th Commencement Exercises.
During the 31st SR Selection of the General Assembly of Student Councils (GASC) at UP Visayas Miag-ao Campus, former UP Mindanao University Student Council (USC) Chairperson Krista Iris Melgarejo was chosen as the 31st Student Regent of the UP System. She is the first SR to hail from the university. Her term formally started during the June 20 UP Board of Regents meeting.
The construction of the University Main Library began in August 2013, after a prolonged land-ownership dispute between the university and settler associations living within the campus had hampered efforts to construct the building earlier that year. The Main Library was finished in the last quarter of 2014.
UP Mindanao was awarded as the "Best Implementing Agency" during the 25th Founding Anniversary of the Southern Mindanao Regional Research and Development Consortium (SMARRDEC) in 2013. In addition, it also hosted its first international research conference, entitled the International Conference on Agribusiness Economics and Management (ICAEM).
The university celebrated its 20th Foundation Anniversary throughout the month of February 2015 under the theme, "Ika-baynte saulugon, UPMin padayon!" (Celebrate the twentieth, onward UPMin!). One of the main highlights of the month-long celebrations was the bayanihan construction of the Oblation Plaza on February 28, entitled "Isang Libong Alumni Para Kay Oble" (A Thousand UP Alumni for the Oblation), led by the UP Alumni Association-Davao Chapter and various alumni from all UP constituent universities together with members of the UPMin administration, staff, faculty, and students. During this event, the ceremonial lighting of the UP Mindanao Oblation (inspired by UP Mindanao's own Torch Night) featured alumni from UP Manila, UP Diliman, UP Los Baños, UP Visayas, UP Cebu, UP Open University, UP Baguio, and UP Mindanao as torchbearers before the lighting of the cauldron in front of the Oblation.
The Milestones Exhibit featuring the academic regalia of the four chancellors, a photo exhibit of historical events, and a roster of achievements, was displayed in the Administration Building throughout the whole month. The main academic symposium, Science for Society, was held on February 16, with DOST Undersecretary Amelia Guevara as the keynote speaker. Week-long celebrations and events for each of the university's colleges were also observed, with CHSS celebrating its anniversary on the first week, CSM on the second week, and SOM on the last week. The Students' Week and Dorm Week were held on the third week, featuring events and contests from various student organizations, the highlight of which was Tatak UPMin on February 27. All of these academic and recreational events were open to the public.
In 2019, former School of Management (SOM) dean Larry N. Digal was appointed by the Board of Regents as the fifth chancellor of UP Mindanao, serving from 1 March 2019 to 28 February 2022. His term saw the rapid expansion of the university's academic, research, and public service offerings, including the institution of two new degree programs (Associate in Sports Studies under the College of Humanities and Social Sciences, and Ph.D. in Management under the School of Management), the creation of 14 new faculty and several administrative items, and continuing infrastructure development on the campus; the CHSS Cultural Complex, the EBL Residential Hall Annex, the Center for Advancement of Research in Mindanao (CARIM) Building, and the operational completion of the Davao City-UP Sports Complex all happened under his term. During Digal's term, the university saw its highest student population to date: 1,355 students consisting of 1,194 undergraduate and 161 graduate students. Amid these successes, his term also witnessed the COVID-19 pandemic and its effects on university operations; the university responded to the health crisis by shifting to remote learning, intensifying research and public service activities related to the pandemic, and allowing the City Government of Davao to use its Faculty & Staff Housing building and the Athletic Gymnasium in the Davao City–UP Sports Complex as isolation facilities for patients with light to moderate cases.
On 24 February 2022 Philippine Genome Center-Mindanao (PGC-Mindanao) director Lyre Anni E. Murao was named by the UP Board of Regents to succeed Digal as the sixth Chancellor of UP Mindanao during its 1368th meeting; her term will run from 1 March 2022 to 28 February 2025.
Organization
Administration
UP Mindanao is governed by the 11-member Board of Regents composed of the UP President, the Chairman of the Commission on Higher Education (CHED), the chairpersons of the Committee of Higher Education of the Senate and the House of Representatives, four regents representing the student, faculty, alumni and staff sectors, and three regents who are appointed by the President of the Philippines.
The university is directly administered by the Chancellor, assisted by the Vice Chancellors for Academic Affairs and Administration.
The University Council is composed of the chancellor, university professors, professors, associate professors, and assistant professors of the various degree-granting units of that university. The Chancellor serves as Chairperson and the University Registrar as Secretary.
The council has the power to prescribe the courses of study and rules of discipline, subject to the approval of the Board of Regents. The council is also authorized to fix the requirements for admission to any college of the university as well as those for graduation and the receiving of a degree. The council is empowered to recommend to the Board of Regents students or others to be recipients of degrees. The Council exercises disciplinary power over the students through its Chancellor or Executive Committee within the limits prescribed by the rules of discipline approved by the Board of Regents.
The Executive Committee, which counts as members the Deans of the various degree-granting units of the constituent university as well as the heads of administrative offices, also acts in an advisory capacity to the Chancellor in all matters pertaining to their offices for which they seek its advice.
The President of the university is an ex officio member of the University Council of each constituent university and presides over its meetings whenever present.
Colleges and degree-granting Units
UP Mindanao is divided into two colleges and one school: the College of the Humanities and Social Sciences (CHSS), focusing on the cultural diversity of Mindanao through the study of humanities and social sciences; the College of Science and Mathematics (CSM), studying the endemic flora and pioneering scientific development in the fields of biology, food science, applied mathematics, and computer science; and the School of Management (SOM), which undertakes research on supply chain management and agribusiness economics.
Degree-granting units
College of Humanities and Social Sciences
Department of Architecture
Architecture
Urban and Regional Planning
Department of Humanities
Communication and Media Arts
Communication theory
Communication research
Media Arts
Speech and Corporate Communication
Creative Writing
English literature
American literature
Filipino literature
Foreign languages
Japanese
Department of Human Kinetics
Physical Education
Exercise and Sports Science
UP Mindanao Varsity Teams
Football
Basketball
Volleyball
Chess
UP Mindanao Dance Ensemble
National Service Training Program
Department of Social Sciences
Anthropology
Linguistics
Psychology
Social Sciences
College of Science and Mathematics
Department of Biological Sciences and Environmental Studies
Animal Science
Biology
Crop Science
Ecology
Microbiology
Wildlife
Zoology
Department of Food Science and Chemistry
Chemistry
Food Science and Technology
Department of Mathematics, Physics and Computer Science
Applied Mathematics
Data Analytics and Data Studies
Computer Science
Applied Machine Learning and Artificial Intelligence
Physics
School of Management
Agribusiness Economics
Supply Chain Management
Campus
Even before the establishment of UPMin, Barangay Mintal and Barangay Bago Oshiro were known for their agricultural research centers. The Philippine Science High School Southern Mindanao Campus at nearby Barangay Sto. Niño and the Mindanao Science Centrum at Barangay Bago Oshiro further define the area as an "academic and research hub".
With the establishment of UPMin in Mintal, this sleepy barangay has rapidly turned into a small university town within Davao City, with businesses and establishments built around the university catering to the needs of the students and faculty. Furthermore, the UP Mindanao Land Use and Master Development Plan aims to develop UP Mindanao into a "Green University Town": a "garden" campus with emphasis on Mindanaoan culture, with each building reflecting an ethnolinguistic group in Mindanao.
Existing
Office structures
Administration Building (Admin) – houses most administrative offices, as well as the College of Humanities and Social Sciences (CHSS) and School of Management (SOM) classrooms and offices. Designed by Francisco C. Santos, Jr. after the Quezon Hall (Administration Building) of UP Diliman, and inspired by Bagobo architecture and design (notably on the façade and floor tiles).
Center for the Advancement of Research in Mindanao (CARIM) Building – office building for CARIM, formerly the Office of Research.
Human Kinetics Center (HKC) – located east of the Administration Building, originally housed the Department of Human Kinetics, classrooms for physical education classes, a dance hall used by the UP Mindanao Dance Ensemble, a table tennis hall, a gym, and a faculty room. Donated by Senator Robert Jaworski in 2003 (for which it was also called "Jaworski Hall" after its completion). Currently used by the Physical Plant Office (PPO).
Academic structures
Academic Building I (Kanluran) – houses the College of Science and Mathematics (CSM) and the DOST-SEI Regional Biotechnology Laboratory; first constructed by the University of Southeastern Philippines (USEP) and later renovated and refurbished by UP Mindanao. Aptly named "Kanluran" due to its western location on the map, and because of an urban legend surrounding the building itself.
School of Management Building Phase 1 (SOM) – houses the School of Management's (SOM) administration and staff. Also houses classrooms and a large audio-visual room.
CHSS Cultural Complex – located 800 meters from the Human Kinetics Center, the CHSS Cultural Complex serves as the college's main edifice, housing an open-air amphitheater, a mini audio-visual room, the Communication Arts laboratory, the Mindanao Studies Program (MSP) Gallery, the Architectural Heritage Gallery, a conference room, and workshop areas.
University Main Library (Main Lib) – the University Library houses the Main Library and the Office of the University Librarian, as well as a gallery, records room, audio-visual room, reading areas, internet and multimedia rooms, and business areas for the UP Press bookstore and refreshments.
Residential structures
Elias B. Lopez Residential Hall (EBL Hall, Dorm) – houses the Student Housing Services office, dormitory rooms (which house about 250 occupants), a dormitory clinic, and the Interactive Learning Center. Named after Congressman Elias B. Lopez, the so-called "Father of UP in Mindanao" for being the main proponent of the establishment of UPMin. The EBL Hall also houses the offices of the Student Housing Section of the Office of Student Affairs and the Interactive Learning Center – Learning Resource Center (ILC–LRC).
EBL Science and Technology Dormitory (Dorm Annex) – located at the CSM Complex, houses dormitory rooms for CSM students (around 50 occupants).
EBL Residential Hall Annex – new residential building with more dormitory rooms, classrooms, and office spaces.
Faculty and Staff Housing – residential building for faculty and staff with residences outside Davao City.
Kalimudan Student Center (Kalimudan) – opened in August 2010, Kalimudan houses commercial establishments, offices, and related facilities.
University Student Council (USC) House – main office of the UP Mindanao University Student Council; also used by major student organizations for meetings and forums.
Himati Office – main office of Himati, the official student publication of the University of the Philippines Mindanao.
Covered gymnasium
DFSC Sago Laboratory – research facility for food technology projects related to sago processing and sago flour production; managed by the Department of Food Science and Chemistry (DFSC) of the College of Science and Mathematics.
UPMin Infirmary – donated by the Beta Sigma Fraternity through the Betans Spirit Foundation; caters to the medical needs of UP Mindanao constituents and of the neighboring communities of Bago Oshiro and Sitio Basak.
Road Network
Mindanao Avenue – inspired by UP Diliman's University Avenue, Mindanao Avenue connects the Administration Building directly to the road network of Sitio Basak and Mintal.
Oblation Plaza – the Oblation Plaza enshrines the Oblation statue that is the iconic figure of the UP System. This is one of the pioneer projects of the UP Alumni Association-Davao Chapter through a bayanihan construction project entitled Isang Libong Alumni Para Kay Oble on 28 February 2015.
Maguindanao Road (Kanluran Road) – the largest and main road in the university; connects the Administration Building and the EBL Dorm with Kanluran. It is 1.1 kilometers long (from Admin to Kanluran).
Under development
Davao City-UP Sports Complex – the DC-UP Sports Complex has been envisioned by the City Government of Davao and UP as a regional sports facility based on international standards. It contains a training and athletic gymnasium, football stadium with oval track, an aquatic center for swimming events, training rooms and facilities, and offices and classrooms for the Department of Human Kinetics, later to be handed to the College of Human Kinetics. It is found on a 20-hectare site two kilometers away from the Administration Building.
UP Mindanao Health Center Davao City Public Hospital
Mindanao Research and Innovation Center for One Health
Center for Infectious Diseases
Philippine Genome Center–Mindanao Building
UP Mindanao Botanical Garden
Cultural parks
Central Park – greenery behind the Administration Building and Main Library
Mindanawon, Islamic, Chinese, Japanese
Convergence Park
Carillion Plaza – proposed roundabout
Religious buildings (Roman Catholic, Islam, interfaith)
Open-air amphitheater
Academic Core
College of Humanities and Social Sciences Complex
CHSS Academic Building
College of Science and Mathematics Complex
Philippine Genome Center – Mindanao (PGC–Mindanao) Center
School of Management Complex
SOM Building Phase 2
Future College of Human Kinetics
College of Human Kinetics Building
Mindanao Research and Development Center
CARIM Building Phase 2
Knowledge, Innovation, Science, and Technology Park (KIST Park) – future complex for science, technology, and engineering activities; jointly sponsored by UP Mindanao and the Kitakyushu Science and Knowledge Park.
Road and pedestrian network
Mindanao Ring – circumferential pedestrian road connecting all academic complexes, the Administration Building, and the Main Library
Cultural Walk – concrete-paved pathwalks featuring textile patterns from selected Mindanawon ethnolinguistic groups
protected walkways
bikeways
Executive Housing facilities
Commercial facilities
Retired
Ladislawa Campus – located inside Ladislawa Village, it housed the first administrative offices of UP Mindanao.
Lee Business Corner – housed the first administrative offices and classrooms of UP Mindanao.
Casa Mercado Building – located in Matina, this was used as an extension building for the administration.
Philippine Coconut Authority (PCA) Dormitory – located in Bago Oshiro, a few structures within the area were used by UP Mindanao as male-only dormitory and educational center, where the first lecture classes were held.
Old SOM Building (UP Mindanao City Campus, UP Anda) – located on F. Iñigo Street (formerly Anda Street) in downtown Davao City, this building was an old hotel rented by the former UPMin administration for its offices and an extension campus after the move from the old Ladislawa campus. It was used to hold classes for School of Management (SOM) major subjects and master's degree courses, and the UP Open University Davao office was also located there. Upon the completion of the SOM Building at the main campus, the remaining facilities were moved into the new SOM Building at Mintal. The UPOU Davao Office moved to a nearby building where the UP Alumni Association-Davao Chapter office is located.
Old Main Library Building (Main Lib) – formerly the UP Mindanao Cultural Center. This small building was the old "convenience center" of the 66th Engineering Brigade, given to UPMin when the brigade left Mintal for deployment. It housed the main book collections of the university and the School of Management (SOM) library. In 2011, due to the construction of Mindanao Avenue, the structure was demolished and its contents moved to the Interactive Learning Center-Learning Resource Center (ILC-LRC) building, the Elias B. Lopez Residential Hall, and the Kalimudan Student Center while the new University Library was being constructed.
Student life
As with the other UP units, the university is home to many student organizations that sponsor activities throughout the year, most notably during the months of August, September, December and February. These organizations also foster camaraderie and excitement in a major university that is quite distant from downtown Davao.
As in UP Diliman, UPMin has its own jeepney transportation network called "UPMin Ikot" (named after UP Diliman's own "Ikot" and "Toki" jeepney networks), supported by motorcycles called "single motorcycles" or "habal-habal", and covered tricycles called "payong-payong" or "princess". They usually ply the Admin-Kanluran, Admin-Sitio Basak, Admin-Mintal and Kanluran-Mintal routes. In 2012, a proposal was submitted to the Land Transportation Franchising and Regulatory Board (LTFRB) to create a formal jeepney route from UP Mindanao to Mintal. This route, also called "UPMin Ikot", travels from the College of Science and Mathematics (Kanluran) to the Administration Building (Admin/CHSS), Sitio Basak (stopping by the nearby dormitories and boarding houses), downtown Mintal, and finally the Holy Spirit Hospital, where the public transportation system connects the university to downtown Davao through the Calinan-Mintal-Roxas Avenue jeepney route. This was formally opened to the public on February 1, 2013, as part of the university's Foundation Month celebrations.
Special events
Through the efforts of its students and academic community, and owing in part to its distance from urban Davao, UP Mindanao has offset its relative youth as a constituent unit by striving to become a cultural melting pot for the whole Mindanao region. Most of the traditions practiced were borrowed from other constituent units but were given Mindanaoan elements. Because of this, such events draw an enormous crowd not only from the student and employee body but also from other schools and universities.
Most of the events are organized by the UP Mindanao University Student Council (USC) and student organizations through the coordination and support of the Office of Student Affairs (OSA); other events are also hosted by program-based student organizations, fraternities, and sororities.
University convocation
During the first day of classes for every academic year, the whole community gathers at the Atrium to welcome the freshmen with an acquaintance ceremony, in which members of the administration, faculty, heads of the colleges, staff and students are introduced. An invited guest speaker will then give an inspirational message for the new members of the student body. The freshman with the highest University-predicted Grade (UPG) will then give his/her message on behalf of the batch and lead them during the oath-taking. The elected University Student Council officials are also inaugurated during the program after they were elected during the previous academic year.
Torch Night
Usually done weeks after the formal opening of classes, the Torch Night involves the symbolic passing of the lit torch, symbolizing responsibility and privilege as "Iskolar ng Bayan", by the upperclassmen, represented by the bloc leaders of the earlier freshmen batch, to the current bloc leaders of the freshmen batch of each degree program. They then pledge their oath of loyalty and responsibility to the university and to their fellow schoolmates. A competition of group presentations between freshmen is also part of the event.
Freshmen Night
One of the highlights during the academic year, the Freshmen Night is held a month after the Torch Night, where all freshmen, with the help of their upperclassmen, compete in a pageant (Search for UPMin Isko and Iska) and a dance competition. In 2011, it was celebrated together with the Torch Night but was reinstated as a separate activity in 2013.
Freshmen Torch Night
In an effort to improve the experience of welcoming the new students, the Freshmen Torch Night was organized in 2011 and the following year. This event combined the fanfare of welcoming the freshmen during Torch Night and the pageantry of the Freshmen Night. The event's highlights included the passing and lighting of the degree program torches and group presentations from the freshmen (taken from the Torch Night), and the Search for the Ultimate Isko and Iska of UP Mindanao (Freshmen Night).
Dula
Dula, a Cebuano word meaning "play", is the university-wide sportsfest, preceded by the college-based sportsfests Dula-dula for CSM and SOM, and Hampang for CHSS (both words mean "play" in the Cebuano and Hiligaynon/Ilonggo languages). Held usually in the middle of the first semester of the academic year, two weeks before the moratorium, it brings students together with their teachers in the spirit of enjoyment, sportsmanship and camaraderie. The highlight of the competitions is the Cheerdance Competition, the climax of the intercollegiate rivalry between CHSS, CSM and SOM, which draws huge crowds from other universities and is commonly featured in the local media.
Deviance Day
Deviance Day is organized by the College of Humanities and Social Sciences (CHSS) under the Department of Social Sciences (DSS) and the Dalub-aral na may Ugnayan, Galing at Organisadong Ningas na Ginagabayan ng Antropolohiya (DUGONG-ANTRO) student organization. It is an adaptation of UP Cebu's "Crazy Day" and was originally a free concert hosted by the BA Social Sciences and BA Anthropology programs.
Throughout the day, participating students, faculty and staff would dress up with their "not-so-everyday" attires. Many would copy the look of their favorite movie/cartoon/book characters, some would dress up in formal attire, others try to cross-dress, some do cosplay, and a brave few would enter classes with little clothing at all. In the evening, a program is held which consists of talent shows, a presentation of the best-dressed people during the day, and a free concert, participated by musical bands within the university.
Kasadya
A Cebuano word meaning "joy or happiness," Kasadya is the highlight of the university-wide Christmas celebration that starts during December and continues through the month. It features a lantern parade made by the university constituents and other sectors in UP Mindanao, and Pasiklaban, a skit contest for students and alumni sponsored by the UP Alumni Association-Davao.
In 2010, the Christmas celebrations were officially opened with the lighting of the University Christmas Tree, constructed with lanterns from the different student organizations, at the Kalimudan Student Center, and the culminating program and lantern parade were held at Peoples' Park in Davao City. Since 2011, the Kasadya celebrations have been held inside the campus.
UP Mindanao anniversary celebrations
A month-long event, this commemorates the foundation of UP in Mindanao as a constituent university with the UP System on February 20. Cultural and civic events are conducted during this time, spearheaded by the administration. The colleges also celebrate their foundation weeks during this period (namely CHSS Week and CSM Week). The respective degree programs also hold their own events within their college's week, such as BACA Night by the BA Communication Arts program and Communicators' Guild, and CSM Night, organized by the CSM Student Council.
Residents of the Elias B. Lopez Hall also celebrate their Dorm Week during this period, with events such as workshops, talkshows, quiz bees, the Open House (where students not living inside the dormitory can enter the rooms of dormers throughout the day), competitions for the cleanest and best-decorated rooms, and the much-awaited Mistress of the Dorm, a beauty pageant of cross-dressing male dormers.
UP Fiesta is the umbrella event organized by the University Student Council, held on the first week of the anniversary. It includes guided tours of the university for invited high school students, talent shows, and a market fair participated in by several student organizations, which lasts until the following week's Orgs Fair, another event where student organizations promote their activities to other students and organizations. Other activities include foundation celebrations by several student organizations and Tatak UPMin, which consists of a beauty pageant (Mistress and Master of UPMin) of male and female students from the different student organizations cross-dressing as their alter egos, and a musical competition of bands (Battle of the Bands).
Outstanding students, faculty and staff are given recognition during Recognition Day, held on February 20, the exact date when UP Mindanao was founded in 1995.
The annual alumni homecoming held during this period has been named Panagtagbo, which means "gathering" in the Cebuano language. This event aims to gather the former students, faculty, and staff of UP Mindanao since its institution in 1995, and offers an avenue for them to reunite with their former classmates, schoolmates, professors, and the university. It also includes musical presentations by Alagad ni Oble, one of the first musical bands established in the university, and dance concerts by invited party DJs. It was only in 2013 that the event was formally included in the lineup of activities throughout the month and given a name by the alumni.
Housing
The Elias B. Lopez Hall (EBL Hall) serves as the student residence center of the university, administered by the Student Housing Section (SHS) under the Office of Student Affairs (OSA). Most residents are freshmen and upperclassmen with home residences outside Davao City. In addition, the Science and Technology Dormitory (the Dorm Annex) caters to students from the College of Science and Mathematics, especially those conducting their undergraduate theses, research works, and other related activities.
Several dormitories, apartments, and shared residences are also near the campus, located mostly at Sitio Basak and Mintal.
See also
State Universities and Colleges (Philippines)
List of University of the Philippines people
University of the Philippines Baguio
University of the Philippines Manila
University of the Philippines Los Banos
University of the Philippines Cebu
University of the Philippines Visayas
References
External links
University of the Philippines system
University of the Philippines Mindanao
University of the Philippines Mindanao YouTube page
M
State universities and colleges in the Philippines
Universities and colleges in Davao City
Research universities in the Philippines
|
52415701
|
https://en.wikipedia.org/wiki/Butterfly%20network
|
Butterfly network
|
A butterfly network is a technique to link multiple computers into a high-speed network. This form of multistage interconnection network topology can be used to connect different nodes in a multiprocessor system. Unlike other network systems, such as local area networks (LANs) or the Internet, the interconnect network for a shared-memory multiprocessor system must have low latency and high bandwidth, for three reasons:
Messages are relatively short as most messages are coherence protocol requests and responses without data.
Messages are generated frequently because each read-miss or write-miss generates messages to every node in the system to ensure coherence. Read/write misses occur when the requested data is not in the processor's cache and must be fetched either from memory or from another processor's cache.
Messages are generated frequently, therefore rendering it difficult for the processors to hide the communication delay.
Components
The major components of an interconnect network are:
Processor nodes, which consist of one or more processors along with their caches, memories and communication assist.
Switching nodes (Router), which connect communication assist of different processor nodes in a system. In multistage topologies, higher level switching nodes connect to lower level switching nodes as shown in figure 1, where switching nodes in rank 0 connect to processor nodes directly while switching nodes in rank 1 connect to switching nodes in rank 0.
Links, which are physical wires between two switching nodes. They can be uni-directional or bi-directional.
These multistage networks have lower cost than a crossbar, yet suffer less contention than a bus. The ratio of switching nodes to processor nodes is greater than one in a butterfly network. Such a topology, where the ratio of switching nodes to processor nodes is greater than one, is called an indirect topology.
The network derives its name from the connections between nodes in two adjacent ranks (as shown in figure 1), which resemble a butterfly. Merging the top and bottom ranks into a single rank creates a wrapped butterfly network. In figure 1, if the rank 3 nodes are connected back to the respective rank 0 nodes, the network becomes a wrapped butterfly network.
BBN Butterfly, a massively parallel computer built by Bolt, Beranek and Newman in the 1980s, used a butterfly interconnect network. Later, in 1990, Cray Research's Cray C90 used a butterfly network to communicate between its 16 processors and 1024 memory banks.
Butterfly network building
For a butterfly network with p processor nodes, there need to be p(log2 p + 1) switching nodes. Figure 1 shows a network with 8 processor nodes, which implies 32 switching nodes. It represents each node as N(rank, column number). For example, the node at column 6 in rank 1 is represented as (1,6) and node at column 2 in rank 0 is represented as (0,2).
For any i greater than zero, a switching node N(i,j) is connected to N(i-1, j) and N(i-1, m), where m is obtained by inverting the bit at the ith location of j, counted from the most significant bit of the column address. For example, consider the node N(1,6): i equals 1 and j equals 6, so m is obtained by inverting the first (most significant) bit of 6 (110 in binary), giving 2 (010 in binary).
As a result, the nodes connected to N(1,6) are N(0,6) and N(0,2).
Thus, N(0,6), N(1,6), N(0,2) and N(1,2) form a butterfly pattern. Several butterfly patterns exist in the figure and therefore this network is called a butterfly network.
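As an illustration, the following C sketch enumerates the down-links of every switching node in the 8-processor network of figure 1. It is not taken from any cited implementation; it assumes the "ith location" of j means the ith bit counted from the most significant end of the log2 p-bit column address, which is the reading consistent with the N(1,6) example above.

```c
#include <stdio.h>

/* Hedged sketch of the butterfly connection rule for p = 8 processor
 * columns (3-bit column addresses).  N(i, j) with i > 0 connects down to
 * N(i-1, j) and N(i-1, m), where m is j with its ith bit (counting the
 * most significant bit as bit 1) inverted -- an assumption consistent
 * with the N(1,6) example, which pairs with N(0,6) and N(0,2).
 */
static unsigned flip_ith_msb(unsigned j, unsigned i, unsigned addr_bits)
{
    return j ^ (1u << (addr_bits - i));     /* invert bit i, MSB = bit 1 */
}

int main(void)
{
    const unsigned p = 8, addr_bits = 3;

    for (unsigned i = 1; i <= addr_bits; ++i)       /* ranks 1..3   */
        for (unsigned j = 0; j < p; ++j) {          /* columns 0..7 */
            unsigned m = flip_ith_msb(j, i, addr_bits);
            printf("N(%u,%u) -> N(%u,%u), N(%u,%u)\n",
                   i, j, i - 1, j, i - 1, m);
        }
    return 0;
}
```

Running this for i = 1, j = 6 prints the pair N(0,6) and N(0,2), reproducing the example above.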
Butterfly network routing
In a wrapped butterfly network (which means rank 0 is merged with rank 3), consider a message sent from processor 5 to processor 2. In figure 2, this is shown by replicating the processor nodes below rank 3. The packet transmitted over the link takes the form: header, payload, trailer.
The header contains the destination of the message, which is processor 2 (010 in binary); the payload is the message M; and the trailer contains the checksum. Therefore, the actual packet transmitted from processor 5 is: 010 | M | checksum.
Upon reaching a switching node, one of the two output links is selected based on the most significant bit of the destination address. If that bit is zero, the left link is selected. If that bit is one, the right link is selected. Subsequently, this bit is removed from the destination address in the packet transmitted through the selected link. This is shown in figure 2.
The above packet reaches N(0,5). From the header of the packet it removes the leftmost bit to decide the direction. Since it is a zero, left link of N(0,5) (which connects to N(1,1)) gets selected. The new header is '10'.
The new packet reaches N(1,1). From the header of the packet it removes the leftmost bit to decide the direction. Since it is a one, right link of N(1,1) (which connects to N(2,3)) gets selected. The new header is '0'.
The new packet reaches N(2,3). From the header of the packet it removes the leftmost bit to decide the direction. Since it is a zero, left link of N(2,3) (which connects to N(3,2)) gets selected. The header field is empty.
Processor 2 receives the packet, which now contains only the payload 'M' and the checksum.
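A minimal C sketch of this destination-tag routing follows. It is a reconstruction from the walk-through above rather than code from any cited source, and it models crossing to the next rank as forcing the examined bit of the current column to the destination's value, which is exactly what the path N(0,5) → N(1,1) → N(2,3) → N(3,2) does.

```c
#include <stdio.h>

/* Hedged sketch: replay the destination-tag routing of the example
 * (processor 5 to processor 2) in an 8-processor wrapped butterfly.
 * At each rank the most significant remaining header bit picks the
 * left (0) or right (1) link; the next column is the current column
 * with that address bit forced to the destination's value.
 */
int main(void)
{
    const unsigned addr_bits = 3;
    unsigned src = 5, dst = 2;        /* 101 -> 010 in binary            */
    unsigned col = src;               /* current column, start at N(0,5) */

    for (unsigned rank = 0; rank < addr_bits; ++rank) {
        unsigned bit_pos = addr_bits - 1 - rank;      /* MSB first */
        unsigned dst_bit = (dst >> bit_pos) & 1u;
        printf("at N(%u,%u): header bit %u -> take %s link\n",
               rank, col, dst_bit, dst_bit ? "right" : "left");
        col = (col & ~(1u << bit_pos)) | (dst_bit << bit_pos);
    }
    printf("arrived at N(%u,%u): processor %u receives the payload\n",
           addr_bits, col, col);
    return 0;
}
```

The printed trace visits N(0,5), N(1,1), N(2,3) and arrives at N(3,2), matching the steps described above.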
Butterfly network parameters
Several parameters help evaluate a network topology. The prominent ones relevant in designing large-scale multi-processor systems are summarized below and an explanation of how they are calculated for a butterfly network with 8 processor nodes as shown in figure 1 is provided.
Bisection Bandwidth: The maximum bandwidth required to sustain communication between all nodes in the network. This can be interpreted as the minimum number of links that need to be severed to split the system into two equal portions. For example, the 8 node butterfly network can be split into two by cutting 4 links that crisscross across the middle. Thus bisection bandwidth of this particular system is 4. It is a representative measure of the bandwidth bottleneck which restricts overall communication.
Diameter: The worst case latency (between two nodes) possible in the system. It can be calculated in terms of network hops, which is the number of links a message must travel in order to reach the destination node. In the 8 node butterfly network, it appears that N(0,0) and N(3,7) are farthest away, but upon inspection, it is apparent that due to the symmetric nature of the network, traversing from any rank 0 node to any rank 3 node requires only 3 hops. Therefore, the diameter of this system is 3.
Links: Total number of links required to construct the entire network structure. This is an indicator of overall cost and complexity of implementation. The example network shown in figure 1 requires a total of 48 links (16 links each between rank 0 and 1, rank 1 and 2, rank 2 and 3).
Degree: The complexity of each router in the network. This is equal to the number of in/out links connected to each switching node. The butterfly network switching nodes have 2 input links and 2 output links, hence it is a 4-degree network.
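The following C snippet collects these figures in closed form. The formulas are a generalization inferred from the 8-node values quoted above (32 switching nodes, bisection 4, diameter 3, 48 links, degree 4) and should be read as a sketch, not a definitive derivation.

```c
#include <stdio.h>

/* Hedged sketch: closed-form butterfly parameters for p processor
 * nodes (p a power of two), generalized from the 8-node example in
 * the text.  For p = 8 this prints 32, 4, 3, 48 and 4.
 */
int main(void)
{
    unsigned p = 8;
    unsigned n = 0;
    while ((1u << n) < p)
        ++n;                                        /* n = log2(p) */

    printf("switching nodes : %u\n", p * (n + 1));  /* p(log2 p + 1)          */
    printf("bisection width : %u\n", p / 2);        /* links cut at the middle */
    printf("diameter (hops) : %u\n", n);            /* any rank 0 to rank n    */
    printf("links           : %u\n", 2 * p * n);    /* 2p links per rank pair  */
    printf("switch degree   : %u\n", 4u);           /* 2 in + 2 out            */
    return 0;
}
```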
Comparison with other network topologies
This section compares the butterfly network with linear array, ring, 2-D mesh and hypercube networks. Note that linear array can be considered as a 1-D mesh topology. Relevant parameters are compiled in the table (‘p’ represents the number of processor nodes).
Advantages
Butterfly networks have a lower diameter than other topologies such as a linear array, ring or 2-D mesh. This implies that in a butterfly network, a message sent from one processor reaches its destination in a lower number of network hops.
Butterfly networks have a higher bisection bandwidth than other topologies. This implies that in a butterfly network, a higher number of links needs to be broken in order to prevent global communication.
They can also span a larger range of computers.
Disadvantages
Butterfly networks are more complex and costlier than other topologies due to the higher number of links required to sustain the network.
The difference between hypercube and butterfly lies in their implementation. A butterfly network has a symmetric structure in which all processor nodes between two ranks are equidistant from each other, whereas a hypercube is more suitable for a multi-processor system that demands unequal distances between its nodes. Judging by the number of links required, the hypercube may appear cheaper and simpler than a butterfly network, but as the number of processor nodes goes beyond 16, the router cost and complexity (represented by degree) of the butterfly network becomes lower than that of the hypercube, because the butterfly's degree is independent of the number of nodes.
In conclusion, no single network topology is best for all scenarios. The decision is made based on factors like the number of processor nodes in the system, bandwidth-latency requirements, cost and scalability.
See also
Parallel Computing
Network Topology
Mesh networking
Sources
References
Internet architecture
Network topology
|
11974690
|
https://en.wikipedia.org/wiki/Crayon%20Physics%20Deluxe
|
Crayon Physics Deluxe
|
Crayon Physics Deluxe is a puzzle video game designed by Petri Purho and released on January 7, 2009. An early version, titled Crayon Physics, was released for Android in 2006–2007, developed by Acrodea Korea, Inc. Deluxe won the grand prize at the Independent Games Festival in 2008. It features a heavy emphasis on two-dimensional physics simulations, including gravity, mass, kinetic energy and transfer of momentum. The game includes a level editor and enables its players to download and share custom content via an online service.
Gameplay
The objective of each level in Crayon Physics Deluxe is to guide a ball from a predetermined start point so that it touches all of the stars placed on the level. The ball and nearly all objects on the screen are affected by gravity. The player cannot control the ball directly, but rather must influence the ball's movement by drawing physical objects on the screen. Depending on how the object is drawn, it becomes a rigid surface, a pivot point, a wheel or a rope, and the object can then interact with the ball by hitting it, providing a surface to roll on, dragging, carrying or launching the ball, etc. The player can also nudge the ball left or right by clicking on it, and in some levels, rockets appear and can be used as part of the solution.
The game challenges players to come up with creative solutions to each puzzle, and provides additional rewards for elegant solutions that do not rely on "brute force methods". It comes with more than seventy levels, and also features a level editor and an online Playground, where players can upload and download custom levels.
Development
Crayon Physics
Crayon Physics, the original prototype of the game, is Purho's tenth "rapid-prototype project" inspired by the rules of the Experimental Gameplay Project, and was developed in five days using resources freely available under a Creative Commons license. The game was first released for Android, developed by Acrodea Korea, Inc. On June 10, 2007, Purho announced that he would be developing a level editor to permit user-created levels, although by June 15 fans of the game had already worked out the level format and released new levels for the game. The level editor was released on June 30. Crayon Physics was built on the Simple DirectMedia Layer middleware and released as freeware.
Crayon Physics Deluxe
On October 12, 2007, Purho announced Crayon Physics Deluxe, which would feature an intuitive level editor, more levels, and a modification to the game engine to preserve the player's drawings instead of turning them into rectangles. The follow-up took a year and eight months to develop. It won the Seumas McNally Grand Prize at the Independent Games Festival in February 2008. Chris Baker of Slate Magazine also wrote that Crayon Physics Deluxe was more talked about than Gears of War 2 at the 2008 Game Developers Conference.
Microsoft Windows, Mac OS X, Linux and Android versions of this game were released along with Humble Indie Bundle for Android 4 on November 8, 2012.
Platforms
Published by Hudson Soft, Crayon Physics Deluxe was released for iOS on January 1, 2009, and in spring 2010 for the iPhone via Apple's App Store. A version for the PC was released six days later. An unofficial clone was made for the DS, but only in free-play mode and under the title Pocket Physics. A port for Windows Mobile was also made, but later pulled; it can still be downloaded unofficially. Ports for Mac and Linux were announced as available on July 27, 2011. Crayon Physics is pre-loaded on some Android devices, including the Samsung Galaxy Note 10.1.
Reception
The PC version received "generally favorable reviews" according to the review aggregation website Metacritic.
References
External links
Official website
Crayon Physics freeware prototype
Numpty Physics A free, GPL licensed clone.
Crayon Physics (Android) at Android-apk.org
2009 video games
Android (operating system) games
Creative Commons-licensed video games
Drawing video games
Indie video games
IOS games
Linux games
MacOS games
Puzzle video games
Seumas McNally Grand Prize winners
Windows games
Video games developed in Finland
Single-player video games
|
4588157
|
https://en.wikipedia.org/wiki/Bill%20Kincaid
|
Bill Kincaid
|
William S. Kincaid (born March 10, 1956) is an American computer engineer and entrepreneur notable for creating the MP3 player SoundJam MP with Jeff Robbin that was eventually bought by Apple and renamed iTunes.
Work
Robbin and Kincaid worked for Apple in the 1990s as system software engineers on Copland, an operating system project that was later abandoned. Both then left Apple; Robbin created Conflict Catcher, and Kincaid worked at a startup.
After listening to a show on the radio network NPR, Kincaid created hardware and device driver support for the Diamond Rio line of digital audio players. He then enlisted Jeff Robbin to develop the front end for an MP3-playing application they named SoundJam MP. Dave Heller completed the core team. The three chose Casady & Greene as distributor, with whom Jeff had previously worked to distribute Conflict Catcher.
The software saw early success in the Mac music player market, competing with Panic's Audion.
In early 2000 Apple was looking to purchase an MP3 player and approached both Casady & Greene (SoundJam) and Panic (Audion). Because Panic was caught up in negotiations with AOL, the meeting never took place. Turning to Casady & Greene, Apple purchased the rights to the SoundJam software in a deal covered by a two-year secrecy clause.
SoundJam MP was renamed iTunes. Jeff, Bill, and Dave became the original developers of the software. All three continue to work at Apple, with Jeff as the current lead developer of iTunes.
In his spare time, he enjoys racing. In a racing profile, he says “A buddy and I wrote Apple's iTunes software and helped develop the iPod and the Apple music store. It wouldn't have happened if I hadn't heard about MP3 on the radio on the way to a race...”
References
External links
The True Story of SoundJam
Straight Dope on the iPod's Birth
1956 births
Living people
American computer specialists
20th-century American businesspeople
|
61157504
|
https://en.wikipedia.org/wiki/Apttus
|
Apttus
|
Apttus is an American business-to-business software provider specializing in business process automation.
The company provides what it calls “middle office” solutions, utilizing artificial intelligence to optimize various financial and commercial functions, such as quote-to-cash, revenue management, and e-commerce management. Apttus’ software was originally developed to leverage the Salesforce customer relationship management platform, but it has since been integrated with Microsoft Azure and IBM Cloud as well.
In September 2018, private equity firm Thoma Bravo purchased a majority stake in Apttus. This resulted in significant turnover in the executive ranks, as Thoma Bravo installed a new CEO, CFO, Chief Legal Officer, Chief People Officer, Vice President of Finance, and Corporate Controller by the end of 2018.
History
Apttus was founded in 2006 by Kirk Krappe, Neehar Giri, and Kent Perkocha. The three co-founders reportedly developed the company from ideas written down on napkins in a laundry room. Krappe served as the company's first CEO, with Giri as Chief Solutions Officer and Perkocha as Chief Customer Officer.
The company was bootstrapped and took no outside funding until 2013, when it raised $37 million in Series A financing from a group of investors including K1 Capital, ICONIQ, and Salesforce. By the time of its 2018 buyout, Apttus had received a total of $404 million in investment capital from five rounds of fundraising, which gave the company a valuation of approximately $1.3 billion, as of September 2016.
Despite publicly discussing the likelihood of an initial public offering in 2016, Apttus never went public before being acquired by Thoma Bravo. The 2015 acquisition of Apttus rival SteelBrick by Salesforce, an early Apttus investor, was widely blamed for Apttus’ inability to complete an IPO or find a buyer at more favorable terms. Thoma Bravo took a majority stake in Apttus in September 2018. The cost of the purchase was not revealed.
Controversy
In July 2018, Krappe departed Apttus with little warning, a move later reported to have been driven by accusations of sexual assault and misrepresentations of the company's financial performance. The allegations, which became public on November 1, 2018 with the publication of a Business Insider investigative piece, highlighted a company sales retreat at the One&Only Palmilla resort near Cabo San Lucas in Mexico, during which Krappe reportedly sexually assaulted a 26-year-old female business development employee. Other allegations accused Krappe of presenting misleading data on Apttus’ size and financial health, and at the time of the report, there were “several” sexual harassment claims underway, as well as three lawsuits over the financial misrepresentation issues.
References
American companies established in 2006
Software companies established in 2006
Software companies based in the San Francisco Bay Area
Companies based in San Mateo, California
Cloud computing providers
Cloud applications
Business software companies
Financial software companies
Salesforce
2018 mergers and acquisitions
Private equity portfolio companies
Software companies of the United States
|
1303236
|
https://en.wikipedia.org/wiki/Asynchronous%20System%20Trap
|
Asynchronous System Trap
|
Asynchronous System Trap (AST) refers to a mechanism used in several computer operating systems designed by the former Digital Equipment Corporation (DEC) of Maynard, Massachusetts.
Mechanism
Various events within these systems can optionally be signalled back to user processes via the AST mechanism. These ASTs act like subroutine calls, but they are delivered asynchronously, that is, without regard to the context of the main thread. Because of this, care must be taken:
to ensure that any code shared between the main thread and the AST is designed to be reentrant, and
to ensure that any shared data is safe against corruption if modified at any time by the AST; otherwise, the data must be guarded by blocking ASTs during critical sections.
ASTs are most commonly encountered as a result of issuing QIO calls to the kernel. Completion of the I/O can be signalled by the issuance of an AST to the calling process/task. Certain runtime errors could also be signalled using the AST mechanism. Within OpenVMS, Special Kernel-Mode ASTs are used as the standard mechanism for getting relatively convenient access to a process context (including getting the process paged into physical memory as may be needed). These types of ASTs are executed at the highest possible per-process priority the next time the scheduler makes that process current, and are used among other things for retrieving process-level information (in response to a $GETJPI "getjob/process information" system call) and for performing process deletion.
The following operating systems implement ASTs:
RSX-11 (including all of the variants)
RSTS/E
OpenVMS
ASTs are roughly analogous to Unix signals. The important differences are:
There are no "signal codes" assigned to ASTs: instead of assigning a handler to a signal code and raising that code, the AST is specified directly by its address. This allows any number of ASTs to be pending at once (subject to process quotas).
ASTs never abort any system call in progress. In fact, it is possible for a process to put itself into a "hibernate" state (with the $HIBER system call), or to wait for an event flag by calling e.g. $WAITFR, whereupon it does nothing but wait for ASTs to be delivered. When an AST is delivered (triggered by an IO completion, timer, or other event), the process is temporarily taken out of the wait to execute the AST. After the AST procedure completes, the call that put the process into hibernation or the event flag wait is made again; in essence, the reason for the wait is re-evaluated. The only way to get out of this loop (apart from process deletion) is to execute a $WAKE or $SETEF system call to satisfy the wait. This can be done by the process itself by invoking $WAKE or $SETEF within the AST, or (if a global event flag is used) $SETEF within another process.
VAX/VMS V4 and later implemented an interesting optimization to the problem of synchronizing between AST-level and non-AST-level code. A system service named $SETAST could be used to disable or enable the delivery of ASTs for the current and all less-privileged access modes (the OpenVMS term for ring-based security features). However, if the critical section needing protection from ASTs was only a few instructions long, then the overhead of making the $SETAST calls could far outweigh the time to execute those instructions.
So for user mode only (the least privileged ring, normally used by ordinary user programs), a pair of bit flags was provided at a predefined user-writable memory location (in per-process "P1" space). The meanings of these two flags could be construed as "don't deliver any ASTs" and "ASTs have been disabled". Instead of the usual pair of $SETAST calls, the user-mode code would set the first flag before executing the sequence of instructions during which ASTs need to be blocked, and clear it after the sequence. Then (note the ordering here, to avoid race conditions) it would check the second flag to see if it had become set during this time: if so, then ASTs really have become disabled, and $SETAST should be called to re-enable them. In the most common case, no ASTs would have become pending during this time, so there would be no need to call $SETAST at all.
The kernel AST delivery code, for its part, would check the first flag before trying to deliver a user-mode AST; if it was set, then it would directly set the ASTs-disabled bit in the process control block (the same bit that would be set by an explicit $SETAST call from user mode), and also set the second flag, before returning and leaving the AST undelivered.
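A rough C rendering of the user-mode side of this protocol is shown below. It is purely illustrative: on a real VMS system the two flags live at a fixed, user-writable address in P1 space and the slow path invokes the actual $SETAST service, whereas here they are ordinary globals and reenable_asts_via_setast() is a hypothetical placeholder. What it captures is the set / clear / check ordering described above.

```c
#include <stdbool.h>

/* Illustrative sketch only; not VMS source.  The flags stand in for the
 * predefined P1-space bit flags, and reenable_asts_via_setast() stands
 * in for a $SETAST call that re-enables AST delivery.
 */
static volatile bool ast_block_requested = false; /* "don't deliver any ASTs"  */
static volatile bool asts_were_disabled  = false; /* "ASTs have been disabled" */

extern void reenable_asts_via_setast(void);       /* placeholder for $SETAST */

void run_ast_protected(void (*critical_section)(void))
{
    ast_block_requested = true;     /* cheap: one memory write, no system call */
    critical_section();             /* short instruction sequence to protect   */
    ast_block_requested = false;

    /* Ordering matters: clear the request first, then check whether the
     * kernel actually disabled AST delivery while we were inside.  Only
     * in that rare case do we pay for the system call to re-enable ASTs.
     */
    if (asts_were_disabled) {
        asts_were_disabled = false;
        reenable_asts_via_setast();
    }
}
```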
The asynchronous procedure call mechanism in the Windows NT family of operating systems is a similar mechanism.
References
Further reading
OpenVMS Alpha Internals and Data Structures : Scheduling and Process Control : Version 7.0, Ruth Goldenberg, Saro Saravanan, Denise Dumas,
Operating system technology
OpenVMS
|
1859508
|
https://en.wikipedia.org/wiki/Portal%20Software
|
Portal Software
|
Portal Software was founded in 1985 as Portal Information Network, one of the first Internet service providers in the San Francisco Bay Area. It was founded by John Little. The company offered its own interface through modem access that featured Internet email. Towards the end of the 1980s, the company offered FTP.
During this time, the company developed its own account management software. In 1992, John Little decided to focus on developing Portal's internal software for other ISPs, which he saw as a fast evolving market. Their ISP business was shut down and the accounts sold to Sprint. The company was renamed Portal Software in 1993 and Dave Labuda joined the new company as co-founder. Little and Labuda developed a scalable and flexible real-time enterprise software architecture, which they applied to the management of customers and revenue for internet and telecom service providers.
Portal Software developed a billing and revenue software suite (Portal Infranet) primarily targeted at telecommunications companies and ISPs. It was one of the largest companies in its business. Customers of Portal Software included PSINet, AOL Time Warner, China Mobile, Deutsche Telekom, France Télécom, iG Brazil, Juno Online Services, KPN, Orange UK, Reuters, SIRIUS Satellite Radio, Sprint Canada, Telefónica, Telenor, Telstra, TIM, U.S. Cellular, Vodafone, SaskTel and XM Satellite Radio. In order to address the telecommunications market, Portal Software acquired the InteGrate software from Solution42, a German company with a history in high-performance telecommunications rating. This made realistic performance in rating telephony usage events possible, something that was not feasible with the 'real-time' rating engine Portal had developed in-house.
Portal Software was bought by Oracle Corporation in 2006 for an estimated $220 million and is now a business unit of Oracle. Like other acquired products, Portal Software was to be integrated with Oracle's core products such as Siebel (CRM), PeopleSoft (ERP/CRM), and JD Edwards (ERP). At the time of the acquisition, Bhaskar Gorti was the company's CEO, JK Chelladurai was Managing Director of the India development center, Dave Labuda was CTO, Bruce Grainger was Vice President of the Americas, and Tim Porter was Vice President of EMEA.
References
Defunct software companies of the United States
Software companies based in the San Francisco Bay Area
Oracle acquisitions
Software companies established in 1985
1985 establishments in California
2006 mergers and acquisitions
|
44074104
|
https://en.wikipedia.org/wiki/Klaus%20Pohl%20%28computer%20scientist%29
|
Klaus Pohl (computer scientist)
|
Klaus Pohl (born 1960 as Klaus Mussgnug in Karlsruhe) is a German computer scientist and Professor for Software Systems Engineering at the University of Duisburg-Essen, mainly known for his work in requirements engineering and software product line engineering.
Life and work
Pohl studied computer science from 1984 to 1988 at the Karlsruhe University of Applied Sciences and, until 1989, information science at the University of Konstanz. He received his PhD in 1995 and his habilitation in 1999 from RWTH Aachen. In addition, he worked for several years as a software architect, software developer and consultant.
Klaus Pohl is director of paluno – The Ruhr Institute for Software Technology, and full professor for Software Systems Engineering at the Institute for Computer Science and Business Information Systems (ICB) at the University of Duisburg-Essen. He is also an associate professor at the University of Limerick, Ireland.
From 2005 to 2007 he was the founding director of Lero – The Irish Software Engineering Research Centre.
He is also a founding member of IREB e.V. (International Requirements Engineering Board), a non-profit organisation and the provider of the CPRE (Certified Professional for Requirements Engineering) certification. More than 22,000 people in more than 59 countries have passed the CPRE Foundation Level.
Pohl received several awards including the Fellow award of the German Informatics Society (GI - Gesellschaft für Informatik e.V.) in 2014.
His research interests focus on digital, connected systems, requirements engineering, service-based systems and software product line engineering.
Selected publications
Pohl is the author of several monographs and the author, co-author or editor of over 250 peer-reviewed publications.
Monographs
Klaus Pohl and Chris Rupp: Requirements Engineering Fundamentals: A Study Guide for the Certified Professional for Requirement Engineering, Rocky Nook, 2. Edition 2015; German Edition: dpunkt.verlag. 4. Edition 2015; Portuguese Edition.
Klaus Pohl: Requirements Engineering: Fundamentals, Principles, and Techniques, Springer, 2010; German Edition: dpunkt.verlag. 2. Edition 2008; Chinese Edition: 2012.
Klaus Pohl, Günter Böckle, and Frank Van Der Linden (eds.): Software product line engineering: Foundations, Principles, and Techniques. Springer, Berlin, Heidelberg, New York 2005; Japanese Edition: 2009; Chinese Edition: 2013.
Klaus Pohl: Process-centered Requirements Engineering, Advanced Software Development Series, Research Studies Press Ltd, Taunton Somerset, England, 1996.
Selected Proceedings
David Notkin, Betty H.C. Cheng and Klaus Pohl (eds.): Proceedings of the 35th International Conference on Software Engineering (ICSE '13), IEEE/ACM, 2013.
Birgit Geppert and Klaus Pohl (eds.): Proceedings of the 12th International Software Product Line Conference (SPLC 2008), Los Alamitos, IEEE, 2008.
Petri Mähönen, Klaus Pohl and Thierry Priol (eds.): Towards a Service-Based Internet. Proceedings of the 1st European Conference ServiceWave 2008, Volume 5377 of Lecture Notes in Computer Science, Berlin, Heidelberg, Springer, 2008.
Klaus Pohl, Patrick Heymans, Kyo-C. Kang and Andreas Metzger (eds.): Proceedings of the 1st International Workshop on Variability Modelling of Software-Intensive Systems (VaMoS 2007), Volume 1 of Technical Report, Lero Int. Science Centre, University of Limerick, 2007.
Eric Dubois and Klaus Pohl (eds.): Proceedings of the 18th International Conference on Advanced Information Systems Engineering (CAiSE 2006), Volume 4001 of Lecture Notes in Computer Science, Berlin, Heidelberg, Springer, 2006.
J. Henk Obbink and Klaus Pohl (eds.):Proceedings of the 9th International Conference on Software Product Line (SPLC 2005), Volume 3714 of Lecture Notes in Computer Science, Berlin, Heidelberg, Springer, 2005.
Eric Dubois and Klaus Pohl (eds.): Proceedings of the 10th Anniversary IEEE Joint International Conference on Requirements Engineering (RE 2002), Los Alamitos, IEEE, 2002.
Matthias Jarke, Klaus Pasedach and Klaus Pohl (eds.): Proceedings der Informatik '97, Informatik als Innovationsmotor: 27. Jahrestagung der Gesellschaft für Informatik Informatik Aktuell, Berlin, Heidelberg, Springer, 1997.
Klaus Pohl, Gernot Starke and Peter Peters (eds.): Proceedings of the 1st International Workshop on Requirements Engineering: Foundation of Software Quality (REFSQ'94), Volume 6 of Aachener Beiträge zur Informatik, Aachen, Verlag der Augustinus Buchhandlung, 1994.
References
External links
Google scholar
Klaus Pohl, Research Group Software Systems Engineering at University of Duisburg-Essen
Publications, Research Group Software Systems Engineering at University Duisburg-Essen
paluno – The Ruhr Institute for Software Technology
1960 births
Living people
German computer scientists
University of Konstanz alumni
RWTH Aachen University alumni
University of Duisburg-Essen faculty
Scientists from Karlsruhe
|
23159366
|
https://en.wikipedia.org/wiki/List%20of%20patent%20claim%20types
|
List of patent claim types
|
This is a list of special types of claims that may be found in a patent or patent application. For explanations about independent and dependent claims and about the different categories of claims, i.e. product or apparatus claims (claims referring to a physical entity), and process, method or use claims (claims referring to an activity), see Claim (patent), section "Basic types and categories".
Beauregard
In United States patent law, a Beauregard claim is a claim to a computer program written in the form of a claim to an article of manufacture: a computer-readable medium on which are encoded, typically, instructions for carrying out a process. This type of claim is named after the 1995 decision In re Beauregard. The computer-readable medium that these claims contemplate is typically a floppy disk or CD-ROM, which is why this type of claim is sometimes called a "floppy disk" claim. In the past, claims to pure instructions were generally considered not patentable because they were viewed as "printed matter," that is, like a set of instructions written down on paper. However, in In re Beauregard the Federal Circuit remanded to the PTO for reconsideration the question of the patent-eligibility of a claim to a computer program encoded on a floppy disk, regarded as an article of manufacture. Consequently, such computer-readable media claims are commonly referred to as Beauregard claims.
When first used in the mid-1990s, Beauregard claims held an uncertain status, as long-standing doctrine held that media that contained merely "non-functional" data (i.e., data that did not interact with the substrate on which it was printed) could not be patented. This was the "printed matter" doctrine which ruled that no "invention" that primarily constituted printed words on a page or other information, as such, could be patented. The case from which this claim style derives its name, In re Beauregard (1995), involved a dispute between a patent applicant who claimed an invention in this fashion, and the PTO, which rejected it under this rationale. The appellate court (the United States Court of Appeals for the Federal Circuit) accepted the applicant's appeal - but chose to remand for reconsideration (rather than affirmatively ruling on it) when the Commissioner of Patents essentially conceded and abandoned the agency's earlier position. Thus, the courts have not expressly ruled on the acceptability of the Beauregard claim style, but its legal status was for a time accepted.
However, although time has rendered the issue essentially moot with regard to conventional media, such claims were originally and perhaps still can be more widely applied. The particular inventions to which Beauregard-style claims were originally directed—i.e., programs encoded on tangible computer-readable media (CD-ROMs, DVD-ROMs, etc.)—are no longer as important commercially, because software deployment is rapidly shifting from tangible computer-readable media to network-transfer distribution (Internet delivery). Thus, Beauregard-style claims are now less commonly drafted and prosecuted. However, electronic distribution was practiced even during the time when the Beauregard case was decided and patent drafters therefore soon tailored their claimed "computer readable medium" to encompass more than just floppy disks, ROMs, or other stable storage media, by extending the concept to information encoded on a carrier wave (such as radio) or transmitted over the Internet.
Two important developments have occurred since the mid and late 1990s, which have impacted the form or viability of Beauregard claims. First, in In re Nuijten, the Federal Circuit held that signals were not patent eligible, because their ephemeral nature kept them from falling within the statutory categories of 35 U.S.C. § 101, such as articles of manufacture. Practice accordingly evolved to recite Beauregard claim matter as being stored on "non-transitory" computer-readable media.
Second, the decisions of the Supreme Court leading up to Alice Corp. v. CLS Bank International appeared to exclude what amounted to a patent on information from the patent system. In CyberSource Corp. v. Retail Decisions Inc., the Federal Circuit first held a method for detecting credit card fraud patent ineligible and then held a corresponding Beauregard claim similarly patent ineligible because it too simply claimed a "mere manipulation or reorganization of data." After the Cybersource decision, the Supreme Court's decision in the Alice case made the status of some Beauregard claims even more uncertain. If the underlying method claim is not patent-eligible, recasting the claim in Beauregard format will not improve its patent eligibility.
Claims of this type have been allowed by the European Patent Office (EPO). However, a more general claim form of "a computer program for instructing a computer to perform the method of [allowable method claim]" is allowed, and no specific medium needs to be specified.
The UK Patent Office (aka IPO) began to allow computer program claims following this revised EPO practice, but then began to refuse them in 2006 after the decision of Aerotel/Macrossan. The UK High Court overruled this practice by decision, so that now they are again allowable in the UK as well, as they have been continuously at the EPO.
Exhausted combination
In United States patent law, an exhausted-combination claim is a claim (usually a machine claim) in which a novel device is combined with conventional elements in a conventional manner.
An example would be a claim to a conventional disc drive with a novel motor, to a personal computer (PC) containing a novel microprocessor, or to a conventional grease gun having a new kind of nozzle. In Lincoln Engineering, the inventor invented a new and improved coupling device to attach a nozzle to a grease gun. The patent, however, claimed the whole combination of grease gun, nozzle, and coupling. The Supreme Court stated that "the improvement of one part of an old combination gives no right to claim that improvement in combination with other old parts which perform no new function in the combination." It then concluded that the inventor's "effort, by the use of a combination claim, to extend the monopoly of his invention of an improved form of chuck or coupler to old parts or elements having no new function when operated in connection with the coupler renders the claim void." A far-fetched example, but one that illustrates the principle, would be a claim to an automobile containing a novel brake pedal.
The Federal Circuit held in 1984 that the doctrine of exhausted combination is outdated and no longer reflects the law (see also In re Bernhardt, 417 F.2d 1395 (Ct. Cus. & Pat. App. 1969)). In its 2008 decision in Quanta Computer, Inc. v. LG Electronics, Inc., however, the Supreme Court seems to have assumed without any discussion that its old precedents are still in force, at least for purposes of the exhaustion doctrine.
Exhausted combination claims can have practical significance in at least two contexts. First is royalties. On the one hand, there is the possibility of the royalty base being inflated (since a car containing a novel brake pedal sells for more than the brake pedal itself would) or creating at least an opportunity to levy royalties at more than one level of distribution (an issue in the Quanta case). On the other hand, the royalties may not be just without a proper combination of elements. A second context is that of statutory subject matter under the machine-or-transformation test. By embedding a claim that does not satisfy the machine-or-transformation test in a combination with other equipment, it may make it possible to at least appear to satisfy that test.
Functional
A functional claim expresses a technical feature in functional terms such as "means for converting a digital electric signal into an analog electric signal". Similar language may be used to describe the steps of a method invention ("step for converting... step for storing...").
Functional claims are governed by the various statutes and laws of the country or countries in which a patent application is filed.
United States
In the U.S., functional claims, commonly known as "means-plus-function" or "step-plus-function" claims, are governed by various federal statutes, including 35 U.S.C. 112, paragraph 6, which reads: "An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof."
Extensive analysis has been done on interpretation of the scope and requirements of the means- and step-plus-function claim style.
Despite the complexity of this area of law, which includes many ambiguous and logically contradictory opinions and tests, many practitioners and patent applicants still use this claim style in an independent claim if the specification supports such means-plus-function language.
Jepson
In United States patent law, a Jepson claim is a method or product claim where one or more limitations are specifically identified as a point of novelty, distinguishable over at least the contents of the preamble. They may read, for instance, "A system for storing information having (...), wherein the improvement comprises:". The claim is named after the case Ex parte Jepson, 243 Off. Gaz. Pat. Off. 525 (Ass't Comm'r Pat. 1917). They are similar to the "two-part form" of claim used in European practice.
In a crowded art, a Jepson claim can be useful in calling the examiner's attention to a point of novelty of an invention without requiring the applicant to present arguments and possibly amendments to communicate the point of novelty to the Examiner. Such arguments and amendments can be damaging in future litigation, for example as in Festo.
On the other hand, the claim style plainly and broadly admits that the subject matter described in the preamble is prior art, thereby facilitating the examiner's (or an accused infringer's) argument that the improvement is obvious in light of the admitted prior art. Prosecutors and applicants are hesitant to admit anything as prior art for this reason, and so this claim style is seldom used in modern practice in the U.S.
Markush
Mainly but not exclusively used in chemistry, a Markush claim or structure is a claim with multiple "functionally equivalent" chemical entities allowed in one or more parts of the compound. According to "Patent Law for the Nonlawyer" (Burton A. Amernick; 2nd edition, 1991),
"In claims that recite... components of compositions, it is sometimes important to claim, as alternatives, a group of constituents that are considered equivalent for the purposes of the invention.... It has been permissible to claim such an artificial group, referred to as a 'Markush Group,' ever since the inventor in the first case... won the right to do so."
If a compound being patented includes several Markush groups, the number of possible compounds it covers could be vast. No patent databases generate all possible permutations and index them separately. Patent searchers have the problem, when searching for specific chemicals in patents, of trying to find all patents with Markush structures that would include their chemicals, even though these patents' indexing would not include the suitable specific compounds. Databases enabling such searching of chemical substructures are indispensable.
Markush claims were named after Eugene Markush, the first inventor to use them successfully in a U.S. patent (see, e.g., U.S. Patent Nos. 1,506,316, 1,982,681, 1,986,276, and 2,014,143), in the 1920s to 1940s. See Ex parte Markush.
According to the USPTO, the proper format for a Markush-type claim is: "selected from the group consisting of A, B and C."
In August 2007, the USPTO unsuccessfully proposed a number of changes to the use of Markush-type claims.
Omnibus
A so-called omnibus claim is a claim including a reference to the description or the drawings without stating explicitly any technical features of the product or process claimed. For instance, an omnibus claim may read as "The invention substantially as herein described", "Apparatus as described in the description" or "An x as shown in Figure y".
European Patent Organisation
Omnibus claims are allowed under the European Patent Convention (EPC), but only "when they are absolutely necessary" (T 0150/82 (Claim Categories) of 7 February 1984, O.J. EPO 1984, 309).
United Kingdom
Although the United Kingdom had formerly allowed omnibus claims, in April 2017 the UK changed its rules so as to disallow omnibus claims in most cases, falling in line with the EPC. However, granted UK patents with omnibus claims remained valid.
The 1948 House of Lords case of Raleigh v. Miller is of interest in that every claim except the omnibus claim was held to be invalid, but that the latter was valid and had been infringed.
United States
Under U.S. patent law, omnibus claims are categorically disallowed in utility patents, and examiners are advised to reject them as failing to "particularly point out and distinctly claim the subject matter which the applicant regards as his invention" as required by 35 USC 112, paragraph 2. By contrast, design patents and plant patents are required to have exactly one claim, which is in omnibus form ("the ornamental design as shown in Figure 1.").
Product-by-process
A product-by-process claim is a claim directed to a product where the product is defined by its process of preparation, especially in the chemical and pharmaceutical industries. They may read for instance "Product obtained by the process of claim X," "Product made by the steps of . . .," and the like.
According to European practice, they should be interpreted as meaning "Product obtainable by the process of claim ...". They are only allowable if the product itself is patentable as such, and if the product cannot be defined in a sufficient manner on its own, i.e. with reference to its composition, structure or other testable parameters, and thus without any reference to the process.
The protection conferred by product-by-process claims should not be confused with the protection conferred to products by pure process claims, when the products are directly obtained by the claimed process of manufacture.
In the U.S., the Patent and Trademark Office practice is to allow product-by-process claims even for products that can be sufficiently described with structure elements. However, since the Federal Circuit's 2009 decision in Abbott Labs. v. Sandoz, Inc., 566 F.3d 1282, 1300 (Fed. Cir. 2009), such claims are disadvantageous compared to "true product" claims. To prove infringement of a product-by-process claim under Abbott, a patentee must show that a product meets both the product and the process elements of a product-by-process claim. To invalidate a product-by-process claim, however, an accused infringer need only show that the product elements, not the process elements, were present in the prior art. This can be compared to a "true product" claim, where all limitations must be proven to invalidate the claim.
Programmed computer
A programmed computer claim is one in the form "a general-purpose digital computer programmed to carry out [such-and-such steps]", where the steps are those of a method, such as a method to calculate an alarm limit or a method to convert BCD numbers to pure binary numbers. The purpose of the claim is to try to avoid case law holding certain types of method to be patent-ineligible. The theory of such claims is based on "the legal doctrine that a new program makes an old general purpose digital computer into a new and different machine." The argument against the validity of such claims is that putting a new piano roll into an old player piano does not convert the latter into a new machine. See Piano roll blues.
Reach-through
A reach-through claim is one that attempts to cover the basic research of an invention or discovery. It is an "attempt to capture the value of a discovery before it may be a full invention." Specifically, a reach-through claim is one in which "claims for products or uses for products when experimental data is provided for screening methods or tools for the identification of such products."
A reach-through claim can be thought of as an exception to the general rule about claims.
An example of a denied claim was when the United States Federal Circuit refused to recognize a reach-through claim for Celebrex.
Signal
A signal claim is a claim for an electromagnetic signal that can, for example, embody information that can be used to accomplish a desired result or serve some other useful objective. One claim of this style might read: "An electromagnetic signal carrying computer-readable instructions for performing a novel method... "
In the United States, transitory signal claims are no longer statutory subject matter under In re Nuijten. A petition for rehearing en banc by the full Federal Circuit was denied in February 2008, and a petition for certiorari to the Supreme Court was denied the following October.
In contrast to the situation in the U.S., the European Patent Office's Board of Appeal 3.4.01 held in its decision T 533/09 that the European Patent Convention did not as such exclude the patentability of signals, so that signals could be claimed. The Board acknowledged that a signal was neither a product nor a process, but could fall under the definition of "physical entity" in the sense of Enlarged Board of Appeal decision G 2/88.
Swiss-type
In Europe, a Swiss-type claim or "Swiss type of use claim" is a formerly used claim format intended to cover the first, second or subsequent medical use (or indication of efficacy) of a known substance or composition.
Consider a chemical compound which is known generally, and the compound is known to have a medical use (e.g. in treating headaches). If it is later found to have a second medical use (such as combating hair loss), the discoverer of this property will want to protect that new use by obtaining a patent for it.
However, the compound itself is known and thus could not be patented; it would lack novelty under EPC Article 54 (at least before the entry into force of the EPC 2000). Nor could the general concept of a medical formulation including this compound, which is known from the first medical use and thus also lacks novelty under EPC Article 54. Only the particular method of treatment is new. However, methods for treatment of the human body are not patentable under European patent law. The Enlarged Board of Appeal of the European Patent Office solved this by allowing claims to protect the "Use of substance X in the manufacture of a medicament for the treatment of condition Y". This fulfilled the letter of the law (it claimed the manufacture, not the medical treatment), and satisfied the EPO and applicants, in particular the pharmaceutical industry. Meanwhile, in view of the revised provisions introduced by the EPC 2000, the EPO Enlarged Board of Appeal issued its decision G 2/08 on 19 February 2010 and decided on that occasion that applicants may no longer claim second medical use inventions in the Swiss format.
In certain countries, including New Zealand, the Philippines, and Canada, methods of medical treatment are not patentable either (see MOPOP section 12.04.02), however "Swiss-type claims" are allowed (see MOPOP section 12.06.08).
Notes
References
Patent law lists
|
20705008
|
https://en.wikipedia.org/wiki/Encrypted%20Title%20Key
|
Encrypted Title Key
|
Encrypted Title Key is an encrypted key used in the Advanced Access Content System (AACS) copy-protection scheme. The key is part of the Media Key Block system and is an important element of the content protection process for Blu-ray and HD DVD content.
What is it used for?
The main objective of the Encrypted Title Key is to reinforce the security of the disc's content during the decryption process. The content stored on media such as Blu-ray Discs or HD DVDs is composed of, and divided into, information units called Titles. The owner of the protected content divides this information into one or more Titles and also provides the licensed player with a series of rules, called Usage Rules, which are used later on when decrypting the disc's information.
To protect the content, the information units are encrypted using encryption keys called Title Keys. For additional security, and so that unlicensed players cannot obtain them, the Title Keys are themselves encrypted, producing the Encrypted Title Keys.
The licensed replicator shall select a secret, random Title Key for each Title to be protected. Each Title Key shall be used to encrypt the content of its corresponding Title, as specified for each supported content format elsewhere in this specification. At the replicator’s discretion, a given Title may be encrypted using the same Title Key for all instances of pre-recorded media, or different Title Keys may be used for different instances.
Decryption Procedure
Before a licensed player can read a disc's content, several decryption steps must be carried out. Each disc carries a volume identifier called the Volume ID (VID), the Encrypted Title Keys, and a Media Key Block (MKB).
Players have a set of keys, specific to each model, called Device Keys, which are granted by the AACS licensing organization. At playback time, one of these keys is used to process the MKB contained on the disc, and as a result of this process the Media Key is obtained.
The Media Key is then combined with the Volume ID to derive the Volume Unique Key (Kvu). Kvu is used to decrypt the Encrypted Title Key, yielding the Title Key needed to decrypt and play back the disc's content.
The Encrypted Title Key is computed according to the following formula:
AES-128E(Kvu, Kt ⊕ Nonce ⊕ AES_H(Volume ID || title_id))
That is, the Title Key (Kt) is XORed with a nonce and with a hash (AES_H) of the Volume ID concatenated with the title identifier, and the result is encrypted with AES-128 under the Volume Unique Key (Kvu), which is itself derived from the Media Key and the Volume ID.
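The following Python sketch, using the third-party cryptography package, illustrates the general shape of this key chain. It is a simplified illustration only, not the real AACS primitives: the actual scheme uses AACS-specific AES-G and AES-H constructions, subset-difference processing of the MKB, and precisely defined byte layouts, and all key values and helper names below are hypothetical.

# Simplified illustration of the AACS-style key chain (NOT the real AACS algorithms).
# Requires the third-party "cryptography" package; all key material below is hypothetical.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes128_ecb_encrypt(key: bytes, data: bytes) -> bytes:
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(data) + encryptor.finalize()

def aes128_ecb_decrypt(key: bytes, data: bytes) -> bytes:
    decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    return decryptor.update(data) + decryptor.finalize()

def xor16(a: bytes, b: bytes) -> bytes:
    # XOR two 16-byte blocks.
    return bytes(x ^ y for x, y in zip(a, b))

def recover_media_key(device_key: bytes, mkb_record: bytes) -> bytes:
    # Step 1: a licensed player's Device Key processes the Media Key Block to recover the Media Key.
    # (Real MKB processing is a broadcast-encryption tree walk; a single decryption stands in for it here.)
    return aes128_ecb_decrypt(device_key, mkb_record)

def derive_volume_unique_key(media_key: bytes, volume_id: bytes) -> bytes:
    # Step 2: the Media Key is combined with the disc's Volume ID to derive the Volume Unique Key (Kvu).
    # (AACS uses a one-way AES-G function; AES encryption plus XOR is used as a stand-in.)
    return xor16(aes128_ecb_encrypt(media_key, volume_id), volume_id)

def recover_title_key(kvu: bytes, encrypted_title_key: bytes, mask: bytes) -> bytes:
    # Step 3: Kvu decrypts the Encrypted Title Key and the masking terms from the formula
    # (nonce XOR hash of Volume ID and title_id) are removed, yielding the Title Key Kt.
    return xor16(aes128_ecb_decrypt(kvu, encrypted_title_key), mask)

if __name__ == "__main__":
    # Entirely hypothetical 128-bit values, for illustration only.
    device_key = bytes.fromhex("00112233445566778899aabbccddeeff")
    mkb_record = bytes(16)
    volume_id = bytes.fromhex("0f" * 16)
    media_key = recover_media_key(device_key, mkb_record)
    kvu = derive_volume_unique_key(media_key, volume_id)
    print("Derived (illustrative) Kvu:", kvu.hex())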
Decryption Problems
Although the process of updating all the Title Keys for an application usually takes a very small amount of time (much less than a second), it is a critical time. If the device were to fail during the re-encryption process, the user's content might be lost. To reduce the risk of user loss, recording devices shall begin the re-encryption process by renaming the old MKB to a temporary name before writing the new MKB. When the device completes the re-encryption process, it shall delete the temporary MKB. If any recorder discovers a temporary MKB on a piece of media, it is an indication that the encrypted Title Keys might be corrupted. The device shall perform one of the following protocols to recover the corrupted encrypted Title Keys; which protocol is chosen depends on where the encrypted Title Keys are stored in the particular application. A device re-encrypting Title Keys as a normal result of updating a recordable MKB shall also use these same protocols.
These protocols are:
- Recovery protocol when the Encrypted Title Keys are in a separate file: in this case, the original recording device shall rename the old encrypted Title Key file to a defined temporary name before beginning to write the new encrypted Title Key file (a minimal sketch of this rename-before-rewrite pattern follows the list).
- Recovery protocol when the Encrypted Title Keys are in the content file: in the extreme case, each content file contains its own encrypted Title Key, and it is then unlikely that there is a temporary version of the encrypted Title Keys.
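As a rough illustration of the separate-file variant, the sketch below shows the rename-before-rewrite pattern in Python. The file names, directory layout, and function names are hypothetical and are not taken from the AACS specification; the point is only that a leftover temporary file signals an interrupted re-encryption.

# Hedged sketch of the "separate file" recovery pattern described above.
# File names and directory layout are hypothetical, not taken from the AACS specification.
import os

TITLE_KEY_FILE = "title_keys.bin"        # hypothetical encrypted Title Key file
TEMP_TITLE_KEY_FILE = "title_keys.tmp"   # temporary name used during re-encryption

def reencrypt_title_keys(media_path: str, new_encrypted_keys: bytes) -> None:
    """Write new encrypted Title Keys so that an interruption leaves evidence behind."""
    current = os.path.join(media_path, TITLE_KEY_FILE)
    backup = os.path.join(media_path, TEMP_TITLE_KEY_FILE)
    # 1. Rename the old encrypted Title Key file to a temporary name before writing the new one.
    os.replace(current, backup)
    # 2. Write the freshly re-encrypted Title Keys under the normal name.
    with open(current, "wb") as f:
        f.write(new_encrypted_keys)
    # 3. Only after the new file is safely written is the temporary copy deleted.
    os.remove(backup)

def needs_recovery(media_path: str) -> bool:
    """A leftover temporary file indicates the re-encryption was interrupted mid-way."""
    return os.path.exists(os.path.join(media_path, TEMP_TITLE_KEY_FILE))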
Where is it located?
The Encrypted Title Keys are located on the Blu-ray Discs and HD DVDs whose content is to be played back by a licensed player.
The information stored on the discs is divided into three different areas: a read/write area, a read-only area, and a protected area.
The Encrypted Title Keys are found in the read/write area together with the Media Key Block, the Usage Rules, and the encrypted content.
Sources
Introduction and Common Cryptographic Elements Rev 0.91
AACS Technical Overview 7/2004
References
External links
AACS web page
AACS Users Guide
Advanced Access Content System
|
9620
|
https://en.wikipedia.org/wiki/Education%20reform
|
Education reform
|
Education reform is the name given to the goal of changing public education. The meaning and education methods have changed through debates over what content or experiences result in an educated individual or an educated society. Historically, the motivations for reform have not reflected the current needs of society. A consistent theme of reform includes the idea that large systematic changes to educational standards will produce social returns in citizens' health, wealth, and well-being.
As part of the broader social and political processes, the term education reform refers to the chronology of significant, systematic revisions made to amend the educational legislation, standards, methodology, and policy affecting a nation's public school system to reflect the needs and values of contemporary society.
Before the late 18th century, classical education instruction from an in-home personal tutor, hired at the family's expense, was primarily a privilege for children from wealthy families. Innovations such as encyclopedias, public libraries, and grammar schools all aimed to relieve some of the financial burden associated with the expenses of the classical education model. Motivations during the Victorian era emphasized the importance of self-improvement. Victorian education focused on teaching commercially valuable topics, such as modern languages and mathematics, rather than classical liberal arts subjects, such as Latin, art, and history.
Education reformers like Horace Mann and his proponents focused on making schooling more accessible and developing a robust state-supported common school system. John Dewey, an early 20th-century reformer, focused on improving society by advocating for a scientific, pragmatic, or democratic principle-based curriculum, whereas Maria Montessori incorporated humanistic motivations to "meet the needs of the child". In historic Prussia, a motivation to foster national unity led to formal education concentrated on teaching national-language literacy to young children, resulting in the kindergarten.
The history of educational pedagogy in the United States has ranged from teaching literacy and proficiency of religious doctrine to establishing cultural literacy, assimilating immigrants into a democratic society, producing a skilled labor force for the industrialized workplace, preparing students for careers, and competing in a global marketplace. Education inequality is also a motivation for education reform, seeking to address problems of a community.
Motivations for education reform
Education reform, in general, implies a continual effort to modify and improve the institution of education. Over time, as the needs and values of society change, attitudes towards public education change. As a social institution, education plays an integral role in the process of socialization. "Socialization is broadly composed of distinct inter- and intra-generational processes. Both involve the harmonization of an individual's attitudes and behaviors with that of their socio-cultural milieu." Educational matrices mean to reinforce those socially acceptable informal and formal norms, values, and beliefs that individuals need to learn in order to be accepted as good, functioning, and productive members of their society. Education reform is the process of constantly renegotiating and restructuring the educational standards to reflect the ever-evolving contemporary ideals of social, economic, and political culture. Reforms can be based on bringing education into alignment with a society's core values. Reforms that attempt to change a society's core values can connect alternative education initiatives with a network of other alternative institutions.
Education reform has been pursued for a variety of specific reasons, but generally most reforms aim at redressing some societal ills, such as poverty-, gender-, or class-based inequities, or perceived ineffectiveness. Current education trends in the United States represent multiple achievement gaps across ethnicities, income levels, and geographies. As McKinsey and Company reported in a 2009 analysis, "These educational gaps impose on the United States the economic equivalent of a permanent national recession." Reforms are usually proposed by thinkers who aim to redress societal ills or institute societal changes, most often through a change in the education of the members of a class of people—the preparation of a ruling class to rule or a working class to work, the social hygiene of a lower or immigrant class, the preparation of citizens in a democracy or republic, etc. The idea that all children should be provided with a high level of education is a relatively recent idea, and has arisen largely in the context of Western democracy in the 20th century.
The "beliefs" of school districts are optimistic that quite literally "all students will succeed", which in the context of high school graduation examination in the United States, all students in all groups, regardless of heritage or income will pass tests that in the introduction typically fall beyond the ability of all but the top 20 to 30 percent of students. The claims clearly renounce historical research that shows that all ethnic and income groups score differently on all standardized tests and standards based assessments and that students will achieve on a bell curve. Instead, education officials across the world believe that by setting clear, achievable, higher standards, aligning the curriculum, and assessing outcomes, learning can be increased for all students, and more students can succeed than the 50 percent who are defined to be above or below grade level by norm referenced standards.
States have tried to use state schools to increase state power, especially to make better soldiers and workers. This strategy was first adopted to unify related linguistic groups in Europe, including France, Germany and Italy. Exact mechanisms are unclear, but it often fails in areas where populations are culturally segregated, as when the U.S. Indian school service failed to suppress Lakota and Navaho, or when a culture has widely respected autonomous cultural institutions, as when the Spanish failed to suppress Catalan.
Many students of democracy have desired to improve education in order to improve the quality of governance in democratic societies; the necessity of good public education follows logically if one believes that the quality of democratic governance depends on the ability of citizens to make informed, intelligent choices, and that education can improve these abilities.
Politically motivated educational reforms of the democratic type are recorded as far back as Plato in The Republic. In the United States, this lineage of democratic education reform was continued by Thomas Jefferson, who advocated ambitious reforms partly along Platonic lines for public schooling in Virginia.
Another motivation for reform is the desire to address socio-economic problems, which many people see as having significant roots in lack of education. Starting in the 20th century, people have attempted to argue that small improvements in education can have large returns in such areas as health, wealth and well-being. For example, in Kerala, India in the 1950s, increases in women's health were correlated with increases in female literacy rates. In Iran, increased primary education was correlated with increased farming efficiencies and income. In both cases some researchers have interpreted these correlations as representing an underlying causal relationship: education causes socio-economic benefits. In the case of Iran, researchers concluded that the improvements were due to farmers gaining reliable access to national crop prices and scientific farming information.
History of Education Reform
Classical Education
As taught from the 18th to the 19th century, Western classical education curriculums focused on concrete details like "Who?", "What?", "When?", "Where?". Unless carefully taught, large group instruction naturally neglects asking the theoretical "Why?" and "Which?" questions that can be discussed in smaller groups.
Classical education in this period also did not teach local (vernacular) languages and culture. Instead, it taught high-status ancient languages (Greek and Latin) and their cultures. This produced odd social effects in which an intellectual class might be more loyal to ancient cultures and institutions than to their native vernacular languages and their actual governing authorities.
18th Century Reform
Child-Study
Jean-Jacques Rousseau, father of the Child Study Movement, centered the child as an object of study.
In Emile: Or, On Education, Rousseau's principal work on education lays out an educational program for a hypothetical newborn's education through adulthood.
Rousseau provided a dual critique of the educational vision outlined in Plato's Republic and of that of his contemporary European society. He regarded educational methods as contributing to the child's development, and he held that a person could be either a man or a citizen. While Plato's plan could have brought the latter at the expense of the former, contemporary education failed at both tasks. He advocated a radical withdrawal of the child from society and an educational process that utilized the child's natural potential and curiosity, teaching the child by confronting them with simulated real-life obstacles and conditioning the child through experience rather than intellectual instruction.
Rousseau's ideas were rarely implemented directly, but they influenced later thinkers, particularly Johann Heinrich Pestalozzi and Friedrich Wilhelm August Fröbel, the inventor of the kindergarten.
National Identity
European and Asian nations regard education as essential to maintaining national, cultural, and linguistic unity. In the late 18th century (~1779), Prussia instituted primary school reforms expressly to teach a unified version of the national language, "Hochdeutsch".
One significant reform was kindergarten, whose purpose was to have children participate in supervised activities taught by instructors who spoke the national language. The concept embraced the idea that children absorb new language skills more easily and quickly when they are young.
The current model of kindergarten is reflective of the Prussian model.
In other countries, such as the Soviet Union, France, Spain, and Germany, the Prussian model has dramatically improved reading and math test scores for linguistic minorities.
19th Century - England
In the 19th century, before the advent of government-funded public schools, Protestant organizations established Charity Schools to educate the lower social classes. The Roman Catholic Church and governments later adopted the model.
Designed to be inexpensive, Charity schools operated on minimal budgets and strove to serve as many needy children as possible. This led to the development of grammar schools, which primarily focused on teaching literacy, grammar, and bookkeeping skills so that the students could use books as an inexpensive resource to continue their education. Grammar was the first third of the then-prevalent system of classical education.
Educators Joseph Lancaster and Andrew Bell developed the monitorial system, also known as "mutual instruction" or the "Bell–Lancaster method". Their contemporary, educationalist and writer Elizabeth Hamilton, suggested that in some important aspects the method had been "anticipated" by the Belfast schoolmaster David Manson. In the 1760s Manson had developed a peer-teaching and monitoring system within the context of what he called a "play school" that dispensed with "the discipline of the rod". (More radically, Manson proposed the "liberty of each [child] to take the quantity [of lessons] agreeable to his inclination").
Lancaster, an impoverished Quaker in early 19th-century London, and Bell, at the Madras School in India, developed this model independently of one another. By design, their model uses more advanced students as a resource to teach the less advanced students, achieving student-teacher ratios as small as 1:2 and educating more than 1000 students per adult. The lack of adult supervision at the Lancaster school resulted in the older children acting as disciplinary monitors and taskmasters.
To provide order and promote discipline, the school implemented a unique internal economic system, inventing a currency called scrip. Although the currency was worthless in the outside world, it was created at a fixed exchange rate from a student's tuition, and students could use scrip to buy food, school supplies, books, and other items from the school store. Students could earn scrip through tutoring. To promote discipline, the school adopted a work-study model: every job in the school was bid for by students, with the largest bid winning, and any student tutor could auction positions in his or her classes to earn scrip. The bids for student jobs paid for the adult supervision.
Lancaster promoted his system in a piece called Improvements in Education that spread widely throughout the English-speaking world. Lancaster schools provided a grammar-school education with fully developed internal economies for a cost per student of about $40 per year in 1999 U.S. dollars. To reduce costs, and motivated to save up scrip, Lancaster students rented individual pages of textbooks from the school library instead of purchasing them, and would read their pages aloud to groups. Students commonly exchanged tutoring and paid for items and services with receipts earned from tutoring.
The schools did not teach submission to orthodox Christian beliefs or government authorities. As a result, most English-speaking countries developed mandatory publicly paid education explicitly to keep public education in "responsible" hands. These elites said that Lancaster schools might become dishonest, provide poor education, and were not accountable to established authorities. Lancaster's supporters responded that any child could cheat given the opportunity, and that the government was not paying for the education and thus deserved no say in their composition.
Though motivated by charity, Lancaster claimed in his pamphlets to be surprised to find that he lived well on the income of his school, even while the low costs made it available to the most impoverished street children. Ironically, Lancaster lived on the charity of friends in his later life.
Modern Reformist
Although educational reform occurred on a local level at various points throughout history, the modern notion of education reform is tied with the spread of compulsory education. Economic growth and the spread of democracy raised the value of education and increased the importance of ensuring that all children and adults have access to free, high-quality, effective education. Modern education reforms are increasingly driven by a growing understanding of what works in education and how to go about successfully improving teaching and learning in schools. However, in some cases, the reformers' goal of "high-quality education" has meant "high-intensity education", with a narrow emphasis on teaching individual, test-friendly subskills quickly, regardless of long-term outcomes, developmental appropriateness, or broader educational goals.
Horace Mann
In the United States, Horace Mann (1796 – 1859) of Massachusetts used his political base and role as Secretary of the Massachusetts State Board of Education to promote public education in his home state and nationwide. Advocating that a substantial public investment be made in education, Mann and his proponents developed a strong system of state-supported common schools.
His crusading style attracted wide middle class support. Historian Ellwood P. Cubberley asserts:
No one did more than he to establish in the minds of the American people the conception that education should be universal, non-sectarian, free, and that its aims should be social efficiency, civic virtue, and character, rather than mere learning or the advancement of sectarian ends.
In 1852, Massachusetts passed a law making education mandatory. This model of free, accessible education spread throughout the country and in 1917 Mississippi was the final state to adopt the law.
John Dewey
John Dewey, a philosopher and educator based in Chicago and New York, helped conceptualize the role of American and international education during the first four decades of the 20th century. An important member of the American Pragmatist movement, he carried the subordination of knowledge to action into the educational world by arguing for experiential education that would enable children to learn theory and practice simultaneously; a well-known example is the practice of teaching elementary physics and biology to students while preparing a meal. He was a harsh critic of "dead" knowledge disconnected from practical human life.
Dewey criticized the rigidity and volume of humanistic education, and the emotional idealizations of education based on the child-study movement that had been inspired by Rousseau and those who followed him. Dewey understood that children are naturally active and curious and learn by doing. Dewey's understanding of logic is presented in his work "Logic, the Theory of Inquiry" (1938). His educational philosophies were presented in "My Pedagogic Creed", The School and Society, The Child and Curriculum, and Democracy and Education (1916). Bertrand Russell criticized Dewey's conception of logic, saying "What he calls "logic" does not seem to me to be part of logic at all; I should call it part of psychology."
Dewey left the University of Chicago in 1904 over issues relating to the Dewey School.
Dewey's influence began to decline in the time after the Second World War and particularly in the Cold War era, as more conservative educational policies came to the fore.
Administrative Progressives
The form of educational progressivism which was most successful in having its policies implemented has been dubbed "administrative progressivism" by historians. This began to be implemented in the early 20th century. While influenced particularly in its rhetoric by Dewey and even more by his popularizers, administrative progressivism was in its practice much more influenced by the Industrial Revolution and the concept of economies of scale.
The administrative progressives are responsible for many features of modern American education, especially American high schools: counseling programs, the move from many small local high schools to large centralized high schools, curricular differentiation in the form of electives and tracking, curricular, professional, and other forms of standardization, and an increase in state and federal regulation and bureaucracy, with a corresponding reduction of local control at the school board level. (Cf. "State, federal, and local control of education in the United States", below) (Tyack and Cuban, pp. 17–26)
These reforms have since become heavily entrenched, and many today who identify themselves as progressives are opposed to many of them, while conservative education reform during the Cold War embraced them as a framework for strengthening traditional curriculum and standards.
More recent methods, instituted by groups such as the think tank Reform's education division, and S.E.R. have attempted to pressure the government of the U.K. into more modernist educational reform, though this has met with limited success.
History of Public School Reform - United States
In the United States, public education is characterized as "any federally funded primary or secondary school, administered to some extent by the government, and charged with educating all citizens. Although there is typically a cost to attend some public higher education institutions, they are still considered part of public education."
Colonial America
In what would become the United States, the first public school was established in Boston, Massachusetts, on April 23, 1635. Puritan schoolmaster Philemon Pormont led instruction at the Boston Latin School. During this time, post-secondary education was a commonly utilized tool to distinguish one's social class and social status. Access to education was the "privilege of white, upper-class, Christian male children" in preparation for university education in ministry.
In colonial America, to maintain Puritan religious traditions, formal and informal education instruction focused on teaching literacy. All colonists needed to understand the written language on some fundamental level in order to read the Bible and the colony's written secular laws. Religious leaders recognized that each person should be "educated enough to meet the individual needs of their station in life and social harmony." The first compulsory education laws were passed in Massachusetts between 1642 and 1648 when religious leaders noticed that not all parents were providing their children with proper education. These laws stated that all towns with 50 or more families were obligated to hire a schoolmaster to teach children reading, writing, and basic arithmetic:

"In 1642 the General Court passed a law that required heads of households to teach all their dependents — apprentices and servants as well as their own children — to read English or face a fine. Parents could provide the instruction themselves or hire someone else to do it. Selectmen were to keep 'a vigilant eye over their brethren and neighbors,' young people whose education was neglected could be removed from their parents or masters."

The 1647 law eventually led to the establishment of publicly funded district schools in all Massachusetts towns, although, despite the threat of fines, compliance and the quality of public schools were less than satisfactory:

"Many towns were 'shamefully neglectful' of children's education. In 1718 '...by sad experience, it is found that many towns that not only are obliged by law, but are very able to support a grammar school, yet choose rather to incur and pay the fine or penalty than maintain a grammar school.'"

When John Adams drafted the Massachusetts Constitution in 1780, he included provisions for a comprehensive education law that guaranteed public education to "all" citizens. However, access to formal education in secondary schools and colleges was reserved for free, white males. During the 17th and 18th centuries, females received little or no formal education except for home learning or attending Dame Schools. Likewise, many educational institutions maintained a policy of refusing to admit Black applicants. The Virginia Code of 1819 outlawed teaching enslaved people to read or write.
Post Revolution
Soon after the American Revolution, early leaders like Thomas Jefferson and John Adams proposed the creation of a more "formal and unified system of publicly funded schools" to meet the need to "build and maintain commerce, agriculture and shipping interests". Their concept of free public education was not well received and did not begin to take hold until the 1830s. However, in 1790, evolving socio-cultural ideals in the Commonwealth of Pennsylvania led to the first significant and systematic reform in education legislation, mandating that economic conditions should not inhibit a child's access to education:

"Constitution of the Commonwealth of Pennsylvania – 1790, ARTICLE VII, Section I. The legislature shall, as soon as conveniently may be, provide, by law, for the establishment of schools throughout the state, in such manner that the poor may be taught gratis."
Reconstruction and the American Industrial Revolution
During Reconstruction, from 1865 to 1877, African Americans worked to encourage public education in the South. The U.S. Supreme Court decision in Plessy v. Ferguson, which held that "segregated public facilities were constitutional so long as the black and white facilities were equal to each other", meant that African American children were legally allowed to attend public schools, although these schools were still segregated based on race. However, by the mid-twentieth century, civil rights groups would challenge racial segregation.
During the second half of the nineteenth century (between 1870 and 1914), America's Industrial Revolution refocused the nation's attention on the need for a universally accessible public school system. Inventions, innovations, and improved production methods were critical to the continued growth of American manufacturing. To compete in the global economy, an overwhelming demand emerged for literate workers with practical training. Citizens argued that "educating children of the poor and middle classes would prepare them to obtain good jobs, thereby strengthening the nation's economic position." Institutions became an essential tool in yielding ideal factory workers with sought-after attitudes and desired traits such as dependability, obedience, and punctuality. Vocationally oriented schools offered practical subjects like shop classes for students who were not planning to attend college for financial or other reasons. Not until the latter part of the 19th century did public elementary schools become available throughout the country. It would take longer still for children of color, girls, and children with special needs to gain access to free public education.
Mid-20th and early 21st century (United States)
Civil Rights Reform
Systemic bias remained a formidable barrier. From the 1950s to the 1970s, many of the proposed and implemented reforms in U.S. education stemmed from the civil rights movement and related trends; examples include ending racial segregation, and busing for the purpose of desegregation, affirmative action, and banning of school prayer.
In the early 1950s, most U.S. public schools operated under a legally sanctioned system of racial segregation. Civil rights reform movements sought to address the biases that ensured the unequal distribution of academic resources, such as school funding, qualified and experienced teachers, and learning materials, to socially excluded communities. In the early 1950s, NAACP lawyers brought class-action lawsuits on behalf of black schoolchildren and their families in Kansas, South Carolina, Virginia, and Delaware, petitioning for court orders to compel school districts to let black students attend white public schools. Finally, in 1954, the U.S. Supreme Court rejected that framework with Brown v. Board of Education and declared state-sponsored segregation of public schools unconstitutional.
In 1964, Title VI of the Civil Rights Act (Public Law 88-352) "prohibited discrimination on the basis of race, color, and national origin in programs and activities receiving federal financial assistance." Educational institutions could now utilize public funds to implement in-service training programs to assist teachers and administrators in establishing desegregation plans.
In 1965, the Higher Education Act (HEA) (Public Law 89–329) authorizes federal aid for postsecondary students.
The Elementary and Secondary Education Act of 1965 (ESEA) (Public Law 89-313) represents the federal government's commitment to providing equal access to quality education, including for children from low-income families, children with limited English proficiency, and other minority groups. This legislation had positive retroactive implications for Historically Black Colleges and Universities, more commonly known as HBCUs:

"The Higher Education Act of 1965, as amended, defines an HBCU as: '…any historically black college or university that was established prior to 1964, whose principal mission was, and is, the education of black Americans, and that is accredited by a nationally recognized accrediting agency or association determined by the Secretary [of Education] to be a reliable authority as to the quality of training offered or is, according to such an agency or association, making reasonable progress toward accreditation.'"

Known as the Bilingual Education Act, Title VII of ESEA (Public Law 90-247) offered federal aid to school districts to provide bilingual instruction for students with limited English-speaking ability.
The Education Amendments of 1972 (Public Law 92-318, 86 Stat. 327) established the Education Division in the U.S. Department of Health, Education, and Welfare and the National Institute of Education. Title IX of the Education Amendments of 1972 states, "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance."
Equal Educational Opportunities Act of 1974 (Public Law 93-380) - Civil Rights Amendments to the Elementary and Secondary Education Act of 1965:

"Title I: Bilingual Education Act - Authorizes appropriations for carrying out the provisions of this Act. Establishes, in the Office of Education, an Office of Bilingual Education through which the Commissioner of Education shall carry out his functions relating to bilingual education. Authorizes appropriations for school nutrition and health services, correction education services, and ethnic heritage studies centers.
Title II: Equal Educational Opportunities and the Transportation of Students: Equal Educational Opportunities Act - Provides that no state shall deny equal educational opportunity to an individual on account of his or her race, color, sex, or national origin by means of specified practices...
Title IV: Consolidation of Certain Education Programs: Authorizes appropriations for use in various education programs including libraries and learning resources, education for use of the metric system of measurement, gifted and talented children programs, community schools, career education, consumers' education, women's equity in education programs, and arts in education programs.
Community Schools Act - Authorizes the Commissioner to make grants to local educational agencies to assist in planning, establishing, expanding, and operating community education programs
Women's Educational Equity Act - Establishes the Advisory Council on Women's Educational Programs and sets forth the composition of such Council. Authorizes the Commissioner of Education to make grants to, and enter into contracts with, public agencies, private nonprofit organizations, and individuals for activities designed to provide educational equity for women in the United States.
Title V: Education Administration: Family Educational Rights and Privacy Act (FERPA)- Provides that no funds shall be made available under the General Education Provisions Act to any State or local educational agency or educational institution which denies or prevents the parents of students to inspect and review all records and files regarding their children.
Title VII: National Reading Improvement Program: Authorizes the Commissioner to contract with State or local educational agencies for the carrying out by such agencies, in schools having large numbers of children with reading deficiencies, of demonstration projects involving the use of innovative methods, systems, materials, or programs which show promise of overcoming such reading deficiencies."

In 1975, the Education for All Handicapped Children Act (Public Law 94-142) ensured that all handicapped children (ages 3-21) receive a "free, appropriate public education" designed to meet their special needs.
1980-1989: A Nation at Risk
During the 1980s, some of the momentum of education reform moved from the left to the right, with the release of A Nation at Risk and Ronald Reagan's efforts to reduce or eliminate the United States Department of Education. "[T]he federal government and virtually all state governments, teacher training institutions, teachers' unions, major foundations, and the mass media have all pushed strenuously for higher standards, greater accountability, more "time on task," and more impressive academic results".
Following this shift in educational motivation, families sought institutional alternatives, including "charter schools, progressive schools, Montessori schools, Waldorf schools, Afrocentric schools, religious schools - or home school instruction in their communities."
In 1984, President Reagan signed into law the Education for Economic Security Act (Public Law 98-377).
In 1989, the Child Development and Education Act of 1989 (Public Law 101-239) authorized funds for Head Start Programs to include child care services.
In the latter half of the decade, E. D. Hirsch put forth an influential attack on one or more versions of progressive education, advocating an emphasis on "cultural literacy"—the facts, phrases, and texts that he argued every American needs to know.
See also Uncommon Schools.
1990-2000: Standards-Based Education Model
In 1994, the land grant system was expanded via the Elementary and Secondary Education Act to include tribal colleges.
Most states and districts in the 1990s adopted Outcome-Based Education (OBE) in some form or another. A state would create a committee to adopt standards, and choose a quantitative instrument to assess whether the students knew the required content or could perform the required tasks.
In 1992, the National Commission on Time and Learning, Extension (Public Law 102–359) revised funding for civic education programs and for educationally disadvantaged children.
In 1994 the Improving America's Schools Act (IASA) (Public Law 103-382) reauthorized the Elementary and Secondary Education Act of 1965, amended as the Eisenhower Professional Development Program. IASA designated Title I funds for low-income and otherwise marginalized groups, e.g., females, minorities, individuals with disabilities, and individuals with limited English proficiency (LEP). By tethering federal funding distributions to student achievement, IASA used high-stakes testing and curriculum standards to hold schools accountable for bringing their students' results to the same level as other students. The Act significantly increased impact aid for the establishment of the Charter School Program, drug awareness campaigns, bilingual education, and technology.
In 1998 The Charter School Expansion Act (Public Law 105-278) amended the Charter School Program, enacted in 1994.
2001-2015: No Child Left Behind
The Consolidated Appropriations Act of 2001 (Public Law 106-554) appropriated funding to repair educational institutions' buildings as well as to repair and renovate charter school facilities, reauthorized the Even Start program, and enacted the Children's Internet Protection Act.
The standards-based National Education Goals 2000, set by the U.S. Congress in the 1990s, were based on the principles of outcomes-based education. In 2002, the standards-based reform movement culminated in the No Child Left Behind Act of 2001 (Public Law 107-110), under which achievement standards were set by each individual state. This federal policy was active in the United States until 2015.
An article released by CNBC.com said a principal Senate committee would take into account legislation to reauthorize and modernize the Carl D. Perkins Act, the statute President George W. Bush approved on August 12, 2006. The new bill emphasizes the importance of federal funding for various Career and Technical Education (CTE) programs that will better provide learners with in-demand skills. Pell Grants are a specific amount of money given by the government each school year to disadvantaged students who need help paying college tuition.
At present, there are many initiatives aimed at dealing with these concerns, such as innovative cooperation between federal and state governments, educators, and the business sector. One of these efforts is the Pathways in Technology Early College High School (P-TECH). This six-year program was launched in cooperation with IBM, educators from New York, Chicago, and Connecticut, and over 400 businesses. The program offers high school students an associate degree pathway focused on the STEM curriculum. The High School Involvement Partnership, a private and public venture, was established with the help of Northrop Grumman, a global security firm. It has assisted some 7,000 high school juniors and seniors since 1971 through one-on-one coaching as well as exposure to STEM areas and careers.
2016-2021: Every Student Succeeds Act
The American Recovery and Reinvestment Act, enacted in 2009, reserved more than $85 billion in public funds for education.
In 2009, the Council of Chief State School Officers and the National Governors Association launched the Common Core State Standards Initiative.
In 2012 the Obama administration launched the Race to the Top – District competition, aimed at spurring K–12 education reform through higher standards. "The Race to the Top – District competition will encourage transformative change within schools, targeted toward leveraging, enhancing, and improving classroom practices and resources.
The four key areas of reform include:
Development of rigorous standards and better assessments
Adoption of better data systems to provide schools, teachers, and parents with information about student progress
Support for teachers and school leaders to become more effective
Increased emphasis and resources for the rigorous interventions needed to turn around the lowest-performing schools"

In 2015, under the Obama administration, many of the more restrictive elements that had been enacted under No Child Left Behind (NCLB, 2001) were removed by the Every Student Succeeds Act (ESSA, 2015), which limits the role of the federal government in holding schools accountable. The Every Student Succeeds Act (Public Law 114-95) reformed educational standards by "moving away from such high stakes and assessment based accountability models" and focused on assessing student achievement from a holistic approach by utilizing qualitative measures. Some argue that giving states more authority can help prevent considerable discrepancies in educational performance across different states. ESSA, approved by President Obama in 2015, amended and reauthorized the Elementary and Secondary Education Act of 1965. The Department of Education can draw attention to such differences by pinpointing the lowest-performing state governments and supplying information on the condition and progress of each state on different educational parameters. It can also provide reasonable funding along with technical aid to help states with similar demographics collaborate in improving their public education programs.
Social and Emotional Learning: Strengths-Based Education Model
This model uses a methodology that values purposeful engagement in activities that turn students into self-reliant and efficient learners. Holding to the view that everyone possesses natural gifts that are unique to one's personality (e.g. computational aptitude, musical talent, visual arts abilities), it likewise upholds the idea that children, despite their inexperience and tender age, are capable of coping with anguish, able to survive hardships, and can rise above difficult times.
Trump Administration
In 2017, Betsy DeVos was installed as the 11th Secretary of Education. A strong proponent of school choice, school voucher programs, and charter schools, DeVos was a much-contested choice, as her own education and career had little to do with formal experience in the U.S. education system. In the Republican-controlled Senate, she received a 50–50 vote, a tie broken by Vice President Mike Pence. Prior to her appointment, DeVos received a BA degree in business economics from Calvin College in Grand Rapids, Michigan, and served as chairman of an investment management firm, The Windquest Group. She supported the idea of leaving education to state governments under the new K-12 legislation, criticizing what she saw as the federal government's interventionist approach to education policy following the signing of the ESSA; the primary approach to that rule has not changed significantly. Her opinion was that populist politics in the education movement encouraged reformers to make promises that were not very realistic and therefore difficult to deliver.
On July 31, 2018, President Donald Trump signed the Strengthening Career and Technical Education for the 21st Century Act (HR 2353). The Act reauthorized the Carl D. Perkins Career and Technical Education Act, a $1.2 billion program last modified by the United States Congress in 2006. A move to change the Higher Education Act was also deferred.
The legislation, which took effect on July 1, 2019, replaced the Carl D. Perkins Career and Technical Education Act of 2006 (Perkins IV). Stipulations in Perkins V enable school districts to use federal subsidies for all students' career search and development activities in the middle grades as well as comprehensive guidance and academic mentoring in the upper grades. At the same time, the law revised the definition of "special populations" to include homeless persons, foster youth, those who have left the foster care system, and children with parents on active duty in the United States armed forces.
Barriers to Reform
Education Inequalities Facing Students of Color
Another factor to consider in education reform is equity and access. Contemporary issues in United States education face a history of inequalities that carry consequences for educational attainment across different social groups.
Racial and Socioeconomic Class Segregation
A history of racial, and subsequently class, segregation in the U.S. resulted from practices enshrined in law. Residential segregation is a direct result of twentieth-century policies that separated people by race using zoning and redlining practices, in addition to other housing policies, whose effects continue to endure in the United States. These neighborhoods that have been segregated de jure—by force of purposeful public policy at the federal, state, and local levels—disadvantage people of color, as students must attend school near their homes.
With the inception of the New Deal between 1933 and 1939, and during and following World War II, federally funded public housing was explicitly racially segregated by local governments in conjunction with federal policies, through projects designated for Whites or Black Americans in the South, Northeast, Midwest, and West. Following an easing of the housing shortage after World War II, the federal government subsidized the relocation of Whites to suburbs. The Federal Housing Administration and the Veterans Administration backed such developments on the East Coast in towns like Levittown on Long Island and in New Jersey, Pennsylvania, and Delaware. On the West Coast, there were Panorama City, Lakewood, Westlake, and Seattle suburbs developed by Bertha and William Boeing. As White families left for the suburbs, Black families remained in public housing and were explicitly placed in Black neighborhoods. Policies such as public housing director Harold Ickes's "neighborhood composition rule" maintained this segregation by establishing that public housing must not interfere with the pre-existing racial compositions of neighborhoods. Federal loan guarantees were given to builders who adhered to the condition that no sales were made to Black families, and each deed prohibited re-sales to Black families, what the Federal Housing Administration (FHA) described as an "incompatible racial element". In addition, banks and savings institutions refused loans to Black families in White suburbs and to Black families in Black neighborhoods. In the mid-twentieth century, urban renewal programs forced low-income Black residents to reside in places farther from universities, hospitals, or business districts, and relocation options consisted of public housing high-rises and ghettos.
This history of de jure segregation has impacted resource allocation for public education in the United States, with schools continuing to be segregated by race and class. Low-income White students are more likely than Black students to be integrated into middle-class neighborhoods and less likely to attend schools with other predominantly disadvantaged students. Students of color disproportionately attend underfunded schools and Title I schools in environments entrenched in environmental pollution and stagnant economic mobility with limited access to college readiness resources. According to research, schools attended by primarily Hispanic or African American students often have high turnover of teaching staff and are labeled high-poverty schools, in addition to having limited educational specialists, less available extracurricular opportunities, greater numbers of provisionally licensed teachers, little access to technology, and buildings that are not well maintained. With this segregation, more local property tax is allocated to wealthier communities and public schools' dependence on local property taxes has led to large disparities in funding between neighboring districts. The top 10% of wealthiest school districts spend approximately ten times more per student than the poorest 10% of school districts.
The Racial Wealth Gap
This history of racial and socioeconomic class segregation in the U.S. has manifested in a racial wealth divide. Given this history of geographic and economic segregation, trends illustrate a racial wealth gap that has impacted educational outcomes and their concomitant economic gains for minorities. Wealth, or net worth—the difference between gross assets and debt—is a stock of financial resources and a significant indicator of financial security that offers a more complete measure of household capability and functioning than income. Within the same income bracket, the chance of completing college differs for White and Black students. Nationally, White students are at least 11% more likely to complete college across all four income groups. Intergenerational wealth is another result of this history, with White college-educated families three times as likely as Black families to receive an inheritance of $10,000 or more. 10.6% of White children from low-income backgrounds and 2.5% of Black children from low-income backgrounds reach the top 20% of income distribution as adults, and less than 10% of Black children from low-income backgrounds reach the top 40%.
Access to Early Childhood Education
These disadvantages facing students of color are apparent early on, in early childhood education. By the age of five, children of color are affected by opportunity gaps marked by poverty, the school readiness gap, segregated low-income neighborhoods, implicit bias, and inequalities within the justice system, as Hispanic and African American boys account for as much as 60% of the incarcerated population. These populations are also more likely to experience adverse childhood experiences (ACEs).
High-quality early care and education are less accessible to children of color, particularly African American preschoolers. Findings from the National Center for Education Statistics show that in 2013, 40% of Hispanic children and 36% of White children were enrolled in center-based classrooms rated as high quality, while only 25% of African American children were enrolled in such programs; 15% of African American children attended low-rated center-based classrooms. In home-based settings, 30% of White children and over 50% of Hispanic and African American children attended low-rated programs.
Contemporary issues (United States)
Overview
In the first decade of the 21st century, several issues are salient in debates over further education reform:
Longer school day or school year
After-school tutoring
Charter schools, school choice, or school vouchers
Smaller class sizes
Improved teacher quality
Improved training
Higher credential standards
Generally higher pay to attract more qualified applicants
Performance bonuses ("merit pay")
Firing low-performing teachers
Internet and computer access in schools
Track and reduce drop-out rate
Track and reduce absenteeism
English-only vs. bilingual education
Mainstreaming or fully including students with special educational needs, rather than placing them in separate special schools
Content of curriculum standards and textbooks
What to teach, at what age, and to which students. Discussion points include the age at which children should learn to read, and the primary mathematical subject taught to adolescents – algebra, statistics, or personal finance.
Funding, neglected infrastructure, and adequacy of educational supplies
Student rights
Education inequalities facing students of color
Private Interest in American Charter Schools
Charter schools are businesses in which both the cost and risk are fully funded by the taxpayers. During the 2018/19 school year, there were 7,427 charter schools throughout the United States, a significant increase from the 2000/01 school year, when there were 1,993. Some charter schools are nonprofit in name only and are structured in ways that allow individuals and private enterprises connected to them to make money. Other charter schools are for-profit.

The global education market is valued at over $5 trillion, embodying the hopes and aspirations of people everywhere. In many cases, the public is largely unaware of this rapidly changing educational landscape, the debate between public and private/market approaches, and the decisions that are being made that affect their children and communities. In this rapidly changing environment, research on the impact of different approaches to educational improvement is available and should be included in discussions and policy decisions. Critics have accused for-profit entities (education management organizations, EMOs) and private foundations such as the Bill and Melinda Gates Foundation, the Eli and Edythe Broad Foundation, and the Walton Family Foundation of funding charter school initiatives to undermine public education and turn education into a "business model" which can make a profit. According to activist Jonathan Kozol, education is seen as one of the biggest market opportunities in America.

In some cases a school's charter is held by a non-profit that chooses to contract all of the school's operations to a third party, often a for-profit CMO. This arrangement is defined as a vendor-operated school (VOS). In 2009-2010, the largest CMO provider (KIPP Foundation) had nearly twice as many schools and enrolled nearly twice as many students as the next largest provider. The EMO provider with the most students (K12 Inc.) enrolled nearly twice as many students as the largest CMO provider (KIPP Foundation). The top ten largest EMO providers enrolled 150,000 more students than the top ten largest CMO providers. The average student enrollment in EMO-affiliated charter schools was 494 students, compared with 306 students in CMO-affiliated charter schools and 301 in freestanding charter schools.

At least five states have passed legislation requiring that students complete at least one virtual class in order to obtain a high school diploma. Slate reports that in 2011, Republican legislators in Florida passed legislation making the completion of at least one virtual class a graduation requirement—and that at least 32 of the state lawmakers who supported the law had received donations from K12 the prior year. While K12 Inc. does not disclose details of its lobbying efforts, Education Week estimates that the company spent over $10 million on lobbying efforts in 21 states. At its 2016 annual meeting, K12 Inc. rejected a shareholder-led transparency proposal that would have required the company's board of directors to produce a yearly report on K12 Inc.'s direct and indirect lobbying of policymakers. The proposal, which won support from major analysts, also received significant support from shareholders.
School Choice
Economists such as Nobel laureate Milton Friedman advocate school choice to promote excellence in education through competition and choice. A competitive "market" for schools, the argument goes, eliminates the need for other methods of holding schools accountable for results. Public education vouchers permit guardians to select and pay any school, public or private, with public funds currently allocated to local public schools. The theory is that children's guardians will naturally shop for the best schools, much as is already done at college level.
Though appealing in theory, many reforms based on school choice have led to only slight to moderate improvements—which some teachers' union members see as insufficient to offset the decreased teacher pay and job security. For instance, New Zealand's landmark reform in 1989, during which schools were granted substantial autonomy, funding was devolved to schools, and parents were given a free choice of which school their children would attend, led to moderate improvements in most schools. It was argued that the associated increases in inequity and greater racial stratification in schools nullified the educational gains. Others, however, argued that the original system created more inequity, because lower-income students were required to attend poorer-performing inner-city schools and were not allowed the school choice or better education available to higher-income inhabitants of suburbs, and that school choice instead promoted social mobility and increased test scores, especially for low-income students. Similar results have been found in other jurisdictions. The merely slight improvements produced by some school-choice policies, though discouraging, often seem to reflect weaknesses in the way that choice is implemented rather than a failure of the basic principle itself.
Teacher Tenure
Critics of teacher tenure claim that the laws protect ineffective teachers from being fired, which can be detrimental to student success. Tenure laws vary from state to state, but generally they set a probationary period during which a teacher proves themselves worthy of the lifelong position. Probationary periods range from one to three years. Advocates for tenure reform often consider these periods too short to make such an important decision, especially when that decision is exceptionally hard to revoke. Due process restrictions protect tenured teachers from being wrongfully fired; however, these restrictions can also prevent administrators from removing ineffective or inappropriate teachers. A 2008 survey conducted by the US Department of Education found that, on average, only 2.1% of teachers are dismissed each year for poor performance.
In October 2010 Apple Inc. CEO Steve Jobs had a consequential meeting with U.S. President Barack Obama to discuss U.S. competitiveness and the nation's education system. During the meeting Jobs recommended pursuing policies that would make it easier for school principals to hire and fire teachers based on merit.
In 2012 tenure for school teachers was challenged in a California lawsuit called Vergara v. California. The primary issue in the case was the impact of tenure on student outcomes and on equity in education. On June 10, 2014, the trial judge ruled that California's teacher tenure statute produced disparities that "shock the conscience" and violate the equal protection clause of the California Constitution. On July 7, 2014, U.S. Secretary of Education Arne Duncan commented on the Vergara decision during a meeting with President Barack Obama and representatives of teachers' unions. Duncan said that tenure for school teachers "should be earned through demonstrated effectiveness" and should not be granted too quickly. Specifically, he criticized the 18-month tenure period at the heart of the Vergara case as being too short to be a "meaningful bar."
Funding levels
According to a 2005 report from the OECD, the United States is tied for first place with Switzerland when it comes to annual spending per student on its public schools, with each of those two countries spending more than $11,000 (in U.S. currency).
Despite this high level of funding, U.S. public schools lag behind the schools of other rich countries in the areas of reading, math, and science. A further analysis of developed countries shows no correlation between per student spending and student performance, suggesting that there are other factors influencing education. Top performers include Singapore, Finland and Korea, all with relatively low spending on education, while high spenders including Norway and Luxembourg have relatively low performance. One possible factor is the distribution of the funding.
In the US, schools in wealthy areas tend to be over-funded while schools in poorer areas tend to be underfunded. These differences in spending between schools or districts may accentuate inequalities if they result in the best teachers moving to teach in the wealthiest areas. The inequality between districts and schools led 23 states to institute school finance reform based on adequacy standards that aim to increase funding to low-income districts. A 2018 study found that between 1990 and 2012, these finance reforms led to an increase in funding and test scores in low-income districts, which suggests finance reform is effective at bridging inter-district performance inequalities. It has also been shown that the socioeconomic situation of a student's family has the most influence in determining success, suggesting that even if increased funds in a low-income area raise performance, those students may still perform worse than their peers from wealthier districts.
Starting in the early 1980s, a series of analyses by Eric Hanushek indicated that the amount spent on schools bore little relationship to student learning. This controversial argument, which focused attention on how money was spent instead of how much was spent, led to lengthy scholarly exchanges. In part the arguments fed into the class size debates and other discussions of "input policies." It also moved reform efforts towards issues of school accountability (including No Child Left Behind) and the use of merit pay and other incentives.
Studies have shown that smaller class sizes and newer buildings (both of which require higher funding to implement) lead to academic improvements. Many of the reform ideas that stray from the traditional format also require greater funding.
In a 1999 article, William J. Bennett, former U.S. Secretary of Education, argued that increased levels of spending on public education have not made schools better, citing statistics in support of his claim.
Internationally
Education for All
Education 2030 Agenda refers to the global commitment of the Education for All movement to ensure access to basic education for all. It is an essential part of the 2030 Agenda for Sustainable Development. The roadmap to achieve the Agenda is the Education 2030 Incheon Declaration and Framework for Action, which outlines how countries, working with UNESCO and global partners, can translate commitments into action.
The United Nations, over 70 ministers, representatives of member-countries, bilateral and multilateral agencies, regional organizations, academic institutions, teachers, civil society, and the youth supported the Framework for Action of the Education 2030 platform. The Framework was described as the outcome of continuing consultation to provide guidance for countries in implementing this Agenda. At the same time, it mobilizes various stakeholders in the new education objectives, coordination, implementation process, funding, and review of Education 2030.
Thailand
In 1995, the minister of education, Sukavich Rangsitpol, launched a series of education reforms intended to realize the potential of Thai people to develop themselves for a better quality of life and to develop the nation for peaceful co-existence in the global community.
As Education Minister, Sukavich Rangsitpol introduced the Reform Program of 1996. A sense that major changes were needed in education is reflected in this program, which is built around four major improvements:
improving the physical state of schools
upgrading the quality of teachers
reforming learning and teaching methods
streamlining administration
School-based management (SBM) was implemented in Thailand in 1997 in the course of a reform aimed at overcoming a profound crisis in the education system.
According to UNESCO, Thailand education reform has led to the following results:
The educational budget increased from 133 billion baht in 1996 to 163 billion baht in 1997 (22.5% increase)
Since 1996, first grade students have been taught English as a second or foreign language and computer literacy.
Professional advancement from teacher level 6 to teacher level 7 without having to submit academic work for consideration was approved by the Thai government.
Twelve years of free education for all children, provided by the government. This program was added to the 1997 Constitution of Thailand, giving all citizens access.
The World Bank reports that after the 1997 Asian financial crisis, income in the northeast, the poorest part of Thailand, rose by 46 percent from 1998 to 2006. Nationwide poverty fell from 21.3 to 11.3 percent.
Learning crisis
The learning crisis is the reality that while the majority of children around the world attend school, a large proportion of them are not learning. A World Bank study found that "53 percent of children in low- and middle-income countries cannot read and understand a simple story by the end of primary school." While schooling has increased rapidly over the last few decades, learning has not followed suit. Many practitioners and academics call for education system reform in order to address the learning needs of all children.
Digital Education
The movement to use computers more in education naturally includes many unrelated ideas, methods, and pedagogies since there are many uses for digital computers. For example, the fact that computers are naturally good at math leads to the question of the use of calculators in math education. The Internet's communication capabilities make it potentially useful for collaboration, and foreign language learning. The computer's ability to simulate physical systems makes it potentially useful in teaching science. More often, however, debate of digital education reform centers around more general applications of computers to education, such as electronic test-taking and online classes.
Another viable addition to digital education has been blended learning. In 2009, over 3 million K-12 students took an online course, compared to 2000, when 45,000 did. Blended learning models range from fully online instruction, to blends of online and face-to-face instruction, to traditional classroom education. Research results show that the most effective learning takes place in a blended format. This allows children to view the lecture ahead of time and then spend class time practicing, refining, and applying what they have previously learned.
The idea of creating artificial intelligence led some computer scientists to believe that teachers could be replaced by computers, through something like an expert system; however, attempts to accomplish this have predictably proved inflexible. The computer is now more understood to be a tool or assistant for the teacher and students.
Harnessing the richness of the Internet is another goal. In some cases classrooms have been moved entirely online, while in other instances the goal is more to learn how the Internet can be more than a classroom.
Web-based international educational software is under development by students at New York University, based on the belief that current educational institutions are too rigid: effective teaching is not routine, students are not passive, and questions of practice are not predictable or standardized. The software allows for courses tailored to an individual's abilities through frequent and automatic multiple intelligences assessments. Ultimate goals include assisting students to be intrinsically motivated to educate themselves, and aiding the student in self-actualization. Courses typically taught only in college are being reformatted so that they can be taught to any level of student, whereby elementary school students may learn the foundations of any topic they desire. Such a program has the potential to remove the bureaucratic inefficiencies of education in modern countries, and with the decreasing digital divide, help developing nations rapidly achieve a similar quality of education. With an open format similar to Wikipedia, any teacher may upload their courses online and a feedback system will help students choose relevant courses of the highest quality. Teachers can provide links in their digital courses to webcast videos of their lectures. Students will have personal academic profiles and a forum will allow students to pose complex questions, while simpler questions will be automatically answered by the software, which will guide students to a solution by searching the knowledge database, which includes all available courses and topics.
The 21st century ushered in the acceptance and encouragement of internet research conducted on college and university campuses, in homes, and even in gathering areas of shopping centers. The addition of cyber cafes on campuses and in coffee shops, the loaning of communication devices from libraries, and the availability of more portable technology devices opened up a world of educational resources. Knowledge had long been available mainly to the elite, yet the provision of networking devices, even wireless gadget sign-outs from libraries, made the availability of information an expectation of most persons. Cassandra B. Whyte researched the future of computer use on higher education campuses, focusing on student affairs. Though at first seen as a data collection and outcome reporting tool, the use of computer technology in classrooms, meeting areas, and homes continued to unfold. The sole dependence on paper resources for subject information diminished, and e-books, articles, and online courses were anticipated to become increasingly standard and affordable choices provided by higher education institutions, according to Whyte in a 2002 presentation.
Digitally "flipping" classrooms is a trend in digital education that has gained significant momentum. Will Richardson, author and visionary for the digital education realm, points to the not-so-distant future and the seemingly infinite possibilities for digital communication linked to improved education. Education on the whole, as a stand-alone entity, has been slow to embrace these changes. The use of web tools such as wikis, blogs, and social networking sites is tied to increasing overall effectiveness of digital education in schools. Examples exist of teacher and student success stories where learning has transcended the classroom and has reached far out into society.
The media has been instrumental in pushing formal educational institutions to become savvier in their methods. Additionally, advertising has been (and continues to be) a vital force in shaping students' and parents' thought patterns.
Technology is a dynamic entity that is constantly in flux. As time presses on, new technologies will continue to break paradigms and reshape human thinking regarding technological innovation. This concept stresses a certain disconnect between teachers and learners, a chasm that began growing some time ago. Richardson asserts that traditional classrooms will essentially enter entropy unless teachers increase their comfort and proficiency with technology.
Administrators are not exempt from the technological disconnect. They must recognize the existence of a younger generation of teachers who were born during the Digital Age and are very comfortable with technology. However, when old meets new, especially in a mentoring situation, conflict seems inevitable. Ironically, the answer to the outdated mentor may be digital collaboration with worldwide mentor webs, composed of individuals with creative ideas for the classroom.
See also
Anti-schooling activism
Blab school
Block scheduling
Certificate of Initial Mastery
Criterion-referenced test
Educational philosophies
Excellence and equity
Female education
High school graduation examination
Higher-order thinking
Inquiry-based Science
Learning crisis
Learning environment
Learning space
Merit pay
Multiculturalism
Political correctness
Project-based learning
Special Assistance Program
Student-centered learning
Sudbury model democratic schools
Sudbury Valley School
Teaching for social justice
University reform
Web literacy
References
Sources
Further reading
Comer, J.P. (1997). Waiting for a Miracle: Why Schools Can't Solve Our Problems- and How We Can. New York: Penguin Books.
Cuban, L. (2003). Why Is It So Hard to Get Good Schools? New York: Teachers College, Columbia University.
Darling-Hammond, Linda. (1997) The Right to Learn: A Blueprint for Creating Schools that Work. Jossey-Bass.
Dewey, J. and Dewey, E. (1915). Schools of To-morrow. New York: E.P. Dutton and Company.
Gatto, John Taylor (1992). Dumbing Us Down: The Hidden Curriculum of Compulsory Schooling. Canada: New Society Publishers.
Glazek, S.D. and Sarason, S.B. (2007). Productive Learning: Science, Art, and Einstein's Relativity in Education Reform. New York: Sage Publications, Inc.
Goodlad, J.I. and Anderson, R.H. (1959 and 1987). The Nongraded Elementary School. New York: Harcourt, Brace and Company.
James, Laurie. (1994) Outrageous Questions: Legacy of Bronson Alcott and America's One-Room Schools New York.
Katz, M.B. (1971). Class, Bureaucracy, and Schools: The Illusion of Educational Change in America. New York: Praeger Publishers.
Kliebard, Herbert. (1987) The Struggle for the American Curriculum. New York : Routledge & Kegan Paul.
Kohn, A. (1999). The Schools Our Children Deserve: Moving Beyond Traditional Classrooms and 'Tougher Standards'. Boston: Houghton Mifflin Co.
Murphy, J.H. and Beck, L.G. (1995). School-Based Management as School Reform: Taking Stock. Thousand Oaks, CA: Corwin Press, Inc.
Ogbu, J.U. (1978). Minority Education and Caste: The American System in Cross-Cultural Perspective. New York: Academic Press.
Ravitch, D. (1988). The Great School Wars: A History of the New York City Public Schools. New York: Basic Books, Inc.
Sarason, S.B. (1996). Revisiting 'The Culture of the School and the Problem of Change'. New York: Teachers College Press.
Sarason, S.B. (1990). The Predictable Failure of Educational Reform: Can We Change Course Before It's Too Late? San Francisco: Jossey-Bass, Inc.
Sizer, T.R. (1984). Horace's Compromise: The Dilemma of the American High School. Boston: Houghton Mifflin Company.
Tough, Paul. (2008). Whatever It Takes: Geoffrey Canada's Quest to Change Harlem and America. New York: Houghton Mifflin Company.
Tough, Paul. (2012). How Children Succeed. New York: Houghton Mifflin Company.
Tyack, David and Cuban, Larry. (1995) Tinkering Toward Utopia: A Century of Public School Reform. Cambridge, MA: Harvard University Press.
Zwaagstra, Michael; Clifton, Rodney; and Long, John. (2010) What's Wrong with Our Schools: and How We Can Fix Them. Rowman & Littlefield.
External links
Education issues
History of education
|
46199738
|
https://en.wikipedia.org/wiki/John%20Marshall%20%28entrepreneur%29
|
John Marshall (entrepreneur)
|
John D. Marshall is an American entrepreneur and inventor. He is the co-founder and former president and CEO of AirWatch, which VMware acquired for $1.54 billion in 2014. He is co-chairman at a software start-up called OneTrust.
Career
In 1996, Marshall was hired as an implementation consultant at Manhattan Associates, a supplier of field inventory management software. He had various consulting roles designing and implementing complex supply chain systems. He spent 18 months helping to launch the company's presence in Europe and assisted in the design of multiple software modules relating to transportation, load planning and global logistics.
Celarix later hired Marshall as vice president for marketing strategy in 1999. He was responsible for designing the company's product solutions, developing the go-to-market strategy and leading business development activities with technology and transportation partners in North America, Europe and Asia. GXS Worldwide, Inc., formerly GE Information Systems, acquired Celarix in 2003.
Marshall founded Wandering WiFi in 2003. The company started by setting up hospitality businesses with internet hot spots and Marshall grew the customer base and extended the software to monitor and manage other types of network infrastructure.
In 2006, Alan Dabbiere, founder and former president of Manhattan Associates, joined the business and together they launched AirWatch to accelerate development on managing Windows Mobile devices. After the launch of the iPhone, they pivoted the company to develop software to manage smartphones.
During a press conference with Georgia Governor Nathan Deal on Jan. 25, 2013, Marshall and Dabbiere announced that AirWatch would create 800 additional jobs in Georgia over two years and invest more than $4 million in new equipment. In three years, the company grew from 100 employees to more than 1,500.
In February 2013, AirWatch secured a $200 million Series A funding round, the largest Series A round of any software company in history, from Insight Venture Partners and Accel Partners. AirWatch also stated the company's revenues had grown 40 percent quarter over quarter for the previous eight quarters.
In July 2013, AirWatch acquired Motorola Solutions' MSP (Mobility Services Platform) to extend management capabilities to ruggedized devices.
In January 2014, VMware acquired AirWatch for $1.54 billion, the largest acquisition to date for VMware. During the Q4 2014 earnings call, VMware announced that AirWatch reached $200 million in 2014 bookings, 2,000 employees and more than 15,000 customers as of January 2015, making it the largest enterprise mobility management provider in terms of revenue, customers and employees. Some of the companies using AirWatch include Wal-Mart Stores Inc., The Home Depot Inc., Walgreens, Delta Air Lines, and the Department of Justice. As of February 2015, the AirWatch app was ranked as the second top free business application.
In March 2016, he stepped down as CEO and took up a position as an advisory board member, which he held until December 2016.
After stepping down from his role at VMware AirWatch, Marshall became co-chairman of OneTrust, a privacy management software platform, alongside long-time colleague and former AirWatch chairman Alan Dabbiere.
Awards and recognition
In 2014, Atlanta Business Chronicle named Marshall one of Atlanta's Most Admired CEOs, Mobile Village named Marshall the year's Mobile Visionary and Best in Biz named Marshall Executive of the Year.
Marshall was named the 2013 Ernst & Young Entrepreneur of the Year and the Association of Telecom Professionals selected Marshall as the 2012 ATP of the year, which recognizes individuals for their contributions to industry and community.
Under Marshall's leadership, AirWatch received several industry awards, including a 2015 Global Mobile Award from GSMA Mobile World Congress, two 2014 Global Mobile Awards, three 2013 MobITS Awards from CTIA and the Best Mobile Security Solution from SC Magazine.
Marshall is a board member on the Georgia Tech Information Security Center (GTISC) Industry Advisory Board.
Marshall was featured on CNBC in 2014 to discuss smartphone applications and the New York Times included his perspective on Samsung in the enterprise in 2013.
References
Year of birth missing (living people)
Living people
20th-century American businesspeople
21st-century American businesspeople
American inventors
|
712214
|
https://en.wikipedia.org/wiki/The%20Cuckoo%27s%20Egg%20%28book%29
|
The Cuckoo's Egg (book)
|
The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage is a 1989 book written by Clifford Stoll. It is his first-person account of the hunt for a computer hacker who broke into a computer at the Lawrence Berkeley National Laboratory (LBNL).
Stoll's use of the term extended the metaphor Cuckoo's egg from brood parasitism in birds to malware.
Summary
Author Clifford Stoll, an astronomer by training, managed computers at Lawrence Berkeley National Laboratory (LBNL) in California. One day in 1986 his supervisor asked him to resolve an accounting error of 75 cents in the computer usage accounts. Stoll traced the error to an unauthorized user who had apparently used nine seconds of computer time and not paid for it. Stoll eventually realized that the unauthorized user was a hacker who had acquired superuser access to the LBNL system by exploiting a vulnerability in the movemail function of the original GNU Emacs.
Early on, and over the course of a long weekend, Stoll rounded up fifty terminals, as well as teleprinters, mostly by “borrowing” them from the desks of co-workers away for the weekend. These he physically attached to the fifty incoming phone lines at LBNL. When the hacker dialed in that weekend, Stoll located the phone line used, which was coming from the Tymnet routing service. With the help of Tymnet, he eventually tracked the intrusion to a call center at MITRE, a defense contractor in McLean, Virginia. Over the next ten months, Stoll spent enormous amounts of time and effort tracing the hacker's origin. He saw that the hacker was using a 1200 baud connection and realized that the intrusion was coming through a telephone modem connection. Stoll's colleagues, Paul Murray and Lloyd Bellknap, assisted with the phone lines.
After returning his “borrowed” terminals, Stoll left a teleprinter attached to the intrusion line in order to see and record everything the hacker did. He watched as the hacker sought – and sometimes gained – unauthorized access to military bases around the United States, looking for files that contained words such as “nuclear” or “SDI” (Strategic Defense Initiative). The hacker also copied password files (in order to make dictionary attacks) and set up Trojan horses to find passwords. Stoll was amazed that on many of these high-security sites the hacker could easily guess passwords, since many system administrators had never bothered to change the passwords from their factory defaults. Even on military bases, the hacker was sometimes able to log in as “guest” with no password.
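As background on the dictionary-attack technique mentioned above: the attacker hashes each candidate word from a list and compares the result against entries copied from a password file. The Python sketch below is a minimal illustration of the idea only, not a reconstruction of the actual attack; the account names, the wordlist, and the use of SHA-256 are assumptions made to keep the example self-contained (Unix systems of that era stored passwords with the crypt(3) scheme).

import hashlib

# Hypothetical stolen password-file entries (username -> hash); not real data.
stolen_hashes = {
    "guest": hashlib.sha256(b"guest").hexdigest(),
    "operator": hashlib.sha256(b"manager").hexdigest(),
}

# A tiny wordlist; real attacks use large dictionaries, including
# factory-default passwords and the account names themselves.
wordlist = ["password", "guest", "manager", "field", "service"]

# Hash each candidate and compare it with each stolen hash.
for user, target in stolen_hashes.items():
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target:
            print(f"{user}: password is '{candidate}'")
            break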
This was one of the first — if not the first — documented cases of a computer break-in, and Stoll seems to have been the first to keep a daily logbook of the hacker's activities. Over the course of his investigation, Stoll contacted various agents at the Federal Bureau of Investigation (FBI), the Central Intelligence Agency (CIA), the National Security Agency (NSA) and the United States Air Force Office of Special Investigations (OSI). At the very beginning there was confusion as to jurisdiction and a general reluctance to share information; the FBI in particular was uninterested as no large sum of money was involved and no classified information host was accessed.
Studying his log book, Stoll saw that the hacker was familiar with VAX/VMS, as well as AT&T Unix. He also noted that the hacker tended to be active around the middle of the day, Pacific time. Eventually Stoll hypothesized that, since modem bills are cheaper at night and most people have school or a day job and would only have a lot of free time for hacking at night, the hacker was in a time zone some distance to the east, likely beyond the US East Coast.
With the help of Tymnet and agents from various agencies, Stoll found that the intrusion was coming from West Germany via satellite. The West German post office, the Deutsche Bundespost, had authority over the phone system there, and traced the calls to a university in Bremen. In order to entice the hacker to reveal himself, Stoll set up an elaborate hoax – known today as a honeypot – by inventing a fictitious department at LBNL that had supposedly been newly formed by an “SDI“ contract, also fictitious. When he realized the hacker was particularly interested in the faux SDI entity, he filled the “SDInet” account (operated by an imaginary secretary named ‘Barbara Sherwin’) with large files full of impressive-sounding bureaucratese. The ploy worked, and the Deutsche Bundespost finally located the hacker at his home in Hanover. The hacker's name was Markus Hess, and he had been engaged for some years in selling the results of his hacking to the Soviet Union’s intelligence agency, the KGB. There was ancillary proof of this when a Hungarian agent contacted the fictitious SDInet at LBL by mail, based on information he could only have obtained through Hess. Apparently this was the KGB's method of double-checking to see if Hess was just making up the information he was selling.
Stoll later flew to West Germany to testify at the trial of Hess.
References in popular culture
The book was chronicled in an episode of WGBH’s NOVA entitled “The KGB, the Computer, and Me”, which aired on PBS stations on October 3, 1990.
Another documentary, Spycatcher, was made by Yorkshire Television.
The number sequence mentioned in Chapter 48 has become a popular math puzzle, known as the Cuckoo's Egg, the Morris Number Sequence, or the look-and-say sequence.
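For readers unfamiliar with the puzzle, the look-and-say rule is simple enough to state in a few lines of code. The following Python sketch is illustrative only (the helper name look_and_say is ours, not from the book): each term is produced by reading the previous term aloud, one run of identical digits at a time.

def look_and_say(term: str) -> str:
    """Return the next look-and-say term: "1211" reads as one 1, one 2, two 1s -> "111221"."""
    result = []
    i = 0
    while i < len(term):
        j = i
        while j < len(term) and term[j] == term[i]:
            j += 1                           # advance past the run of identical digits
        result.append(str(j - i) + term[i])  # count followed by the digit itself
        i = j
    return "".join(result)

# Starting from "1", the first terms are 1, 11, 21, 1211, 111221, 312211.
term = "1"
for _ in range(6):
    print(term)
    term = look_and_say(term)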
In the summer of 2000 the name “Cuckoo’s Egg” was used to describe a file sharing hack attempt that substituted white noise or sound effects files for legitimate song files on Napster and other networks.
These events are referenced in Cory Doctorow’s speculative fiction short story “The Things that Make Me Weak and Strange Get Engineered Away”, as “(a) sysadmin who’d tracked a $0.75 billing anomaly back to a foreign spy-ring that was using his systems to hack his military”.
See also
Digital footprint
Karl Koch (hacker)
23 – a film made from the hackers' viewpoint.
References
External links
Image of 1st Edition Cover—Doubleday
Stalking the Wily Hacker The author's original article about the trap
Booknotes interview with Stoll on Cuckoo’s Egg, December 3, 1989
Reference to the book on Internet Storm Center
West German hackers use Columbia's Kermit software to break into dozens of US military computers and capture information for the KGB, Columbia University Computing History, 1986-1987 section.
1989 non-fiction books
Non-fiction crime books
Computer security books
Hacking (computer security)
Trojan horses
Doubleday (publisher) books
Works about computer hacking
|
27683514
|
https://en.wikipedia.org/wiki/John%20Shoch
|
John Shoch
|
John F. Shoch is an American computer scientist and venture capitalist who made significant contributions to the development of computer networking while at Xerox PARC, in particular to the development of the PARC Universal Protocol (PUP), an important predecessor of TCP/IP.
His contributions were significant enough to warrant including his name on the memorial plaque at Stanford University commemorating the "Birth of the Internet."
Career
Shoch attended Stanford, where he earned a B.A. in political science (1971); he later went on to earn an M.S. (1977) and a Ph.D. (1979) in Computer Science from Stanford as well. His Ph.D. thesis was entitled "Design and Performance of Local Computer Networks".
He joined Xerox in 1971, working at PARC, where his research interests included internetwork protocols, computer local area networks (in particular the Ethernet, which he helped develop), packet radio, programming languages, and various other aspects of distributed systems. His best-known work from that period, after the Ethernet and PUP, is on network worms; although the most famous incident involves one that ran out of control, they were actually early experiments in distributed computing over a network of loosely coupled machines.
In 1980, he became the assistant to the CEO of Xerox and director of the Corporate Policy Committee. In 1982, he moved on to become president of Xerox's Office Systems Division (which developed network-based office systems derived from research performed at PARC).
He left Xerox to become a venture capitalist with Asset Management Associates in 1985, and then became a founding general partner at Alloy Ventures in 1996.
He has also taught at Stanford University, is a member of the ACM and the IEEE, and serves as a trustee for the Computer History Museum.
Publications
David R. Boggs, John F. Shoch, Edward A. Taft, Robert M. Metcalfe, "Pup: An Internetwork Architecture", IEEE Transactions on Communications, Volume COM-28, Number 4, April, 1980, pp. 612–624.
John Shoch, "A note on Inter-Network Naming, Addressing, and Routing", IEN-19, 1978.
John Shoch, Jon Hupp, "The 'Worm' Programs - Early Experience with a Distributed Computation", Communications of the ACM, Volume 25, Number 3, March 1982, pp. 172–180. This paper has the unusual distinction of being cited by authors on a science fiction television program: Star Cops, episode #3 "Intelligent Listening for Beginners".
John Shoch, Yogen Dalal, R.C. Crane, and David D. Redell, "Evolution of the Ethernet Local Computer Network", IEEE Computer Magazine 15(8), 10-27, August 1982.
See also
History of the Internet
Internet pioneers
References
Further reading
Michael A. Hiltzik, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age (HarperBusiness, New York, 1999) pp. 289–299 covers Shoch, and the worm that ran out of control
Internet pioneers
American computer scientists
Stanford University alumni
Living people
Trustees of museums
Scientists at PARC (company)
Year of birth missing (living people)
|
326123
|
https://en.wikipedia.org/wiki/Windows%207
|
Windows 7
|
Windows 7 is a major release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on July 22, 2009, and became generally available on October 22, 2009. It is the successor to Windows Vista, released nearly three years earlier. It remained an operating system for use on personal computers, including home and business desktops, laptops, tablet PCs and media center PCs, until it was succeeded by Windows 8 in October 2012, a little over three years after its release.
The original release of Windows 7 received updates and technical support until April 9, 2013, after which installation of Service Pack 1 was required for users to continue receiving support and updates. Windows 7's server counterpart, Windows Server 2008 R2, was released at the same time. The last supported Windows release based on this operating system, Windows Embedded POSReady 7, was released on July 1, 2011. Extended support ended on January 14, 2020, over ten years after the release of Windows 7, after which the operating system ceased receiving further support for most users. A paid support program is available for enterprises, providing security updates for Windows 7 for up to three years after the official end of life. Windows Embedded POSReady 7, the last Windows 7 variant, continued to receive security updates until October 2021.
Windows 7 was intended to be an incremental upgrade to Microsoft Windows, addressing Windows Vista's poor critical reception while maintaining hardware and software compatibility. Windows 7 continued improvements on Windows Aero user interface with the addition of a redesigned taskbar that allows pinned applications, and new window management features. Other new features were added to the operating system, including libraries, the new file-sharing system HomeGroup, and support for multitouch input. A new "Action Center" was also added to provide an overview of system security and maintenance information, and tweaks were made to the User Account Control system to make it less intrusive. Windows 7 also shipped with updated versions of several stock applications, including Internet Explorer 8, Windows Media Player, and Windows Media Center.
Unlike Vista, Windows 7 received critical acclaim, with critics considering the operating system to be a major improvement over its predecessor because of its improved performance, its more intuitive interface, fewer User Account Control popups, and other improvements made across the platform. Windows 7 was a major success for Microsoft; even before its official release, pre-order sales for the operating system on the online retailer Amazon.com had surpassed previous records. In just six months, over 100 million copies had been sold worldwide, increasing to over 630 million licenses by July 2012. By January 2018, Windows 10 surpassed Windows 7 as the most popular version of Windows worldwide. About 12.76% of traditional PCs running Windows still run Windows 7. It remains popular in countries such as Syria, China, India, and Venezuela.
Development history
Originally, a version of Windows codenamed "Blackcomb" was planned as the successor to Windows XP and Windows Server 2003 in 2000. Major features were planned for Blackcomb, including an emphasis on searching and querying data and an advanced storage system named WinFS to enable such scenarios. However, an interim, minor release, codenamed "Longhorn," was announced for 2003, delaying the development of Blackcomb. By the middle of 2003, however, Longhorn had acquired some of the features originally intended for Blackcomb. After three major malware outbreaks—the Blaster, Nachi, and Sobig worms—exploited flaws in Windows operating systems within a short time period in August 2003, Microsoft changed its development priorities, putting some of Longhorn's major development work on hold while developing new service packs for Windows XP and Windows Server 2003. Development of Longhorn (Windows Vista) was also restarted, and thus delayed, in August 2004. A number of features were cut from Longhorn. Blackcomb was renamed Vienna in early 2006, and was later canceled in 2007 due to the scope of the project.
When released, Windows Vista was criticized for its long development time, performance issues, spotty compatibility with existing hardware and software at launch, changes affecting the compatibility of certain PC games, and unclear assurances by Microsoft that certain computers shipping with XP before launch would be "Vista Capable" (which led to a class-action lawsuit), among other critiques. As such, the adoption of Vista in comparison to XP remained somewhat low. In July 2007, six months following the public release of Vista, it was reported that the next version of Windows would then be codenamed Windows 7, with plans for a final release within three years. Bill Gates, in an interview with Newsweek, suggested that Windows 7 would be more "user-centric". Gates later said that Windows 7 would also focus on performance improvements. Steven Sinofsky later expanded on this point, explaining in the Engineering Windows 7 blog that the company was using a variety of new tracing tools to measure the performance of many areas of the operating system on an ongoing basis, to help locate inefficient code paths and to help prevent performance regressions. Senior Vice President Bill Veghte stated that Windows Vista users migrating to Windows 7 would not find the kind of device compatibility issues they encountered migrating from Windows XP. An estimated 1,000 developers worked on Windows 7. These were broadly divided into "core operating system" and "Windows client experience", in turn organized into 25 teams of around 40 developers on average.
In October 2008, it was announced that Windows 7 would also be the official name of the operating system. There has been some confusion over naming the product Windows 7, while versioning it as 6.1 to indicate its similar build to Vista and increase compatibility with applications that only check major version numbers, similar to Windows 2000 and Windows XP both having 5.x version numbers. The first external release to select Microsoft partners came in January 2008 with Milestone 1, build 6519. Speaking about Windows 7 on October 16, 2008, Microsoft CEO Steve Ballmer confirmed compatibility between Windows Vista and Windows 7, indicating that Windows 7 would be a refined version of Windows Vista.
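The version-number point above (marketed as "7" but versioned as 6.1) can be observed directly from any application. Below is a minimal sketch in Python, assuming it is run on a Windows 7 machine with a stock CPython install; the exact strings printed will vary by system.

```python
# Minimal sketch: show that Windows 7 identifies itself to applications as
# kernel version 6.1 even though it is marketed as "7".
import sys
import platform

if sys.platform == "win32":
    ver = sys.getwindowsversion()  # named tuple with major, minor, build fields
    print(f"Kernel version : {ver.major}.{ver.minor} (build {ver.build})")
    print(f"Marketing name : Windows {platform.win32_ver()[0]}")
    # On Windows 7 this reports 6.1 -- the compatibility-driven numbering
    # described above.
else:
    print("Run this on a Windows machine to see the reported version.")
```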
At PDC 2008, Microsoft demonstrated Windows 7 with its reworked taskbar. On December 27, 2008, the Windows 7 Beta was leaked onto the Internet via BitTorrent. According to a performance test by ZDNet, Windows 7 Beta beat both Windows XP and Vista in several key areas, including boot and shutdown time and working with files, such as loading documents. Other areas did not beat XP, including PC Pro benchmarks for typical office activities and video editing, which remain identical to Vista and slower than XP. On January 7, 2009, the x64 version of the Windows 7 Beta (build 7000) was leaked onto the web, with some torrents being infected with a trojan. At CES 2009, Microsoft CEO Steve Ballmer announced the Windows 7 Beta, build 7000, had been made available for download to MSDN and TechNet subscribers in the format of an ISO image. The stock wallpaper of the beta version contained a digital image of the Betta fish.
The release candidate, build 7100, became available for MSDN and TechNet subscribers, and Connect Program participants on April 30, 2009. On May 5, 2009, it became available to the general public, although it had also been leaked onto the Internet via BitTorrent. The release candidate was available in five languages and expired on June 1, 2010, with shutdowns every two hours starting March 1, 2010. Microsoft stated that Windows 7 would be released to the general public on October 22, 2009, less than three years after the launch of its predecessor. Microsoft released Windows 7 to MSDN and Technet subscribers on August 6, 2009. Microsoft announced that Windows 7, along with Windows Server 2008 R2, was released to manufacturing in the United States and Canada on July 22, 2009. Windows 7 RTM is build 7600.16385.090713-1255, which was compiled on July 13, 2009, and was declared the final RTM build after passing all Microsoft's tests internally.
Features
New and changed
Among Windows 7's new features are advances in touch and handwriting recognition, support for virtual hard disks, improved performance on multi-core processors, improved boot performance, DirectAccess, and kernel improvements. Windows 7 adds support for systems using multiple heterogeneous graphics cards from different vendors (Heterogeneous Multi-adapter), a new version of Windows Media Center, a Gadget for Windows Media Center, improved media features, the inclusion of the XPS Essentials Pack and Windows PowerShell, and a redesigned Calculator with multiline capabilities including Programmer and Statistics modes along with unit conversion for length, weight, temperature, and several others. Many new items have been added to the Control Panel, including ClearType Text Tuner, Display Color Calibration Wizard, Gadgets, Recovery, Troubleshooting, Workspaces Center, Location and Other Sensors, Credential Manager, Biometric Devices, System Icons, and Display. Windows Security Center has been renamed to Windows Action Center (Windows Health Center and Windows Solution Center in earlier builds), which encompasses both security and maintenance of the computer. ReadyBoost on 32-bit editions now supports up to 256 gigabytes of extra allocation. Windows 7 also supports images in RAW image format through the addition of Windows Imaging Component-enabled image decoders, which enables raw image thumbnails, previewing and metadata display in Windows Explorer, plus full-size viewing and slideshows in Windows Photo Viewer and Windows Media Center. Windows 7 also has a native TFTP client with the ability to transfer files to or from a TFTP server.
The taskbar has seen the biggest visual changes, where the old Quick Launch toolbar has been replaced with the ability to pin applications to the taskbar. Buttons for pinned applications are integrated with the task buttons. These buttons also enable Jump Lists to allow easy access to common tasks, and files frequently used with specific applications. The revamped taskbar also allows the reordering of taskbar buttons. To the far right of the system clock is a small rectangular button that serves as the Show desktop icon. By default, hovering over this button makes all visible windows transparent for a quick look at the desktop. In touch-enabled displays such as touch screens, tablet PCs, etc., this button is slightly (8 pixels) wider in order to accommodate being pressed by a finger. Clicking this button minimizes all windows, and clicking it a second time restores them.
Window management in Windows 7 has several new features: Aero Snap maximizes a window when it is dragged to the top, left, or right of the screen. Dragging windows to the left or right edges of the screen allows users to snap software windows to either side of the screen, such that the windows take up half the screen. When a user moves windows that were snapped or maximized using Snap, the system restores their previous state. Snap functions can also be triggered with keyboard shortcuts. Aero Shake hides all inactive windows when the active window's title bar is dragged back and forth rapidly.
Windows 7 includes 13 additional sound schemes, titled Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savanna, and Sonata. Internet Spades, Internet Backgammon and Internet Checkers, which were removed from Windows Vista, were restored in Windows 7. Users are able to disable or customize many more Windows components than was possible in Windows Vista. New additions to this list of components include Internet Explorer 8, Windows Media Player 12, Windows Media Center, Windows Search, and the Windows Gadget Platform. A new version of Microsoft Virtual PC, newly renamed as Windows Virtual PC, was made available for Windows 7 Professional, Enterprise, and Ultimate editions. It allows multiple Windows environments, including Windows XP Mode, to run on the same machine. Windows XP Mode runs Windows XP in a virtual machine, and displays applications within separate windows on the Windows 7 desktop. Furthermore, Windows 7 supports the mounting of a virtual hard disk (VHD) as normal data storage, and the bootloader delivered with Windows 7 can boot the Windows system from a VHD; however, this ability is only available in the Enterprise and Ultimate editions. The Remote Desktop Protocol (RDP) of Windows 7 is also enhanced to support real-time multimedia applications including video playback and 3D games, thus allowing use of DirectX 10 in remote desktop environments. The three-application limit, previously present in the Windows Vista and Windows XP Starter Editions, has been removed from Windows 7. All editions include some new and improved features, such as Windows Search, security features, and some features new to Windows 7 that originated within Vista. Optional BitLocker Drive Encryption is included with Windows 7 Ultimate and Enterprise. Windows Defender is included; Microsoft Security Essentials antivirus software is a free download. All editions include Shadow Copy, which System Restore uses every day or so to take an automatic "previous version" snapshot of user files that have changed. Backup and restore have also been improved, and the Windows Recovery Environment, installed by default, replaces the optional Recovery Console of Windows XP.
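One capability mentioned above, mounting a VHD as ordinary storage, can be scripted with the stock diskpart tool. The following is a hedged sketch in Python rather than a documented Microsoft procedure: the diskpart "select vdisk"/"attach vdisk" commands are standard, but the VHD path is a hypothetical placeholder and the script must be run from an elevated prompt.

```python
# Minimal sketch: attach an existing VHD as a data disk using diskpart's
# scripting mode ("diskpart /s <scriptfile>"). Requires administrator rights.
import subprocess
import tempfile

VHD_PATH = r"C:\vhds\example.vhd"  # hypothetical path -- point at a real VHD

script = f"select vdisk file={VHD_PATH}\nattach vdisk\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# diskpart reads the commands from the script file and attaches the VHD,
# after which it appears as a normal disk in Explorer and Disk Management.
subprocess.run(["diskpart", "/s", script_path], check=True)
```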
A new system known as "Libraries" was added for file management; users can aggregate files from multiple folders into a "Library." By default, libraries for categories such as Documents, Pictures, Music, and Video are created, consisting of the user's personal folder and the Public folder for each. The system is also used as part of a new home networking system known as HomeGroup; devices are added to the network with a password, and files and folders can be shared with all other devices in the HomeGroup, or with specific users. The default libraries, along with printers, are shared by default, but the personal folder is set to read-only access by other users, and the Public folder can be accessed by anyone.
Windows 7 includes improved globalization support through a new Extended Linguistic Services API to provide multilingual support (particularly in Ultimate and Enterprise editions). Microsoft also implemented better support for solid-state drives, including the new TRIM command, and Windows 7 is able to identify a solid-state drive uniquely. Native support for USB 3.0 is not included because of delays in the finalization of the standard. At WinHEC 2008 Microsoft announced that color depths of 30-bit and 48-bit would be supported in Windows 7 along with the wide color gamut scRGB (which for HDMI 1.3 can be converted and output as xvYCC). The video modes supported in Windows 7 are 16-bit sRGB, 24-bit sRGB, 30-bit sRGB, 30-bit with extended color gamut sRGB, and 48-bit scRGB.
For developers, Windows 7 includes a new networking API with support for building SOAP-based web services in native code (as opposed to .NET-based WCF web services), new features to simplify development of installation packages and shorten application install times. Windows 7, by default, generates fewer User Account Control (UAC) prompts because it allows digitally signed Windows components to gain elevated privileges without a prompt. Additionally, users can now adjust the level at which UAC operates using a sliding scale.
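The UAC "sliding scale" mentioned above is backed by a small set of registry values under the local-machine Policies\System key. Below is a minimal, read-only sketch in Python (Windows only); treating these three values as a direct stand-in for the Control Panel slider position is a simplification.

```python
# Minimal sketch: read the registry values behind the Windows 7 UAC slider.
# EnableLUA turns UAC on or off; ConsentPromptBehaviorAdmin and
# PromptOnSecureDesktop together determine how intrusive the prompts are.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    for name in ("EnableLUA", "ConsentPromptBehaviorAdmin", "PromptOnSecureDesktop"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(f"{name} = {value}")
        except FileNotFoundError:
            print(f"{name} is not set")
```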
Removed
Certain capabilities and programs that were a part of Windows Vista are no longer present or have been changed, resulting in the removal of certain functionalities; these include the classic Start Menu user interface, some taskbar features, Windows Explorer features, Windows Media Player features, Windows Ultimate Extras, Search button, and InkBall. Four applications bundled with Windows Vista—Windows Photo Gallery, Windows Movie Maker, Windows Calendar and Windows Mail—are not included with Windows 7 and were replaced by Windows Live-branded versions as part of the Windows Live Essentials suite.
Editions
Windows 7 is available in six different editions, of which the Home Premium, Professional, and Ultimate were available at retail in most countries, and as pre-loaded software on most new computers. Home Premium and Professional were aimed at home users and small businesses respectively, while Ultimate was aimed at enthusiasts. Each edition of Windows 7 includes all of the capabilities and features of the edition below it, and adds additional features oriented towards their market segments; for example, Professional adds additional networking and security features such as Encrypting File System and the ability to join a domain. Ultimate contained a superset of the features from Home Premium and Professional, along with other advanced features oriented towards power users, such as BitLocker drive encryption; unlike Windows Vista, there were no "Ultimate Extras" add-ons created for Windows 7 Ultimate. Retail copies were available in "upgrade" and higher-cost "full" version licenses; "upgrade" licenses require an existing version of Windows to install, while "full" licenses can be installed on computers with no existing operating system.
The remaining three editions were not available at retail, of which two were available exclusively through OEM channels as pre-loaded software. The Starter edition is a stripped-down version of Windows 7 meant for low-cost devices such as netbooks. In comparison to Home Premium, Starter has reduced multimedia functionality, does not allow users to change their desktop wallpaper or theme, disables the "Aero Glass" theme, does not have support for multiple monitors, and can only address 2GB of RAM. Home Basic was sold only in emerging markets, and was positioned in between Home Premium and Starter. The highest edition, Enterprise, is functionally similar to Ultimate, but is only sold through volume licensing via Microsoft's Software Assurance program.
All editions aside from Starter support both IA-32 and x86-64 architectures; Starter only supports 32-bit systems. Retail copies of Windows 7 are distributed on two DVDs: one for the IA-32 version and the other for x86-64. OEM copies include one DVD, depending on the processor architecture licensed. The installation media for consumer versions of Windows 7 are identical; the product key and corresponding license determine the edition that is installed. The Windows Anytime Upgrade service can be used to purchase an upgrade that unlocks the functionality of a higher edition, such as going from Starter to Home Premium, or from Home Premium to Ultimate. Most copies of Windows 7 contained only one license; in certain markets, a "Family Pack" version of Windows 7 Home Premium was also released for a limited time, which allowed upgrades on up to three computers. In certain regions, copies of Windows 7 were sold in, and could only be activated in, a designated region.
Support lifecycle
Support for Windows 7 without Service Pack 1 ended on April 9, 2013 (3 years, 8 months, and 18 days after release), requiring users to install the service pack in order to continue receiving updates and support. Microsoft ended the sale of new retail copies of Windows 7 in October 2014, and the sale of new OEM licenses for Windows 7 Home Basic, Home Premium, and Ultimate ended on October 31, 2014. OEM sales of PCs with Windows 7 Professional pre-installed ended on October 31, 2016. The sale of non-Professional OEM licenses was stopped on October 31, 2014. Support for Windows Vista ended on April 11, 2017, requiring users to upgrade in order to continue receiving updates and support.
Mainstream support for Windows 7 ended on January 13, 2015. Extended support for Windows 7 ended on January 14, 2020. In August 2019, Microsoft announced that it would offer 'free' extended security updates to some business users.
On September 7, 2018, Microsoft announced a paid "Extended Security Updates" service that will offer additional updates for Windows 7 Professional and Enterprise for up to three years after the end of extended support.
Variants of Windows 7 for embedded systems and thin clients have different support policies: Windows Embedded Standard 7 support ended in October 2020, while Windows Thin PC and Windows Embedded POSReady 7 were supported until October 2021. Windows Embedded Standard 7 and Windows Embedded POSReady 7 also receive Extended Security Updates for up to three years after their end of extended support dates. However, unlike the situation with Windows XP and its embedded editions, these embedded-edition updates cannot be installed on non-embedded Windows 7 editions with a simple registry hack. Instead, a more complex patching tool that allows the installation of pirated Extended Security Updates ended up being the only way for consumer variants to continue receiving updates. The Extended Security Updates service for Windows Embedded POSReady 7 will expire on October 14, 2024, marking the final end of the Windows NT 6.1 product line after 15 years, 2 months, and 17 days.
In March 2019, Microsoft announced that it would display notifications informing users of the upcoming end of support, and would direct them to a website urging them to purchase a Windows 10 upgrade or a new computer.
In August 2019, researchers reported that "all modern versions of Microsoft Windows" may be at risk for "critical" system compromise because of design flaws in hardware device drivers from multiple providers. In the same month, computer experts reported that the BlueKeep security vulnerability, which potentially affects older unpatched Microsoft Windows versions via the program's Remote Desktop Protocol and allows for the possibility of remote code execution, may now include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe) that affects all Windows versions from the older Windows XP version to the most recent Windows 10 versions; a patch to correct the flaw is currently available. As of January 15, 2020, Windows Update is blocked from running on Windows 7.
In September 2019, Microsoft announced that it would provide free security updates for Windows 7 on federally-certified voting machines through the 2020 United States elections.
System requirements
Additional requirements to use certain features:
Windows XP Mode (Professional, Ultimate and Enterprise): Requires an additional 1 GB of RAM and additional 15 GB of available hard disk space. The requirement for a processor capable of hardware virtualization has been lifted.
Windows Media Center (included in Home Premium, Professional, Ultimate and Enterprise), requires a TV tuner to receive and record TV.
Extent of hardware support
Physical memory
The maximum amount of RAM that Windows 7 supports varies depending on the product edition and on the processor architecture.
Processor limits
Windows 7 Professional and up support up to 2 physical processors (CPU sockets), whereas Windows 7 Starter, Home Basic, and Home Premium editions support only 1. Physical processors with either multiple cores, or hyper-threading, or both, implement more than one logical processor per physical processor. The x86 editions of Windows 7 support up to 32 logical processors; x64 editions support up to 256 (4 x 64).
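To see how a given machine sits relative to these limits, the installed physical memory and the logical processor count can be queried with the standard library alone. The following is a minimal, Windows-only sketch in Python; it reports totals but does not check them against any edition table.

```python
# Minimal sketch: query installed RAM (via GlobalMemoryStatusEx in kernel32)
# and the number of logical processors visible to the operating system.
import ctypes
import os

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)  # required before the call
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

print(f"Physical RAM      : {status.ullTotalPhys / 2**30:.1f} GiB")
print(f"Logical processors: {os.cpu_count()}")
```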
In January 2016, Microsoft announced that it would no longer support Windows platforms older than Windows 10 on any future Intel-compatible processor lines, citing difficulties in reliably allowing the operating system to operate on newer hardware. Microsoft stated that effective July 17, 2017, devices with Intel Skylake CPUs were only to receive the "most critical" updates for Windows 7 and 8.1, and only if they had been judged not to affect the reliability of Windows 7 on older hardware. For enterprise customers, Microsoft issued a list of Skylake-based devices "certified" for Windows 7 and 8.1 in addition to Windows 10, to assist them in migrating to newer hardware that can eventually be upgraded to 10 once they are ready to transition. Microsoft and its hardware partners provided special testing and support for these devices on 7 and 8.1 until the July 2017 date.
On March 18, 2016, in response to criticism from enterprise customers, Microsoft delayed the end of support and non-critical updates for Skylake systems to July 17, 2018, but stated that they would also continue to receive security updates through the end of extended support. In August 2016, citing a "strong partnership with our OEM partners and Intel", Microsoft retracted the decision and stated that it would continue to support Windows 7 and 8.1 on Skylake hardware through the end of their extended support lifecycle. However, the restrictions on newer CPU microarchitectures remain in force.
In March 2017, a Microsoft knowledge base article was published which implied that devices using Intel Kaby Lake, AMD Bristol Ridge, or AMD Ryzen processors would be blocked from using Windows Update entirely. In addition, official Windows 7 device drivers are not available for the Kaby Lake and Ryzen platforms.
Security updates released since March 2018 contain bugs which affect processors that do not support SSE2 extensions, including all Pentium III processors. Microsoft initially stated that it would attempt to resolve the issue, and prevented installation of the affected patches on these systems. However, on June 15, 2018, Microsoft retroactively modified its support documents to remove the promise that this bug would be resolved, replacing it with a statement suggesting that users obtain a newer processor. This effectively ends future patch support for Windows 7 on these systems.
Updates
Service Pack 1
Windows 7 Service Pack 1 (SP1) was announced on March 18, 2010. A beta was released on July 12, 2010. The final version was released to the public on February 22, 2011. At the time of release, it was not made mandatory. It was available via Windows Update, direct download, or by ordering the Windows 7 SP1 DVD. The service pack is on a much smaller scale than those released for previous versions of Windows, particularly Windows Vista.
Windows 7 Service Pack 1 adds support for Advanced Vector Extensions (AVX), a 256-bit instruction set extension for processors, and improves IKEv2 by adding additional identification fields such as E-mail ID to it. In addition, it adds support for Advanced Format 512e as well as additional Identity Federation Services. Windows 7 Service Pack 1 also resolves a bug related to HDMI audio and another related to printing XPS documents.
In Europe, the automatic nature of the BrowserChoice.eu feature was dropped in Windows 7 Service Pack 1 in February 2011 and remained absent for 14 months despite Microsoft reporting that it was still present, subsequently described by Microsoft as a "technical error." As a result, in March 2013, the European Commission fined Microsoft €561 million to deter companies from reneging on settlement promises.
Platform Update
The Platform Update for Windows 7 SP1 and Windows Server 2008 R2 SP1 was released on February 26, 2013 after a pre-release version had been released on November 5, 2012. It is also included with Internet Explorer 10 for Windows 7.
It includes enhancements to Direct2D, DirectWrite, Direct3D, Windows Imaging Component (WIC), Windows Advanced Rasterization Platform (WARP), Windows Animation Manager (WAM), XPS Document API, H.264 Video Decoder and JPEG XR decoder. However support for Direct3D 11.1 is limited as the update does not include DXGI/WDDM 1.2 from Windows 8, making unavailable many related APIs and significant features such as stereoscopic frame buffer, feature level 11_1 and optional features for levels 10_0, 10_1 and 11_0.
Disk Cleanup update
In October 2013, a Disk Cleanup Wizard addon was released that lets users delete outdated Windows updates on Windows 7 SP1, thus reducing the size of the WinSxS directory. This update backports some features found in Windows 8.
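A rough illustration of how such a cleanup is typically invoked follows. This is a hedged sketch, not Microsoft's documented procedure for the addon itself: the cleanmgr /sageset and /sagerun switches and DISM's /SPSuperseded option are standard tools, the "Windows Update Cleanup" handler only appears once the addon is installed, and the profile number 11 is arbitrary.

```python
# Minimal sketch: run Disk Cleanup with a stored profile that includes
# "Windows Update Cleanup", then (as an alternative route) remove files
# superseded by Service Pack 1 with DISM. Both commands must be run elevated.
import subprocess

# Interactively pick cleanup handlers and store the selection as profile 11 ...
subprocess.run(["cleanmgr", "/sageset:11"], check=True)
# ... then run the stored profile unattended.
subprocess.run(["cleanmgr", "/sagerun:11"], check=True)

# Alternative: drop update files superseded by SP1 from the WinSxS store.
subprocess.run(["dism", "/online", "/cleanup-image", "/spsuperseded"], check=True)
```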
Windows Management Framework 5.0
Windows Management Framework 5.0 includes updates to Windows PowerShell 5.0, Windows PowerShell Desired State Configuration (DSC), Windows Remote Management (WinRM), Windows Management Instrumentation (WMI). It was released on February 24, 2016 and was eventually superseded by Windows Management Framework 5.1.
Convenience rollup
In May 2016, Microsoft released a "Convenience rollup update for Windows 7 SP1 and Windows Server 2008 R2 SP1," which contains all patches released between the release of SP1 and April 2016. The rollup is not available via Windows Update, and must be downloaded manually. This package can also be integrated into a Windows 7 installation image.
Since October 2016, all security and reliability updates are cumulative. Downloading and installing updates that address individual problems is no longer possible, but the number of updates that must be downloaded to fully update the OS is significantly reduced.
Monthly update rollups (July 2016-January 2020)
In June 2018, Microsoft announced that it would be moving Windows 7 to a monthly update model beginning with updates released in September 2018, two years after it switched the rest of its supported operating systems to that model.
With the new update model, instead of updates being released as they became available, only two update packages were released on the second Tuesday of every month until Windows 7 reached its end of life: one package containing security and quality updates, and a smaller package that contained only the security updates. Users could choose which package they wanted to install each month. Later in the month, another package would be released which was a preview of the next month's security and quality update rollup.
Installing the preview rollup package released for Windows 7 on March 19, 2019, or any later rollup package, makes Windows more reliable; this change was made so Microsoft could continue to service the operating system while avoiding "version-related issues".
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows XP and Windows Me would end on July 31, 2019 (and for Windows 7 on January 22, 2020).
The last non-extended security update rollup packages were released on January 14, 2020, the last day that Windows 7 had extended support.
End of support (after January 14, 2020)
On January 14, 2020, Windows 7 support ended with Microsoft no longer providing security updates or fixes after that date, except for subscribers of the Windows 7 Extended Security Updates. However, there have been two updates that have been issued to non-ESU subscribers:
In February 2020, Microsoft released an update via Windows Update to fix a black wallpaper issue caused by the January 2020 update for Windows 7.
In June 2020, Microsoft released an update via Windows Update to roll out the new Chromium-based Microsoft Edge to Windows 7 and 8.1 machines that are not connected to Active Directory. Other users, e.g. those on Active Directory, can download Edge from Microsoft's website.
In a support document, Microsoft has stated that a full-screen upgrade warning notification would be displayed on Windows 7 PCs on all editions except the Enterprise edition after January 15. The notification does not appear on machines connected to Active Directory, machines in kiosk mode, or machines subscribed for Extended Security Updates.
Reception
Critical reception
Windows 7 received critical acclaim, with critics noting the increased usability and functionality when compared with its predecessor, Windows Vista. CNET gave Windows 7 Home Premium a rating of 4.5 out of 5 stars, stating that it "is more than what Vista should have been, [and] it's where Microsoft needed to go". PC Magazine rated it a 4 out of 5 saying that Windows 7 is a "big improvement" over Windows Vista, with fewer compatibility problems, a retooled taskbar, simpler home networking and faster start-up. Maximum PC gave Windows 7 a rating of 9 out of 10 and called Windows 7 a "massive leap forward" in usability and security, and praised the new Taskbar as "worth the price of admission alone." PC World called Windows 7 a "worthy successor" to Windows XP and said that speed benchmarks showed Windows 7 to be slightly faster than Windows Vista. PC World also named Windows 7 one of the best products of the year.
In its review of Windows 7, Engadget said that Microsoft had taken a "strong step forward" with Windows 7 and reported that speed is one of Windows 7's major selling points—particularly for the netbook sets. Laptop Magazine gave Windows 7 a rating of 4 out of 5 stars and said that Windows 7 makes computing more intuitive, offered better overall performance including a "modest to dramatic" increase in battery life on laptop computers. TechRadar gave Windows 7 a rating of 5 out of 5 stars, concluding that "it combines the security and architectural improvements of Windows Vista with better performance than XP can deliver on today's hardware. No version of Windows is ever perfect, but Windows 7 really is the best release of Windows yet." USA Today and The Telegraph also gave Windows 7 favorable reviews.
Nick Wingfield of The Wall Street Journal wrote, "Visually arresting," and "A pleasure." Mary Branscombe of Financial Times wrote, "A clear leap forward." Gizmodo wrote, "Windows 7 Kills Snow Leopard." Don Reisinger of CNET wrote, "Delightful." David Pogue of The New York Times wrote, "Faster." J. Peter Bruzzese and Richi Jennings of Computerworld wrote, "Ready."
Some Windows Vista Ultimate users have expressed concerns over Windows 7 pricing and upgrade options. Windows Vista Ultimate users wanting to upgrade from Windows Vista to Windows 7 had to either pay $219.99 to upgrade to Windows 7 Ultimate or perform a clean install, which requires them to reinstall all of their programs.
The changes to User Account Control on Windows 7 were criticized for being potentially insecure, as an exploit was discovered allowing untrusted software to be launched with elevated privileges by exploiting a trusted component. Peter Bright of Ars Technica argued that "the way that the Windows 7 UAC 'improvements' have been made completely exempts Microsoft's developers from having to do that work themselves. With Windows 7, it's one rule for Redmond, another one for everyone else." Microsoft's Windows kernel engineer Mark Russinovich acknowledged the problem, but noted that malware can also compromise a system when users agree to a prompt.
Sales
In July 2009, in only eight hours, pre-orders of Windows 7 at amazon.co.uk surpassed the demand which Windows Vista had in its first 17 weeks. It became the highest-grossing pre-order in Amazon's history, surpassing sales of the previous record holder, the seventh Harry Potter book. After 36 hours, 64-bit versions of Windows 7 Professional and Ultimate editions sold out in Japan. Two weeks after its release its market share had surpassed that of Snow Leopard, released two months previously as the most recent update to Apple's Mac OS X operating system. According to Net Applications, Windows 7 reached a 4% market share in less than three weeks; in comparison, it took Windows Vista seven months to reach the same mark. As of February 2014, Windows 7 had a market share of 47.49% according to Net Applications; in comparison, Windows XP had a market share of 29.23%.
On March 4, 2010, Microsoft announced that it had sold more than 90 million licenses.
By April 23, 2010, more than 100 million copies had been sold in six months, which made it Microsoft's fastest-selling operating system. As of June 23, 2010, Windows 7 had sold 150 million copies, which made it the fastest-selling operating system in history, with seven copies sold every second. Based on worldwide data taken during June 2010 from Windows Update, 46% of Windows 7 PCs ran the 64-bit edition of Windows 7. According to Stephen Baker of the NPD Group, during April 2010, 77% of PCs sold at retail in the United States were pre-installed with the 64-bit edition of Windows 7. As of July 22, 2010, Windows 7 had sold 175 million copies. On October 21, 2010, Microsoft announced that more than 240 million copies of Windows 7 had been sold. Three months later, on January 27, 2011, Microsoft announced total sales of 300 million copies of Windows 7. On July 12, 2011, the sales figure was refined to over 400 million end-user licenses and business installations. As of July 9, 2012, over 630 million licenses have been sold; this number includes licenses sold to OEMs for new PCs.
Antitrust concerns
As with other Microsoft operating systems, Windows 7 was studied by United States federal regulators who oversee the company's operations following the 2001 United States v. Microsoft Corp. settlement. According to status reports filed, the three-member panel began assessing prototypes of the new operating system in February 2008. Michael Gartenberg, an analyst at Jupiter Research, said, "[Microsoft's] challenge for Windows 7 will be how can they continue to add features that consumers will want that also don't run afoul of regulators."
In order to comply with European antitrust regulations, Microsoft proposed the use of a "ballot" screen containing download links to competing web browsers, thus removing the need for a version of Windows completely without Internet Explorer, as previously planned. Microsoft announced that it would discard the separate version for Europe and ship the standard upgrade and full packages worldwide, in response to criticism involving Windows 7 E and concerns from manufacturers about possible consumer confusion if a version of Windows 7 with Internet Explorer were shipped later, after one without Internet Explorer.
As with the previous version of Windows, an N version, which does not come with Windows Media Player, has been released in Europe, but only for sale directly from Microsoft sales websites and selected others.
See also
BlueKeep, a security vulnerability discovered in May 2019 that affected most Windows NT-based computers up to Windows 7
References
Further reading
External links
Windows 7 Service Pack 1 (SP1)
Windows 7 SP1 update history
2009 software
IA-32 operating systems
7
X86-64 operating systems
|
449223
|
https://en.wikipedia.org/wiki/MindVox
|
MindVox
|
MindVox was a famed early Internet service provider in New York City. A controversial sometime media darling — the service was referred to as "the Hells Angels of Cyberspace" — it was founded in 1991 by Bruce Fancher (Dead Lord) and Patrick Kroupa (Lord Digital), two former members of the legendary Legion of Doom hacker group. The system was at least partially online by March 1992, and open to the public in November of that year.
MindVox was the second ISP in New York City. Some controversy over this statement exists; however, by the time the first MindVox test message was posted to Usenet in 1992, customers of the rival service, Panix, had made nearly 6,000 posts. The test message was apparently posted by the infamous Phiber Optik, who would have been waiting for a Manhattan grand jury indictment at the time for hacking activities.
Another potential "start date" for the service would be the registration of the service's phantom.com domain, on 14 February 1992.
Founding and early years
The system's distinctive logo was its original ASCII art banner, which appeared on the text-only service's dial-up login page. MindVox was originally accessible only through telnet, ftp and direct dial-up. Its existence predates the invention of SSH and widespread use of the World Wide Web by several years. In later years, MindVox was also accessible via the web.
The parent company, Phantom Access Technologies, Inc. took its name from a hacking program written by Kroupa during his early teens, called Phantom Access.
MindVox functioned both as a private BBS service, containing its own dedicated discussion groups, termed "conferences" — though usually referred to as "forums" by users — as well as a provider of internet and Usenet access. By 1994 the subscriber base was at around 3,000. In many ways MindVox was a harder, edgier, New York incarnation of the WELL (a famous Northern Californian online community). While users were drawn from all over the world, the majority lived in the New York City area, and members who met through the conferences often became acquainted in person, either on their own, or through what were termed "VoxMeats" (a formal gathering of members whose double-entendre name was rumored to be well-earned).
Prominent MindVox "evangelists" included sci-fi author Charles Platt, who wrote about MindVox for Wired Magazine and featured it within his book Anarchy Online. MindVox also attracted (sometimes with the aid of free accounts) artists, writers and activists including Billy Idol, Wil Wheaton, Robert Altman, Douglas Rushkoff, John Perry Barlow, and Kurt Cobain. The level of hysteria and hype surrounding MindVox was so great that in 1993 executives at MTV who were using the system wanted to buy it outright and turn MindVox into a subsidiary of Viacom.
"Voices in My Head"
MindVox was deeply connected to the emerging non-academic hacker culture and ideas about the potentials of cyberspace, as can be seen in Patrick Kroupa's essay, Voices in my Head, MindVox: The Overture, which announced the upcoming opening of MindVox, and crossed the line into shaping an entire culture's mythology, seeing publication in magazines such as Wired, and extensive coverage throughout the media. Voices provided a compelling and sweeping first-person overview of the cultural forces that were at play in the hacker underground during the decade that pre-dated MindVox, considered by some the "Golden Age" of cyberspace.
More than a decade later, Voices remains one of the most read and widely distributed pieces of writing to ever emerge about the origins and possible futures of cyberspace. It was the spark that propelled Kroupa out of obscurity and into the pages of books, describing him as the Jim Morrison of cyberspace. Voices also helped turn MindVox from being just another ISP into a counter-cultural media darling meriting full-length features in magazines and newspapers such as Rolling Stone, Forbes, The Wall Street Journal, The New York Times and The New Yorker.
"Voice: Waffle ][+ the NeXTSTEP"
As with many things MindVox-related, the name of the software MindVox ran on, was both a play on words and an elaborate inside-joke. Voice: Waffle ][+ the NeXTSTEP (usually referred to simply as Voice, although it frequently was referred to by the plural Voices as well), was the name given to MindVox's conferencing system. Waffle refers to the original software that MindVox was based on, the ][+ pays homage to Kroupa and Fancher's hacker past and the use of Apple II computers; NeXTSTEP was a reference to the NeXT platform and operating system, with which MindVox was developed and launched.
As much as Patrick Kroupa's Voices focused the media and counter-culture spotlight on MindVox, Fancher's software was a source of tremendous attention in many MindVox-related stories, and it's unlikely that MindVox would have enjoyed its success without Voice. At the time MindVox launched, it was one of the first public-access ISPs in the world. The major technical difference between MindVox and every other system at the time was that, instead of expecting newcomers to understand Unix and meet a cryptic shell prompt, the entire system was accessible through Fancher's highly flexible interface.
The original Waffle software was written by Tom Dell, who was apparently part of MindVox from its inception. To this date there are Easter-eggs and cross-references on both MindVox and the system that Tom Dell became better known for in the late 1990s and beyond: Rotten.com. Going to Rotten's search page, and triple-clicking on the whitespace located between the Contact section and the gray bar at the bottom, reveals an inscrutable ibogaine rant.
By the mid-1990s the original Waffle software was nearly unrecognizable; Fancher had converted Voice to a client-server architecture, included a web interface, and added elaborate "power user" features which seem to have been added to address the evolving needs of the community, or due to a strange combination of drugs, nostalgia and pure whim. An example of the latter case is VoxChat, a proprietary chat system written for MindVox by employee David Schenfeld, which spun off into the commercial product ENTchat after MindVox shut down (it allowed MindVox to connect, via the Diversi-Dial chat protocol, to Diversi-Dial and its spinoff ENTchat), or in Kroupa's own words:
As of this writing there are roughly a dozen remaining DDIAL's running on Apple computers, Novation has long since gone Chapter 11, Bill Basham (the author of DDIAL) has gone back to being a full-time doctor, and one slightly disturbed person in the Phantom Access Group has written the world's only version of DDIAL that will run on Unix based machines and allow T1 connected, distributed sites with gigabytes of disk and thousands of users, to hook into Pig's Knuckle Idaho's very own 7 line DDIAL running at a blazing fast 300 baud. Why this was done is a question best left to mental health professionals.
The last sentence in the paragraph quoted above could be applied to many features present in the MindVox shell. It included advanced conferencing features interspersed with time-consuming, elaborate in-jokes with no commercial purpose whatsoever.
One example was the "Fling Screen": when inappropriate or extremely off-topic material was posted to a conference, moderators were unable to remove or destroy the message entirely, but they could move the message to the r0mPEr-RuM, a conference that was the collective garbage-dump of MindVox.
To this day the phantom.com MindVox archive continues its relationship with NeXT/NeXTSTEP, now in the form of Apple Computer's Mac OS X. Instead of using PHP, Perl or Active Server Pages, the entire site runs on Apple's WebObjects.
MindVox was a fusion of many strange parts, pieces and times. While Kroupa might be said to have provided the imaginative backstory of the "thoughtscape", Fancher was largely responsible for the software that made it all work. The synergy of Kroupa, Fancher and the user-base MindVox attracted was a major aspect of MindVox's rise to fame.
The MindVox shutdown
MindVox began to fall apart around 1996, when it ceased operating as an ISP, and shut off dial-up access. While the exact date of the shutdown is disputed, the New York Times lists the closure as occurring in July of that year. Ironically, this happened a few months after New York Magazine voted MindVox as one of the three best ISPs in New York.
A public message noted that free telnet access to the MindVox servers would still be available after the shutdown, but this did not last. While users were given the option to transfer their accounts to Interport Communications, the unique MindVox community did not survive.
Many different reasons have been given for the downfall, including increased competition from the arrival of large-scale providers like AT&T, possible legal difficulties, and the apparent incestuousness of the company and its core users. But none of the theories provided realistic answers as to why the final days of MindVox seem to be closer to The Great Gatsby, and Altered States, than a successful or unsuccessful technology corporation. Much of the legal paperwork from the time reads like something out of The Bonfire of the Vanities.
A 1999 article by Tom Higgins (username "Tomwhore" on the system), a user and one time employee of MindVox, summarized the turbulent closing thus:
So what happened to MindVox? In short its customers happened. Under the strain of pleasing a paying customer base, watching a hobby turn into an industry and simply getting caught up in its own hype, MindVox tumbled into a soap opera nose dive of sex, drugs and mismanagement.
By 1997 Patrick Kroupa had effectively disappeared from public view. The last days of MindVox are more the stuff of mythology than recorded fact, with different publications listing different dates for the shutdown. The New York Times and Wired were apparently unable to arrive at a consensus, with the Times listing the sale of MindVox's client-base and the closing of the system, in 1996, while Wired was still covering an apparently open and at least partially operational MindVox circa 1997.
Additional material suggests MindVox was never fully "closed" but simply closed to the public to become a private, invitation-only system. Rumors of a private, "inside" MindVox circulated, fueled by reprints of supposed internal MindVox messages from 1998 and 1999 that circulated on various mailing lists. The mindvox.com domain remained registered while, for a time, mail to phantom.com was redirected to Interport. The major discrepancy between the Times and Wired dates lends additional credence to the idea that MindVox continued, at least for a while, to support a community after its modem lines were turned off.
MindVox in the 21st century
During 2000, a variety of MindVox pieces went back online at phantom.com, and additional material was released by MindVox to textfiles.com. By 2001, Kroupa was back in the public eye and openly acknowledged being a lifelong heroin addict who had finally kicked heroin and cocaine through the use of the hallucinogenic drug ibogaine.
It is unclear whether mailing lists on MindVox continued in perpetuity from the 1990s, or began reappearing in 2000, but in addition to the Vox list it was hosting, by 2001 MindVox was a hub of activity in the fields of harm reduction, drug policy reform, and psychedelic drugs (most notably Ibogaine).
While the drug-related community surrounding MindVox: Ibogaine has taken on a completely new life, the interactive system itself, as well as the internal conferences and other services MindVox provided, has not returned (despite announcements and plans heralding the perpetually delayed rebirth of MindVox).
In 2005, MindVox was featured in two documentary films. Bruce Fancher is interviewed in BBS: The Documentary, and Patrick Kroupa plays himself in Ibogaine: Rite of Passage.
On December 9, 2005, the Transcriptions Project placed The Agrippa Files online, which included Matthew G. Kirschenbaum's "Hacking 'Agrippa': The Source of the Online Text," an excerpt from his book Mechanisms: New Media and the Forensic Imagination. The "Agrippa" discussed by Kirschenbaum was an unusual cyberpunk-influenced media project from 1992 by the science-fiction author William Gibson; its first public "leak" was to MindVox users in December of that year.
Within the chapter, Kirschenbaum references several personal letters to Patrick Kroupa, circa 2003, and reveals that Kroupa cooperated with him by placing all of MindVox back online "for an hour or 5" so that Kirschenbaum could view the context within which Agrippa was originally released. In discussing the service, Kirschenbaum referred to MindVox as "a kind of interface between what Alan Sondheim has aptly called the darknet and the clean, well lighted cyberspaces".
MindVox reloaded
MindVox re-opened in the form of a closed alpha on December 21, 2012.
External links
Official website
While the labyrinth of conferences, files and user interactions buried within the depths of MindVox, which provides a unique overview of the birth of the public internet, has never re-surfaced or been made publicly available, limited archives of some parts of the service remain online at:
https://web.archive.org/web/20050827230356/http://www.phantom.com/
http://www.textfiles.com/bbs/MINDVOX/
An IRC channel, EFnet #mindvox, created in the 1990s, has survived as a gathering place for some members of the older community.
References
Internet properties established in 1991
History of the Internet
Wikipedia articles with ASCII art
1991 establishments in New York City
|
3193582
|
https://en.wikipedia.org/wiki/Sociology%20of%20culture
|
Sociology of culture
|
The sociology of culture, and the related cultural sociology, concerns the systematic analysis of culture, usually understood as the ensemble of symbolic codes used by a member of a society, as it is manifested in the society. For Georg Simmel, culture referred to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history". Culture in the sociological field is analyzed as the ways of thinking and describing, acting, and the material objects that together shape a group of people's way of life.
Contemporary sociologists' approach to culture is often divided between a "sociology of culture" and "cultural sociology"—the terms are similar, though not interchangeable. The sociology of culture is an older concept, and considers some topics and objects as more or less "cultural" than others. By way of contrast, Jeffrey C. Alexander introduced the term cultural sociology, an approach that sees all, or most, social phenomena as inherently cultural at some level. For instance, a leading proponent of the "strong program" in cultural sociology, Alexander argues: "To believe in the possibility of cultural sociology is to subscribe to the idea that every action, no matter how instrumental, reflexive, or coerced [compared to] its external environment, is embedded to some extent in a horizon of affect and meaning." In terms of analysis, sociology of culture often attempts to explain some discretely cultural phenomena as a product of social processes, while cultural sociology sees culture as a component of explanations of social phenomena. As opposed to the field of cultural studies, cultural sociology does not reduce all human matters to a problem of cultural encoding and decoding. For instance, Pierre Bourdieu's cultural sociology has a "clear recognition of the social and the economic as categories which are interlinked with, but not reducible to, the cultural."
Development
Cultural sociology first emerged in Weimar, Germany, where sociologists such as Alfred Weber used the term Kultursoziologie (cultural sociology). Cultural sociology was then "reinvented" in the English-speaking world as a product of the "cultural turn" of the 1960s, which ushered in structuralist and postmodern approaches to social science. This type of cultural sociology may loosely be regarded as an approach incorporating cultural analysis and critical theory. In the beginning of the cultural turn, sociologists tended to use qualitative methods and hermeneutic approaches to research, focusing on meanings, words, artifacts and symbols. "Culture" has since become an important concept across many branches of sociology, including historically quantitative and model-based subfields, such as social stratification and social network analysis.
Early researchers
The sociology of culture grew from the intersection between sociology, as shaped by early theorists like Marx, Durkheim, and Weber, and anthropology where researchers pioneered ethnographic strategies for describing and analyzing a variety of cultures around the world.
Part of the legacy of the early development of the field is still felt in the methods (much of cultural sociological research is qualitative) in the theories (a variety of critical approaches to sociology are central to current research communities) and substantive focus of the field. For instance, relationships between popular culture, political control, and social class were early and lasting concerns in the field.
Karl Marx
As a major contributor to conflict theory, Marx argued that culture served to justify inequality. The ruling class, or the bourgeoisie, produce a culture that promotes their interests, while repressing the interests of the proletariat.
His most famous line to this effect is that "Religion is the opium of the people".
Marx believed that the "engine of history" was the struggle between groups of people with diverging economic interests, and thus that the economy determined the cultural superstructure of values and ideologies. For this reason, Marx is considered a materialist: he held that the economic (material) produces the cultural (ideal), a view that "stands Hegel on his head", since Hegel argued that the ideal produced the material.
Émile Durkheim
Durkheim held the belief that culture has many relationships to society which include:
Logical – Power over individuals belongs to certain cultural categories and beliefs, such as belief in God.
Functional – Certain rites and myths create and reinforce social order by fostering strong shared beliefs; the greater the number of people who believe strongly in these myths, the more the social order is strengthened.
Historical – Culture had its origins in society, and from those experiences came evolution into things such as classification systems.
Max Weber
Weber innovated the idea of a status group as a certain type of subculture. Status groups are based on things such as race, ethnicity, religion, region, occupation, gender, and sexual preference. These groups live a certain lifestyle based on different values and norms, and constitute a culture within a culture, hence the label subculture. Weber also advanced the idea that people are motivated by their material and ideal interests; the latter include such things as the desire to avoid going to hell. Weber further explains that people use symbols to express their spirituality, that symbols are used to express the spiritual side of real events, and that ideal interests are derived from symbols.
Georg Simmel
For Simmel, culture refers to "the cultivation of individuals through the agency of external forms which have been objectified in the course of history." Simmel presented his analyses within a context of "form" and "content".
The elements of a culture
Although no two cultures are exactly alike, they all share certain common characteristics.
A culture contains:
1. Social Organization: A culture is structured by organizing its members into smaller groups to meet its specific requirements. Social classes are ranked in order of importance (status) based on the culture's core values, for example money, occupation, education, and family.
2. Customs and Traditions: Rules of behavior enforced by the culture's ideas of right and wrong, whether in the form of customs, traditions, rules, or written laws.
3. Symbols: Anything that carries a particular meaning recognized by people who share the same culture.
4. Norms: Rules and expectations by which a society guides the behavior of its members. The two types of norms are mores and folkways. Mores are norms that are widely observed and have a great moral significance. Folkways are norms for routine, casual interaction.
5. Religion: The answers a culture provides to basic questions about the meaning of life and its values.
6. Language: A system of symbols that allows people to communicate with one another.
7. Arts and Literature: Products of human imagination made into art, music, literature, stories, and dance.
8. Forms of Government: How the culture distributes power: who keeps order within the society, who protects it from danger, and who provides for people's needs. Governments may take forms such as democracy, republic, or dictatorship.
9. Economic Systems: What to produce, how to produce it, and for whom; how people use their limited resources to satisfy their wants and needs. Economies may be classified as traditional, market, command, or mixed.
10. Artifacts: Distinct material objects, such as architecture, technologies, and artistic creations.
11. Social institutions: Patterns of organization and relationships regarding governance, production, socializing, education, knowledge creation, arts, and relating to other cultures.
Anthropology
In the anthropological sense, culture refers to a society's values and ideas considered apart from the influence of the material world.
Culture is like the shell of a lobster. Human nature is the organism living inside of that shell. The shell, culture, identifies the organism, or human nature. Culture is what sets human nature apart, and helps direct the life of human nature.
Anthropologists lay claim to the establishment of modern uses of the culture concept as defined by Edward Burnett Tylor in the mid-19th century.
Bronisław Malinowski
Malinowski collected data from the Trobriand Islands. Descent groups across the islands claim parts of the land, and to back up those claims they tell myths of how an ancestress started a clan and how the clan descends from that ancestress. Malinowski's observations followed the line of research opened by Durkheim.
Alfred Reginald Radcliffe-Brown
Radcliffe-Brown put himself in the culture of the Andaman Islanders. His research showed that group solidification among the islanders is based on music and kinship, and the rituals that involve the use of those activities. In the words of Radcliffe-Brown, "Ritual fortifies Society".
Marcel Mauss
Marcel Mauss made many comparative studies of religion, magic, law and morality in occidental and non-occidental societies, developed the concept of the total social fact, and argued that reciprocity is the universal logic of cultural interaction.
Claude Lévi-Strauss
Lévi-Strauss, drawing simultaneously on the sociological and anthropological positivism of Durkheim, Mauss, Malinowski and Radcliffe-Brown, on Marxist economics and sociology, on Freudian and Gestalt psychology, and on the structural linguistics of Saussure and Jakobson, produced major studies of myth, kinship, religion, ritual, symbolism, magic, ideology (pensée sauvage), knowledge, art and aesthetics, applying methodological structuralism to his investigations. He searched for the universal principles of human thought as a way of explaining social behaviors and structures.
Major areas of research
Theoretical constructs in Bourdieu's sociology of culture
French sociologist Pierre Bourdieu's influential model of society and social relations has its roots in Marxist theories of class and conflict. Bourdieu characterizes social relations in the context of what he calls the field, defined as a competitive system of social relations functioning according to its own specific logic or rules. The field is the site of struggle for power between the dominant and subordinate classes. It is within the field that legitimacy—a key aspect defining the dominant class—is conferred or withdrawn.
Bourdieu's theory of practice is practical rather than discursive, embodied as well as cognitive, and durable though adaptive. A central concern that sets the agenda in Bourdieu's theory of practice is how action follows regular statistical patterns without being the product of conformity to rules, norms and/or conscious intention. To address this concern, Bourdieu introduces the concepts of habitus and field. Habitus accounts for the mutually penetrating realities of individual subjectivity and societal objectivity as products of social construction; it is employed to transcend the subjective and objective dichotomy.
Cultural change
The belief that culture is symbolically coded and can thus be taught from one person to another means that cultures, although bounded, can change. Cultures are both predisposed to change and resistant to it. Resistance can come from habit, religion, and the integration and interdependence of cultural traits.
Cultural change can have many causes, including: the environment, inventions, and contact with other cultures.
Several understandings of how cultures change come from anthropology. For instance, in diffusion theory, the form of something moves from one culture to another, but not its meaning. For example, the ankh symbol originated in Egyptian culture but has diffused to numerous cultures. Its original meaning may have been lost, but it is now used by many practitioners of New Age religion as an arcane symbol of power or life forces. A variant of the diffusion theory, stimulus diffusion, refers to an element of one culture leading to an invention in another.
Contact between cultures can also result in acculturation. Acculturation has different meanings, but in this context refers to replacement of the traits of one culture with those of another, such as what happened with many Native American Indians. Related processes on an individual level are assimilation and transculturation, both of which refer to adoption of a different culture by an individual.
Wendy Griswold outlined another sociological approach to cultural change. Griswold points out that it may seem as though culture comes from individuals – which, for certain elements of cultural change, is true – but there is also the larger, collective, and long-lasting culture that cannot have been the creation of single individuals as it predates and post-dates individual humans and contributors to culture. The author presents a sociological perspective to address this conflict.
Sociology suggests an alternative to both the unsatisfying "it has always been that way" view at one extreme and the sociological individual genius view at the other. This alternative posits that culture and cultural works are collective, not individual, creations. We can best understand specific cultural objects... by seeing them not as unique to their creators but as the fruits of collective production, fundamentally social in their genesis. (p. 53)
In short, Griswold argues that culture changes through the contextually dependent and socially situated actions of individuals; macro-level culture influences the individual who, in turn, can influence that same culture. The logic is a bit circular, but illustrates how culture can change over time yet remain somewhat constant.
It is, of course, important to recognize here that Griswold is talking about cultural change and not the actual origins of culture (as in, "there was no culture and then, suddenly, there was"). Because Griswold does not explicitly distinguish between the origins of cultural change and the origins of culture, it may appear as though Griswold is arguing here for the origins of culture and situating these origins in society. This is neither accurate nor a clear representation of sociological thought on this issue. Culture, just like society, has existed since the beginning of humanity (humans being social and cultural). Society and culture co-exist because humans have social relations and meanings tied to those relations (e.g. brother, lover, friend). Culture as a super-phenomenon has no real beginning except in the sense that humans (homo sapiens) have a beginning. This, then, makes the question of the origins of culture moot – it has existed as long as we have, and will likely exist as long as we do. Cultural change, on the other hand, is a matter that can be questioned and researched, as Griswold does.
Culture theory
Culture theory, developed in the 1980s and 1990s, sees audiences as playing an active rather than passive role in relation to mass media. One strand of research focuses on the audiences and how they interact with media; the other strand of research focuses on those who produce the media, particularly the news.
Frankfurt School
Walter Benjamin
Theodor W. Adorno
Herbert Marcuse
Erich Fromm
Current research
Computer-mediated communication as culture
Computer-mediated communication (CMC) is the process of sending messages—primarily, but not limited to, text messages—through the direct use by participants of computers and communication networks. Restricting the definition to the direct use of computers excludes communication technologies that rely upon computers merely for switching (such as telephony or compressed video) but do not require the users to interact directly with the computer system via a keyboard or similar interface. To be mediated by computers in the sense of this project, the communication must be carried out by participants fully aware of their interaction with the computer technology in the process of creating and delivering messages. Given the current state of computer communications and networks, this limits CMC to primarily text-based messaging, while leaving open the possibility of incorporating sound, graphics, and video images as the technology becomes more sophisticated.
Cultural institutions
Cultural activities are institutionalised; the focus on institutional settings leads to the investigation "of activities in the cultural sector, conceived as historically evolved societal forms of organising the conception, production, distribution, propagation, interpretation, reception, conservation and maintenance of specific cultural goods". Cultural Institutions Studies is therefore a specific approach within the sociology of culture.
Key figures
Key figures in today's cultural sociology include: Julia Adams, Jeffrey Alexander, John Carroll, Diane Crane, Paul DiMaggio, Henning Eichberg, Ron Eyerman, Sarah Gatson, Andreas Glaeser, Wendy Griswold, Eva Illouz, Karin Knorr-Cetina, Michele Lamont, Annette Lareau, Stjepan Mestrovic, Philip Smith, Margaret Somers, Yasemin Soysal, Dan Sperber, Lynette Spillman, Ann Swidler, Diane Vaughan, and Viviana Zelizer.
See also
Communication studies
Cultural anthropology
Cultural Sociology (journal)
Cultural studies
Culture
Sociology
Sociology of literature
Sociomusicology
Taste (sociology)
References
Citations
Sources
Groh, Arnold. 2019. Theories of Culture. London, England: Routledge.
Stark, Rodney. 2007. Sociology: Tenth Edition. Belmont, CA: Thomson Learning, Inc.
Walker, Gavin. 2001. Society and Culture in Sociological and Anthropological Tradition. Thousand Oaks, CA: Sage Publications.
Lawley, Elizabeth. 1994. The Sociology of Culture in Computer-Mediated Communication: An Initial Exploration.
Swartz, David. 1997. Culture & Power: The Sociology of Pierre Bourdieu. Chicago, IL: University of Chicago Press.
Griswold, Wendy. 2004. Cultures and Societies in a Changing World. Thousand Oaks, CA: Pine Forge Press.
La logica dei processi culturali. Jürgen Habermas tra filosofia e sociologia. Genova: Edizioni ECIG.
"Culture and Public Action: Further Reading." Welcome to Culture and Public Action. Web. 23 Feb. 2012. <http://www.cultureandpublicaction.org/conference/s_o_d_sociologyanddevelopment.htm>.
External links
|
26122688
|
https://en.wikipedia.org/wiki/Datacap
|
Datacap
|
Datacap (an IBM company), formerly a privately owned company, develops and sells computer software and services. Datacap's first product, Paper Keyboard, was a "forms processing" product that shipped in 1989. In August 2010, IBM announced that it had acquired Datacap for an undisclosed amount.
Datacap sells products through a value-added distribution network worldwide. The software is classified as "enterprise software", meaning that it requires trained professionals to install and configure. Although the company has focused on providing solutions for scanning paper documents, more recently company materials have emphasized customer requirements for handling electronic documents ("eDocs"), that is, documents received into an organization electronically (usually by email).
Datacap claims that its software is unique because of the rules engine ("Rulerunner") used for processing inbound documents, which performs the image processing (deskew, noise removal, etc.), optical character recognition (OCR), intelligent character recognition (ICR), validation, and export-release formatting of extracted data to target ERP and line-of-business applications.
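Such a capture workflow is, in outline, a staged pipeline. The sketch below illustrates the general idea only; every function and field name here is a hypothetical placeholder and does not correspond to Datacap's or Rulerunner's actual APIs.

```python
# Hypothetical sketch of a document-capture pipeline of the kind described
# above (image cleanup -> recognition -> validation -> export). The names are
# placeholders, not Datacap/Rulerunner APIs.

def deskew(image: bytes) -> bytes:
    """Stand-in for the image-processing step (deskew, noise removal)."""
    return image

def recognize(image: bytes) -> dict:
    """Stand-in for OCR/ICR, returning extracted fields."""
    return {"invoice_number": "12345", "total": "100.00"}

def validate(fields: dict) -> bool:
    """Stand-in for business-rule validation of the extracted data."""
    return bool(fields.get("invoice_number")) and float(fields["total"]) >= 0

def export(fields: dict) -> None:
    """Stand-in for export/release to a target ERP or line-of-business system."""
    print("exporting", fields)

def process(document: bytes) -> None:
    cleaned = deskew(document)
    fields = recognize(cleaned)
    if validate(fields):
        export(fields)

process(b"scanned-page-bytes")
```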
See also
List of mergers and acquisitions by IBM
References
Companies established in 1988
Software companies based in New York (state)
Optical character recognition
Document management systems
IBM acquisitions
2010 mergers and acquisitions
Software companies of the United States
|
1498600
|
https://en.wikipedia.org/wiki/Leonard%20Bosack
|
Leonard Bosack
|
Leonard X. Bosack (born 1952) is a co-founder of Cisco Systems, an American-based multinational corporation that designs and sells consumer electronics, networking and communications technology, and services. His net worth is approximately $200 million. He was awarded the Computer Entrepreneur Award in 2009 for co-founding Cisco Systems and pioneering and advancing the commercialization of routing technology and the profound changes this technology enabled in the computer industry.
He is largely responsible for pioneering the widespread commercialization of local area network (LAN) technology to connect geographically disparate computers over a multiprotocol router system, which was an unheard-of technology at the time. In 1990, Cisco's management fired Cisco co-founder Sandy Lerner and Bosack resigned. Bosack later served as the CEO of XKL LLC, a privately funded engineering company which explores and develops optical networks for data communications.
Background
Born in Pennsylvania in 1952 to a Polish Catholic family, Bosack graduated from La Salle College High School in 1969. In 1973, Bosack graduated from the University of Pennsylvania School of Engineering and Applied Science, and joined the Digital Equipment Corporation (DEC) as a hardware engineer. In 1979, he was accepted into Stanford University, where he began to study computer science. During his time at Stanford, he was credited as a support engineer on a 1981 project to connect all of Stanford's mainframes, minis, LISP machines, and Altos.
His contribution was to work on the network router that allowed the computer network under his management to share data from the Computer Science Lab with the Business School's network. He met his wife Sandra Lerner at Stanford, where she was the manager of the Business School lab, and the couple married in 1980. Together in 1984, they started Cisco in Menlo Park.
Cisco
In 1984, Bosack co-founded Cisco Systems with his then partner (and now ex-wife) Sandy Lerner. Their aim was to commercialize the Advanced Gateway Server, a revised version of the Stanford router built by William Yeager and Andy Bechtolsheim. Bosack and Lerner designed and built routers in their house and experimented using Stanford's network. Initially, Bosack and Lerner went to Stanford with a proposition to start building and selling the routers, but the school refused. It was then that they founded their own company, naming it "Cisco" after nearby San Francisco. It is widely reported that Lerner and Bosack designed the first router so that they could connect the incompatible computer systems of the Stanford offices they worked in and send letters to each other; however, this account is a legend and is untrue.
Cisco's product was developed in their garage and was sold beginning in 1986 by word of mouth. In its first month alone, Cisco was able to land contracts worth more than $200,000. The company produced revolutionary technology such as the first multiport router-specific line cards and sophisticated routing protocols, giving it dominance over the marketplace. Cisco went public in 1990, the same year that Bosack resigned. Bosack and Lerner walked away from Cisco with $170 million after being forced out by the professional managers the firm's venture capitalists had brought in. Bosack and Lerner divorced in the early 1990s.
In 1996, Cisco's revenues amounted to $5.4 billion, making it one of Silicon Valley's biggest success stories. In 1998, the company was valued at over $6 billion and controlled over three-quarters of the router business.
Achievements
Along with co-founding Cisco Systems, Bosack is largely responsible for first pioneering the widespread commercialization of local area network (LAN) technology. He and his fellow staff members at Stanford successfully linked the university's 5,000 computers across the campus. This contribution is significant because, at that time, the technology LAN relied on was unheard of; their challenge had been to overcome incompatibility issues in order to create the first true LAN system.
Bosack has also held significant technical leadership roles at AT&T Bell Labs and the Digital Equipment Corporation. After earning his master's degree in computer science from Stanford University, he became Director of Computer Facilities for the university's Department of Computer Science. He became a key contributor to the emerging ARPAnet, which was the beginning of today's Internet.
Bosack's most recent technological advancements include his creation of new in-line fiber optic amplification systems capable of achieving an unprecedentedly low data transmission latency of 6.071 milliseconds (fiber plus equipment latency; fiber latency alone would be at least 4.106 milliseconds based on the speed of light) over 1231 kilometers of fiber, which is roughly the distance between Chicago and New York City. Bosack was inspired by his belief that by leveraging the inherent, but often untapped, physics of fiber optic components, data transmission speeds can be increased with devices that use less power, take up less space and require less cooling.
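The quoted fiber-only floor can be checked with a quick back-of-the-envelope calculation; the sketch below simply divides the stated distance by the vacuum speed of light, as the parenthetical does (light in real fiber travels more slowly, so practical latency is higher).

```python
# Back-of-the-envelope check of the quoted minimum fiber latency.
SPEED_OF_LIGHT_KM_PER_S = 299_792.458  # speed of light in vacuum
distance_km = 1231                     # roughly Chicago to New York City

latency_ms = distance_km / SPEED_OF_LIGHT_KM_PER_S * 1000
print(f"{latency_ms:.3f} ms")  # prints 4.106 ms, matching the figure above
```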
Charity
Together, Bosack and Lerner have a charitable foundation and trust funded with 70% of the money from the sale of their Cisco stock. The foundation is recognized for financing a wide range of animal welfare and science projects, such as The Center for Conservation Biology at the University of Washington. It has also purchased an English manor house, Chawton House, once owned by Jane Austen's brother, which has become a research center on 18th- and 19th-century women writers.
Controversy
In December 2001, a Mercury News article noted that a Stanford web site credits only Bosack and Lerner with developing the device that allowed computer networks to communicate intelligently with one another, despite Cisco spokeswoman Jeanette Gibson's claim that it was a group effort. Due to the nature of the collaboration, it cannot be determined who did what during the process.
References
Living people
American computer businesspeople
American philanthropists
American technology writers
American Roman Catholics
American people of Polish descent
Businesspeople in information technology
Stanford University staff
1952 births
Internet pioneers
American technology company founders
Wharton School of the University of Pennsylvania alumni
Cisco people
Computer networking people
|
2471521
|
https://en.wikipedia.org/wiki/Veritas%20Cluster%20Server
|
Veritas Cluster Server
|
Veritas Cluster Server (rebranded as Veritas Infoscale Availability, also known as VCS, and also sold bundled in the SFHA product) is high-availability cluster software for Unix, Linux and Microsoft Windows computer systems, created by Veritas Technologies. It provides application cluster capabilities to systems running other applications, including databases, network file sharing, and electronic commerce websites.
Description
High-availability clusters (HAC) improve application availability by failing or switching them over in a group of systems—as opposed to high-performance clusters, which improve application performance by running them on multiple systems simultaneously.
Most Veritas Cluster Server implementations attempt to build availability into a cluster, eliminating single points of failure by making use of redundant components such as multiple network cards and storage area network connections, in addition to the use of VCS.
Similar products include Fujitsu PRIMECLUSTER, IBM PowerHA System Mirror, HP Serviceguard, IBM Tivoli System Automation for Multiplatforms (SA MP), Linux-HA, OpenSAF, Microsoft Cluster Server (MSCS), NEC ExpressCluster, Red Hat Cluster Suite, SteelEye LifeKeeper and Sun Cluster. VCS is one of the few products in the industry that provides both high availability and disaster recovery across all major operating systems while supporting 40+ major application/replication technologies out of the box.
VCS is mostly user-level clustering software; most of its processes are normal system processes on the systems it operates on and have no special access to the operating system or kernel functions in the host systems. However, the interconnect (heartbeat) technology used with VCS is a proprietary Layer 2 Ethernet-based protocol that runs in kernel space using kernel modules. The group membership protocol that runs on top of the interconnect heartbeat protocol is also implemented in the kernel. In case of a split brain, the 'fencing' module does the work of arbitration and data protection. Fencing, too, is implemented as a kernel module.
The basic architecture of VCS includes LLT (Low Latency Transport), GAB (Group Membership Services and Atomic Broadcast), HAD (High Availability Daemon), and cluster agents.
LLT lies at the bottom of the architecture and acts as a conduit between GAB and the underlying network. It receives information from GAB and transmits it to the intended participant nodes. Although the LLT module on one node interacts with every other node in the cluster, communication is always 1:1 between individual nodes; if information needs to be sent to all nodes of, say, a six-node cluster, six separate packets are sent, each targeted at an individual machine's interconnect.
GAB determines which machines are part of the cluster and the minimum number of nodes that must be present and working for the cluster to form (this minimum is called the seed number). GAB acts as an abstraction layer upon which other cluster services can be plugged in. Each of these cluster services must register with GAB and is assigned a predetermined unique port name (a single letter). GAB has both a client and a server component: the client component is used to send information through the GAB layer and registers with the server component as port "a", while HAD registers with GAB as port "h". The server portion of GAB interacts with the GAB modules on other cluster nodes to maintain membership information for the different ports. This membership information conveys whether all the cluster modules corresponding to the ports (for example GAB on port "a", HAD on port "h") on the different cluster nodes are healthy and able to communicate with each other as intended.
The HAD layer is where high availability for applications is actually provided; it is where applications plug into the high-availability framework. HAD registers with GAB on port "h". The HAD module running on one node communicates with the HAD modules running on the other cluster nodes to ensure that all cluster nodes have the same information about cluster configuration and status.
For an application to plug into the high-availability framework, it needs cluster agent software. Cluster agents can be generic or specific to a type of application; for example, for Oracle to use the HA (high availability) framework in VCS, an Oracle agent is needed. VCS at its base is generic cluster software and does not know how different applications are started, stopped, monitored or cleaned up; that knowledge is coded into the agent, which can be thought of as a translator between the application and the HA framework. For example, if HAD needs to stop an Oracle database, it will not by default know how to do so, but if an Oracle DB agent is running, HAD asks the agent to stop the database, and the agent issues the commands specific to that database version and configuration and monitors the result.
Important files where cluster configuration information is kept (an illustrative sketch of their typical contents follows this list):
LLT : /etc/llttab, /etc/llthosts
GAB : /etc/gabtab
HAD (VCS) : /etc/VRTSvcs/conf/config/main.cf, /etc/VRTSvcs/conf/config/types.cf, /etc/VRTSvcs/conf/sysname
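As a rough illustration only, the excerpt below sketches what these files might contain for a minimal two-node cluster; the exact syntax (device names, seed counts, resource attributes) varies by operating system and VCS version, so this should not be read as a definitive configuration.

```
# /etc/llthosts - maps LLT node IDs to node names
0 node1
1 node2

# /etc/llttab - node identity, cluster ID and heartbeat links (Linux-style)
set-node node1
set-cluster 1001
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -

# /etc/gabtab - start GAB, seeding the cluster once two nodes are present
/sbin/gabconfig -c -n2

# /etc/VRTSvcs/conf/config/main.cf - cluster, systems and one service group
include "types.cf"
cluster demo_cluster ( )
system node1 ( )
system node2 ( )
group app_sg (
    SystemList = { node1 = 0, node2 = 1 }
    AutoStartList = { node1 }
    )
    IP app_ip (
        Device = eth0
        Address = "192.168.1.10"
        )
```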
Veritas Cluster Server for Windows is available as a standalone product. It is also sold bundled with Storage Foundation as Storage Foundation HA for Windows; Veritas Cluster Server for AIX, HP-UX, Linux, and Solaris is supplied as a standalone product.
The Veritas Cluster Server product includes VCS Management Console, a multi-cluster management software that automates disaster recovery across data centers.
Release history
Veritas Cluster Server 4 (End of support July 31, 2011)
Veritas Cluster Server 5.0 (End of support August 31, 2014)
Veritas Cluster Server 5.1 (End of support November 30, 2016)
Veritas Cluster Server 6.0, released
Veritas Infoscale Availability 7.0 (formerly Veritas Cluster Server)
See also
High-availability cluster
Solaris Cluster
Computer cluster
Symantec Operations Readiness Tools (SORT)
References
External links
Veritas Cluster Server documentation
Veritas Cluster Server agent finder and downloads
Veritas Cluster Server homepage
Veritas Cluster Server Management Console
Symantec Operations Readiness Tools (SORT)
High-availability cluster computing
Cluster computing
|
41191042
|
https://en.wikipedia.org/wiki/Physical%20Security%20Interoperability%20Alliance
|
Physical Security Interoperability Alliance
|
The Physical Security Interoperability Alliance (PSIA) is a global consortium of more than 65 physical security manufacturers and systems integrators focused on promoting interoperability of IP-enabled security devices and systems across the physical security ecosystem as well as enterprise and building automation systems.
The PSIA promotes and develops open specifications, relevant to networked physical security technology, across all industry segments including video, storage, analytics, intrusion, and access control. Its work is analogous to that of groups and consortia that have developed standardized methods allowing different types of equipment to seamlessly connect and share data, such as USB and Bluetooth.
Specifications
The PSIA has created seven complementary specifications that enable systems and devices to interoperate and exchange data and intelligence.
Three of these specifications are the “reference works” for the family of specifications. These are the Service Model; PSIA Common Metadata & Event Model; and the PSIA Common Security Model. These “common models” define and describe various security events as well as computer network and software protocols relevant to security devices and systems.
The other four PSIA specifications correspond to domains in the security ecosystem. These are the IP Media Device specification, Recording and Content Management specification, Video Analytics specification and Area Control specification. These base their communications about security events on the PSIA Common Metadata & Event Model, one of the reference works described above.
PSIA specifications are expected to become more critical to security system architecture as major users integrate video surveillance, access and area control, mobile devices and local and cloud-based storage across a common information technology platform. PSIA has a liaison with the International Electrotechnical Commission on two specifications for access control and video. The access control specification, IEC 60839-11-1, pending a vote, is expected to have a big impact on the manufacturing and interoperability of thousands of access control systems.
PSIA Common Security Model v1.0
The PSIA Common Security Model (CSEC) specification is the comprehensive PSIA specification for all protocol, data and user security. It covers security requirements and definitions for network and session security, key and certificate management, and user permission management. These security definitions apply to all PSIA nodes.
PSIA Common Metadata & Event Model
The Common Metadata and Event Model provides a common set of services used by disparate physical security technologies, such as chemical/biological sensors, intrusion devices, video analytics, and traffic control sensors, to respond to various types of alerts. This specification allows vendors to simplify their interoperability communications by simply putting their device-specific information on top of the baseline Common Metadata and Event Model protocols and services.
IP Media Device Specification (IPMD) 1.1
The IP Media Device (video) specification enables interoperability among disparate products that comply with the specification, such as an IP camera, intrusion device and video management or access control system. Interoperability based on this specification eliminates the need for software development kits for custom drivers and interfaces. It essentially creates a common API which can be used by both device and VMS vendors which offers an alternative to the proprietary APIs that exist today.
Recording and Content Management (RaCM) Specification, Version 1.1a
The PSIA Recording and Content Management (RaCM) Specification, Version 1.1a, describes the PSIA standards for recording, managing, searching, describing, and streaming multimedia information over IP networks. This includes support for both NVRs and DVRs. The specification references the PSIA Service Model and IP Media Device specifications. XML schema definitions and XML examples are included in the specification to aid implementers in developing standards-based products.
Video Analytics Specification v1.0
The v1.0 Video Analytics Specification (VAS) specifies an interface that enables IP devices and video management/surveillance systems to communicate video analytics data in a standardized way. The scope for the initial release of the specification focuses entirely on video analytics capabilities discovery and analytic data output. Video analytic capabilities discovery will include standard configuration data exchange to enable any analytic device to communicate to another device or application its basic analytic capabilities at the device level and the video channel level (for multichannel devices). This includes information such as the PSIA VAS version number supported, analytic vendor information (name, software version number, etc.), event types and mechanisms supported, and other supported configurations. From an analytic output perspective, the v1.0 VAS includes the definition of multiple types of analytic events, including alerts and counts, as well as video analytics metadata output.
PSIA Area Control Specification V1.0
This specification standardizes the communication into access control and intrusion products, making them interoperable with an overall security system. This specification takes advantage of other PSIA specs, especially the Common Metadata and Events Model (CMEM). Harmonizing and sharing data between access control, intrusion, video, and analytics systems results in optimized and more easily integrated security management.
PSIA Access Control Profile and Intrusion Detection Profile
The PSIA currently offers an Access Control Profile and an Intrusion Detection Profile, each derived from the PSIA’s Area Control Specification. Not every manufacturer supports every use case covered in the Area Control Specification. By complying with the applicable Profiles, these manufacturers can still build PSIA plug-and-play interoperability into their products. Products and technology that comply with a PSIA Profile will interoperate with any other product or technology that is PSIA compliant to that Profile. The PSIA offers a Profiles Test Tool to validate that a Profiles implementation is correct and complete and ensures manufacturers’ products will interoperate with other PSIA-compliant products.
Specification adoption
More than 1500 companies have registered for the 1.0 IP Media Device (video) specification since its initial release in March 2008. Commercially available products and systems that are PSIA-compliant include physical security information management (PSIM) systems; video management systems; surveillance cameras; video analytics; access control systems; and sensors and intrusion detection devices.
The founding of the PSIA
David Bunzel, executive director for a data storage industry standards association, began exploring surveillance video storage requirements in 2007 for the physical security industry. The physical security industry is known for its closed, proprietary systems; custom coding is typically required to integrate a closed system with any other system or digital tool. Bunzel convened a meeting of security industry leaders to discuss creating open standards in the physical security industry.
The following companies were at the initial meeting: Adesta; ADT; Anixter; Axis; Cisco; CSC; GE Security; Genetec; IBM; IQinVision; Johnson Controls; March Networks; Pelco; ObjectVideo; Orsus; Panasonic; Sony; Texas Instruments; Verint; and Vidyo.
The development of PSIA specifications
PSIA supports license-free standards and specifications, which are vetted in an open and collaborative manner and offered to the security industry as a whole. Five active working groups, IP Video, Video Analytics, Recording and Content Management, Area Control, and Systems, develop these specifications.
A specification can be developed in a variety of ways, including a submission of a core document by a member company or a working group submission based on input from the committee members. In either case the document is expanded and reviewed by its working group members, with consensus determining the features and characteristics of the specification.
Members
The organization's members include leading manufacturers, systems integrators, consultants and end users. These include Assa Abloy, Cisco Systems, HID, Honeywell, Ingersoll-Rand, Inovonics, IQinVision, Last Lock, Lenel, Kastle Systems, Milestone Systems, NICE Systems, ObjectVideo, OnSSI, Proximex, SCCG, Sentry Enterprises, Tyco International, UTC, Verint, Vidsys, and Z9 Security. HikVision and Dahua were formerly members but were subsequently banned from ONVIF due to human rights abuses.
Timeline
February 2008—The PSIA is founded
March 2009—IP Media Device Specification Released
March 2009—The PSIA incorporates
December 2009—Recording and Content Management (RaCM) Specification released
September 2010—Video Analytics Specification Released
April 2011—Common Metadata and Event Model Released
November 2011—Area Control Specification Released
June 2013—Profiles released for Access Control and Intrusion Detection
See also
ONVIF
References
External links
Physical security
Internet security
|
31662429
|
https://en.wikipedia.org/wiki/Coreflood
|
Coreflood
|
Coreflood is a trojan horse and botnet created by a group of Russian hackers and released in 2010. The FBI included on its list of infected systems "approximately 17 state or local government agencies, including one police department; three airports; two defense contractors; five banks or financial institutions; approximately 30 colleges or universities; approximately 20 hospital or health care companies; and hundreds of businesses." It is present on more than 2.3 million computers worldwide and as of May 2011 remains a threat.
Background
Backdoor.Coreflood is a trojan horse that opens a back door on the compromised computer.
It acts as a keylogger and gathers user information.
Current status
The FBI has the capability, and recently received authorization from the courts, to delete Coreflood from infected computers after receiving written consent. The FBI has reduced the size of the botnet by 90% in the United States and 75% around the world.
References
Botnets
Windows trojans
|
66705925
|
https://en.wikipedia.org/wiki/Mary%20Ann%20Mansigh
|
Mary Ann Mansigh
|
Mary Ann Mansigh Karlsen (born 1932) is a computer programmer who was active in the 1950s in the use of scientific computers.
Biography
Mansigh attended the University of Minnesota on a scholarship from 1950 to 1954, where she studied physics, chemistry and mathematics. In 1955, she took a position at the Lawrence Livermore National Laboratory as a software engineer, where she would remain until she retired in 1994, working on over 13 generations of supercomputers from the UNIVAC (1955) to the Cray I (1994).
At the Lawrence Livermore National Laboratory, she worked with Berni Alder and Tom Wainwright on the implementation of molecular dynamics in the mid twentieth century, ultimately working exclusively with Alder for over twenty-five years. She is regarded as a pioneer in programming and computing, particularly molecular dynamics computing, whom Dutch computational physicist Daan Frenkel noted, along with Arianna W. Rosenbluth, as one of the very few notable female computer programmers active in the 1950s and 1960s.
Initially forgotten, except in annotations and oral transcripts, she has received increased attention in recent times, with events and talks on her legacy. In 2019, she had a lecture series at the Centre Européen de Calcul Atomique et Moléculaire (CECAM) named in her honour. Modern academics have noted her unfair absence as an author in published academic papers describing the results of computer programmes designed with her pioneering molecular dynamics computing code.
See also
Mary Tsingou
References
External links
Computer Pioneer Mary Ann Mansigh Karlsen (Livermore Library and the American Association of University Women, March 2017)
Mary Ann Mansigh series: Almost famous, the woman behind the codes (Centre Européen de Calcul Atomique et Moléculaire (CECAM), May 2020)
Picture of Alder, Mansigh & Wainwright, in the Niels Bohr subsection of the AIP Emilio Segre Visual Archives. (University of Chicago)
Flowchart template (Object has "Mary Ann Mansigh" handwritten in red on lower edge) (Computer History Museum, Catalogue Number: 102678315)
1932 births
Place of birth missing (living people)
Living people
Scientific computing researchers
Computer programmers
Nationality missing
American women computer scientists
American computer scientists
University of Minnesota College of Science and Engineering alumni
Lawrence Livermore National Laboratory staff
20th-century American women scientists
|
538434
|
https://en.wikipedia.org/wiki/I2P
|
I2P
|
The Invisible Internet Project (I2P) is an anonymous network layer (implemented as a mix network) that allows for censorship-resistant, peer-to-peer communication. Anonymous connections are achieved by encrypting the user's traffic (using end-to-end encryption) and sending it through a volunteer-run network of roughly 55,000 computers distributed around the world. Given the high number of possible paths the traffic can transit, a third party watching a full connection is unlikely. The software that implements this layer is called an "I2P router", and a computer running I2P is called an "I2P node". I2P is free and open source, and is published under multiple licenses.
Technical design
I2P has been beta software since 2003, when it started as a fork of Freenet. The software's developers emphasize that bugs are likely to occur in the beta version and that peer review has been insufficient to date. However, they believe the code is now reasonably stable and well-developed, and that more exposure can help the development of I2P.
The network is strictly message-based, like IP, but a library is available to allow reliable streaming communication on top of it (similar to TCP, although from version 0.6, a new UDP-based SSU transport is used). All communication is end-to-end encrypted (in total, four layers of encryption are used when sending a message) through garlic routing, and even the end points ("destinations") are cryptographic identifiers (essentially a pair of public keys), so that neither senders nor recipients of messages need to reveal their IP address to the other side or to third-party observers.
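To make the idea of nested encryption layers concrete, the sketch below wraps a message once per hop and then peels the layers off in order, so that each relay can remove only its own layer. This is purely conceptual: it uses symmetric Fernet keys from the Python cryptography package as stand-ins and does not reproduce I2P's actual garlic-routing cryptography or message formats.

```python
# Conceptual sketch of layered ("garlic"/"onion"-style) encryption, not I2P's
# real protocol: each hop holds one key and can strip only its own layer.
from cryptography.fernet import Fernet

hop_keys = [Fernet.generate_key() for _ in range(4)]  # one stand-in key per hop

message = b"hello, destination"

# The sender applies the last hop's layer first, so the first hop's layer ends
# up outermost.
wrapped = message
for key in reversed(hop_keys):
    wrapped = Fernet(key).encrypt(wrapped)

# Each relay, in order, removes exactly one layer; only after the final layer
# is removed does the plaintext appear.
for key in hop_keys:
    wrapped = Fernet(key).decrypt(wrapped)

assert wrapped == message
```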
Although many developers had been a part of the Invisible IRC Project (IIP) and Freenet communities, significant differences exist between their designs and concepts. IIP was an anonymous centralized IRC server. Freenet is a censorship-resistant distributed data store. I2P is an anonymous peer-to-peer distributed communication layer designed to run any traditional internet service (e.g. Usenet, email, IRC, file sharing, Web hosting and HTTP, or Telnet), as well as more traditional distributed applications (e.g. a distributed data store, a web proxy network using Squid, or DNS).
Many developers of I2P are known only under pseudonyms. While the previous main developer, jrandom, is currently on hiatus, others, such as zzz, killyourtv, and Complication have continued to lead development efforts, and are assisted by numerous contributors.
I2P uses 2048-bit ElGamal/AES-256/SHA-256 with session tags for encryption, and Ed25519 EdDSA/ECDSA signatures.
Releases
I2P has had a stable release every six to eight weeks. Updates are distributed via I2P torrents and are signed by the release manager (generally zzz or str4d).
Software
Since I2P is an anonymous network layer, it is designed so other software can use it for anonymous communication. As such, there are a variety of tools currently available for I2P or in development.
The I2P router is controlled through the router console, which is a web frontend accessed through a web browser.
General networking
I2PTunnel is an application embedded into I2P that allows arbitrary TCP/IP applications to communicate over I2P by setting up "tunnels" which can be accessed by connecting to pre-determined ports on localhost.
SAM (Simple Anonymous Messaging) is a protocol which allows a client application written in any programming language to communicate over I2P by using a socket-based interface to the I2P router (a minimal handshake sketch follows this list).
BOB (Basic Open Bridge) is a less complex application-to-router protocol similar to SAM
Orchid Outproxy Tor plugin
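As a rough illustration of how a client application talks to the router over SAM, the sketch below opens a TCP connection to the SAM bridge and performs the version greeting. It assumes a local I2P router with the SAM bridge enabled on its conventional port (7656); the exact commands and replies belong to the SAM v3 protocol and may differ between router versions.

```python
# Minimal sketch of the SAM v3 greeting between a client and a local I2P
# router. Assumes the SAM bridge is enabled (conventionally on port 7656).
import socket

with socket.create_connection(("127.0.0.1", 7656)) as sam:
    sam.sendall(b"HELLO VERSION MIN=3.0 MAX=3.1\n")
    reply = sam.recv(4096).decode()
    print(reply)  # expected to resemble: HELLO REPLY RESULT=OK VERSION=3.1
    # A real client would continue with SESSION CREATE / STREAM CONNECT
    # commands to build an anonymous destination and open streams over it.
```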
Chat
Any IRC client made for the Internet Relay Chat can work, once connected to the I2P IRC server (on localhost).
File sharing
Several programs provide BitTorrent functionality for use within the I2P network. Users cannot connect to non-I2P torrents or peers from within I2P, nor can they connect to I2P torrents or peers from outside I2P. I2PSnark, included in the I2P install package, is a port of the BitTorrent client named Snark. Vuze, formerly known as Azureus, is a BitTorrent client that includes a plugin for I2P, allowing anonymous swarming through this network. This plugin is still in an early stage of development, however it is already fairly stable. I2P-BT is a BitTorrent client for I2P that allows anonymous swarming for file sharing. This client is a modified version of the original BitTorrent 3.4.2 program which runs on MS Windows and most dialects of Unix in a GUI and command-line environment. It was developed by the individual known as 'duck' on I2P in cooperation with 'smeghead'. It is no longer being actively developed; however, there is a small effort to upgrade the I2P-BT client up to par with the BitTorrent 4.0 release. I2PRufus is an I2P port of the Rufus BitTorrent client. Robert (P2P Software) is the most actively maintained I2PRufus fork. XD is a standalone BitTorrent client written in Go.
Two Kad network clients exist for the I2P network, iMule and Nachtblitz. iMule (invisible Mule) is a port of eMule to the I2P network. iMule has not been developed since 2013. iMule is made for anonymous file sharing. In contrast to other eDonkey clients, iMule uses only the Kademlia network to connect through I2P, so no servers are needed. Nachtblitz is a custom client built on the .NET Framework. The latest version is 1.4.27, released on March 23, 2016. Nachtblitz includes a time lock to disable the software one year after its release date.
I2Phex is a port of the popular Gnutella client Phex to I2P. It is stable and fairly functional.
Tahoe-LAFS has been ported to I2P. This allows for files to be anonymously stored in Tahoe-LAFS grids.
MuWire is a file-sharing program inspired by the LimeWire Gnutella client that works atop the I2P network.
Bridging to Clearnet
Currently, Vuze is the only torrent client that makes clearnet torrents (connections not through I2P) available on I2P and vice versa, by using a plugin that connects it to the I2P network. Depending on the client settings, torrents from the internet can be made available on I2P (via announcements to I2P's DHT network) and torrents from I2P can be made available to the internet. For this reason, torrents previously published only on I2P can be made available to the entire Internet, and users of I2P can often download popular content from the Internet while maintaining the anonymity of I2P.
Email
I2P-Bote is a free, fully decentralized and distributed anonymous email system with a strong focus on security. It supports multiple identities and does not expose email metadata. It is still considered beta software. I2P-Bote is accessible via the I2P web console interface or using standard email protocols (i.e. IMAP/SMTP). All bote-mails are transparently end-to-end encrypted and signed by the sender's private key, thus removing the need for PGP or other email encryption software. I2P-Bote offers additional anonymity by allowing for the use of mail relays with variable length delays. Since it is decentralized, there is no centralized email server that could correlate different email identities as communicating with each other (i.e. profiling). Even the nodes relaying the mails do not know the sender, and apart from sender and receiver, only the end of the high-latency mail route and the storing nodes will know to whom (which I2P-Bote address – the user's IP address is still hidden by I2P) the mail is destined. The original sender could have gone offline long before the email becomes available to the recipient. No account registration is necessary; to use it, one only needs to create a new identity. I2P-Bote can be installed as an I2P plugin.
I2P also has a free pseudonymous e-mail service run by an individual called Postman. Susimail is a web-based email client intended primarily for use with Postman's mail servers, and is designed with security and anonymity in mind. Susimail was created to address privacy concerns in using these servers directly using traditional email clients, such as leaking the user's hostname while communicating with the SMTP server. It is currently included in the default I2P distribution, and can be accessed through the I2P router console web interface. Mail.i2p can contact both I2P email users and public internet email users.
Bitmessage.ch can be used over I2P.
Instant Messaging
I2PChat is a secure P2P messenger. It is available for download at https://vituperative.github.io/i2pchat/
I2P-Messenger is a simple Qt-based, serverless, end-to-end-encrypted instant messenger for I2P. No servers can log the user's conversations. No ISP can log with whom the user chats, when, or for how long. As it is serverless, it can make use of I2P's end-to-end encryption, preventing any node between two parties from having access to the plain text. I2P-Messenger can be used for fully anonymous instant communication with persons the user doesn't even know, or, alternatively, to communicate securely and untraceably with friends, family members, or colleagues. In addition to messaging, file transfer is also supported.
I2P-Talk is another simple instant messenger incompatible with I2P-Messenger, but having the same security properties
Publishing
Syndie is a content distribution application, suitable for blogs, newsgroups, forums and small media attachments. Syndie is designed for network resilience. It supports connections to I2P, the Tor network (Syndie does not support Socks proxies, workaround needed for Tor access), Freenet and the regular internet. Server connections are intermittent, and support higher-latency communications. Connections can be made to any number of known servers. Content is spread efficiently using a Gossip protocol.
Aktie is an anonymous file sharing and distributed Web of trust forums system. Aktie can connect to I2P with its internal router or use an external router. To fight spam, "hash payments" (proof of CPU work) is computed for every published item.
Routers
I2PBerry is a Linux distribution which can be used as a router to encrypt and route network traffic through the I2P network.
i2pd is a light-weight I2P router written in C++ which omits applications such as e-mail, torrents, and others that can be regarded as bloat.
Kovri is an I2P router written in C++. It was forked from i2pd following developer disagreements. Kovri's primary purpose is to integrate with the cryptocurrency Monero to send new transaction information over I2P, making it much more difficult to find which node is the origin of a transaction request. Those using the Kovri router will be running full I2P routers that contribute to the I2P network the same way the current Java router does. This project is expected to benefit both the Monero and I2P communities, since it will allow for greater privacy in Monero, and it should increase the number of nodes on the I2P network.
The Privacy Solutions project
The Privacy Solutions project, a new organization that develops and maintains I2P software, launched several new development efforts designed to enhance the privacy, security, and anonymity for users, based on I2P protocols and technology.
These efforts include:
The Abscond browser bundle.
i2pd, an alternate implementation of I2P, written in C++ (rather than Java).
The "BigBrother" I2P network monitoring project.
The code repository and download sections for the i2pd and Abscond project is available for the public to review and download.
Effective January 2015, i2pd operates under PurpleI2P.
Android
Release builds of an I2P Router application for Android can be found on the Google Play store under The Privacy Solutions Project's Google Play account or on an F-Droid repository hosted by the developers.
Nightweb is an Android application that utilizes I2P and Bittorrent to share blog posts, photos, and other similar content. It can also be run as a desktop application. It is no longer in development.
Cryptocurrency
Some cryptocurrencies that support I2P are listed below.
Bitcoin
Monero
Verge (cryptocurrency)
Terminology
Eepsite Eepsites are websites that are hosted anonymously within the I2P network. Eepsite names end in .i2p, such as ugha.i2p or forum.i2p. EepProxy can locate these sites through the cryptographic identifier keys stored in the hosts.txt file found within the I2P program directory. Typically, I2P is required to access these eepsites.
.i2p '.i2p' is a pseudo-top-level domain which is only valid within the I2P overlay network scope. .i2p names are resolved by browsers by submitting requests to EepProxy, which resolves names to an I2P peer key and handles data transfers over the I2P network while remaining transparent to the browser.
EepProxy The EepProxy program handles all communication between the browser and any eepsite. It functions as a proxy server that can be used by any web browser; a brief usage sketch follows this terminology list.
Peers, I2P nodes Other machines using I2P that are connected to user's machine within the network. Each machine within the network shares the routing and forwarding of encrypted packets.
Tunnels Every ten minutes, a connection is established between the user's machine and another peer. Data to and from the user, along with the data for other peers (routed through the user's machine), pass through these tunnels and are forwarded to their final destination (may include more jumps).
netDb The distributed hash table (DHT) database based on the Kademlia algorithm that holds information on I2P nodes and I2P eepsites. This database is split up among routers known as "floodfill routers". When a user wants to know how to contact an eepsite, or where more peers are, they query the database.
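Because EepProxy behaves like an ordinary HTTP proxy, any HTTP client can reach an eepsite simply by pointing at it. The sketch below assumes a locally running I2P router with the HTTP proxy on its usual address (127.0.0.1:4444) and uses one of the example eepsite names mentioned above; .i2p names only resolve inside the I2P network.

```python
# Fetching an eepsite through the local EepProxy HTTP proxy (assumed to be on
# the default 127.0.0.1:4444). Requires a running I2P router.
import urllib.request

proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:4444"})
opener = urllib.request.build_opener(proxy)

with opener.open("http://forum.i2p/", timeout=60) as response:
    print(response.status, response.read(200))
```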
Vulnerabilities
Denial of service attacks are possible against websites hosted on the network, though a site operator may secure their site against certain versions of this type of attack to some extent.
A zero-day vulnerability was discovered for I2P in 2014, and was exploited to de-anonymize at least 30,000 users. This included users of the operating system Tails. This vulnerability was later patched.
A 2017 study examining how forensic investigators might exploit vulnerabilities in I2P software to gather useful evidence indicated that a seized machine which had been running I2P router software may hold unencrypted local data that could be useful to law enforcement. Records of which eepsites a user of a later-seized machine was interested in may also be inferred. The study identified a "trusted" I2P domain registrar ("NO.i2p") which appeared to have been abandoned by its administrator, and which the study identified as a potential target for law enforcement takeover. It alternatively suggested waiting for NO.i2p's server to fail, only to social engineer the I2P community into moving to a phony replacement. Another suggestion the study proposed was to register a mirror version of a target website under an identical domain.
I2PCon
The first I2P convention was held in Toronto, Ontario, in 2015. The conference was hosted by a local hackerspace, Hacklab, and featured presentations from I2P developers and security researchers.
The first day mainly featured presentations on the past growth of the I2P network, a talk on what happens when companies sell people's personal information, and a round-table discussion on general privacy and security topics. The day ended with a CryptoParty, which helped introduce new users to installing I2P, sending secure emails with I2P-Bote, and using I2P along with Vuze.
The second day featured more technical discussions than the first. The talks focused on how to dissuade bad actors from using the network, how I2P works around computer connection limits, application development using I2P, and the development of the Android version. The day ended with a development meeting.
Cultural references
In House of Cards season 2 episode 2, I2P is referenced.
See also
Rendezvous protocol
Crypto-anarchism
Deep web
Darknet
Garlic routing
Key-based routing
Public-key cryptography
Secure communication
Threat model
Software
Retroshare
Tor
Tribler
ZeroNet
Freenet
Mixnet
References
External links
Anonymity networks
Anonymous file sharing networks
Application layer protocols
Hash based data structures
Distributed data storage
Distributed data structures
Distributed data storage systems
File sharing
Free file transfer software
Free file sharing software
Distributed file systems
Cross-platform free software
Cross-platform software
Cryptographic software
Cryptographic protocols
Cryptography
Dark web
Free communication software
Free multilingual software
Free network-related software
Free routing software
Free software programmed in Java (programming language)
Free and open-source Android software
Internet privacy software
Peer-to-peer computing
Privacy software
2003 software
Software using the MIT license
Computer networking
Overlay networks
Onion routing
Garlic routing
Key-based routing
Mix networks
|
59804388
|
https://en.wikipedia.org/wiki/Sarah%20E.%20Zabel
|
Sarah E. Zabel
|
Sarah E. Zabel (born July 9, 1965, in Los Angeles, California) is a retired United States Air Force general and former vice director of the US Defense Information Systems Agency (DISA) where she managed a federal agency of 16,000 military, civilian and contract personnel. Her principal mission was to plan, develop, deliver and operate command and control capabilities and a global enterprise infrastructure in direct support of the president, the secretary of defense, the Joint Chiefs of Staff, the combatant commanders, the Department of Defense components and other mission partners across the full spectrum of operations.
She then became director of Information Technology Acquisition Process Development in the Office of the US Air Force for Acquisition, Technology and Logistics. In this role, she devised and implemented strategies to deliver IT capabilities responsively across the Air Force.
At the end of 2018, she retired.
Education
Sarah E. Zabel earned her commission from the U.S. Air Force Academy in 1987, graduating with a bachelor's degree in computer science. She went on to earn a Master of Science in computer science from the University of Texas at San Antonio in 1996, a Master of Military Operational Art and Science from the Air Command and Staff College at Maxwell AFB, Alabama, in 2001, and a Master of Strategic Studies from the U.S. Army War College in Carlisle, Pennsylvania, in 2007.
Distinction
Major awards and decorations
Distinguished Service Medal
Defense Superior Service Medal with oak leaf cluster
Legion of Merit Medal with oak leaf cluster
Bronze Star Medal
Defense Meritorious Service Medal with oak leaf cluster
Meritorious Service Medal with five oak leaf clusters
Joint Service Commendation Medal
Air Force Commendation Medal
Air Force Achievement Medal
Joint Meritorious Unit Award with oak leaf cluster
Air Force Outstanding Unit Award
Air Force Organizational Excellence Award with oak leaf cluster
National Defense Service Medal
Achievements
1987 Outstanding Cadet in Computer Science, U.S. Air Force Academy, Colorado
2000 Outstanding Academy Educator, Department of Computer Science, U.S. Air Force Academy, Colorado
2007 Commandant's Award for Distinction in Research, U.S. Army War College, Carlisle Barracks, Pennsylvania
2015 Certified Information Systems Security Professional
Promotions
Second Lieutenant May 27, 1987
First Lieutenant May 27, 1989
Captain May 27, 1991
Major November 1, 1998
Lieutenant Colonel February 1, 2003
Colonel September 1, 2007
Brigadier General June 4, 2013
Major General November 2, 2015
Publication
“The Military Strategy of Global Jihad,” Strategic Studies Institute, 2007
References
1965 births
Living people
United States Air Force generals
United States Air Force Academy alumni
University of Texas at San Antonio alumni
|
36435263
|
https://en.wikipedia.org/wiki/Zugara
|
Zugara
|
Zugara is an American corporation headquartered in Los Angeles, California, United States that develops and licenses augmented reality software and creates Natural User Interface experiences for brands.
Zugara was founded in March 2001 as an interactive marketing company with a focus on interactive strategy and web application development. From 2001 to 2008, the company created award-winning interactive campaigns for Fortune 500 brands including Sony PlayStation, Activision Blizzard, Reebok, Toyota, Lexus, Casio and the U.S. Air Force. In early 2009, Zugara shifted focus to augmented reality software development and began developing augmented reality technologies and SDKs. Later that year, the company launched an early prototype of the Webcam Social Shopper augmented reality ecommerce product.
On September 25, 2012, Zugara was granted US Patent No. 8,275,590 for "Providing a simulation of wearing items such as garments and/or accessories". The patent covers Zugara's augmented reality technology that powers Virtual dressing rooms.
History
2001–2006: Interactive Strategy
Zugara's early efforts included award-winning interactive campaigns such as 'RBK Whodunit?' that featured integration of digital efforts with television, out of home and other advertising channels. Zugara cited the RBK Whodunit campaign's ability to drive 33% of site visitors to a retail location to interact with the product as the primary success of the campaign.
The following years saw Zugara continue to focus on interactive video initiatives with another award-winning campaign with GSD&M and the U.S. Air Force, called Do Something Amazing. The campaign and interactive component featured interactive video of the F-22 Raptor and other U.S. Air Force vehicles. The campaign was featured as a Pick of The Week by Ad Age's Creativity magazine.
Zugara's other notable work included interactive marketing campaigns for Sony PlayStation properties including The Getaway, God of War, Gran Turismo 4 and the PSP.
2007–2008: User Interface and User Experience
In 2007, Zugara's focus turned to User Interface and User Experience design for clients including Toyota and Lexus. Notable initial work in User Interface design included designing a new method for configuring a Lexus online. Zugara also applied its User Interface expertise to touchscreen kiosk initiatives for the U.S. Air Force.
2009–present: Augmented Reality Software Development
In June 2009, Zugara launched The Webcam Social Shopper augmented reality ecommerce prototype. Cited initially as an "Augmented Reality Dressing Room", The Webcam Social Shopper allows online shoppers to use a webcam to visualize virtual garments on themselves while shopping online. The software also uses a motion capture system that allows users to use hand motions to navigate the software while standing back from their computer. Social media integration with Facebook and Twitter also allows users of the software to send pictures of themselves with the virtual garments for immediate feedback.
Though the Webcam Social Shopper has also been called virtual fitting room or virtual dressing room software, Zugara has referred to the software as an advanced product visualization tool for retailers.
Later in 2009, Zugara was ranked by VentureBeat as one of the top augmented reality startups.
Shortly thereafter, Zugara officially announced that it was focusing on augmented reality software development exclusively. Zugara's technologies were soon being utilized by AT&T for a World Cup augmented reality Soccer engagement and by Orange Silicon Valley for an augmented reality telemedicine prototype.
Products
Webcam Social Shopper
The Webcam Social Shopper is Zugara's flagship product. The company developed the product after noticing that online shopping conversion rates were stuck between 2% and 3%. Though online shopping was optimized for searching and browsing, it was not optimized for the kind of engaging experience offered by in-store retail. By turning an online shopper's webcam into a mirror, the software recreates the offline 'at the rack' moment for shoppers at home, helping them make a more informed purchase decision by seeing an item on themselves through their webcam.
In November 2009, the Webcam Social Shopper was first deployed as Fashionista by online fashion site Tobi.com. This initial version of the Webcam Social Shopper used an augmented reality marker for placement of the virtual garment on the subject.
In February 2011, a new version of the Webcam Social Shopper was debuted publicly for the first time at the DEMO conference in Palm Springs, California and won the DEMOgod award. This latest version of the software removed the need for a marker and instead used facial tracking for placement of the virtual garment. Dubbed the "Plug and Play" version of the Webcam Social Shopper, this version of the software was designed for easier integration for retailers and ecommerce sites.
In June 2011, UK fashion retailer Banana Flame became the first retailer to integrate the Plug and Play version of the virtual dressing room software. According to Matthew Szymczyk, CEO of Zugara, the new version of the Webcam Social Shopper can be integrated by a retailer in less than a day. Banana Flame deployed the software to offer a virtual dressing room in which online shoppers could "try on" clothes virtually on Banana Flame's website.
On July 10, 2012, Zugara released an API for the Webcam Social Shopper for ecommerce platform integration. PrestaShop was the first ecommerce platform to offer the new Webcam Social Shopper module to its 127,000 retailers. In less than a week, over 140 retailers had downloaded the module.
On October 3, 2013, Zugara released a Kinect enabled version of its Webcam Social Shopper software called, "WSS For Kiosks". On December 10, 2013, PayPal debuted a mobile payments enhanced version of WSS For Kiosks at the LeWeb conference in Paris.
Virtual Style Sense
On January 13, 2014, Zugara announced a new technology for in-store retailers called "Virtual Style Sense". In partnership with Samsung, this virtual dressing room technology for in-store retailers debuted at the National Retail Federation's Big Show in New York.
Critical acclaim
TIME magazine cited the Webcam Social Shopper as one of the few useful augmented reality applications that could be advantageous to both retailers and consumers.
Fast Company called the Webcam Social Shopper 'the future'.
Online retail results
Internet Retailer published a report on Virtual Fitting Rooms and Fit Simulators on February 1, 2012. Danish social shopping comparison site LazyLazy.com deployed the Webcam Social Shopper in late 2011 and saw its conversion rate jump immediately: the 17% of shoppers who used the software converted two to three times more often than those who did not.
In February 2012, the Mattel brand Barbie used a kiosk-enabled version of the Webcam Social Shopper for a New York Fashion Week event where attendees could try on virtual Barbie outfits. Data released by Zugara showed that the web version of the Barbie Dream Closet software saw increased usage over a three-month period: from February 2012 to April 2012, use of the software increased from 20% to 33%, and 50% of those users took an average of 6 photos each.
Technologies
Zugara's augmented reality and computer vision technologies are used together for the company's Webcam Social Shopper product. However, Zugara has also used these individual technologies for brand applications and prototype development.
ZugMO Motion Capture
ZugMO motion capture technology allows online users to interact with their webcam based on gestural motions. ZugMO technology has been used by brands including Nestle, Toyota, Olay and Purina.
ZugMUG Facial Tracking
ZugMUG is a facial tracking technology that allows the webcam to track the user's face through the webcam interface. For the Webcam Social Shopper product, this allows a virtual garment to track to an individual's face for better placement of the virtual item. The technology has also been used in a Virtual Exam application for Anthem that allowed virtual medical instruments to track to a person's eyes, ears and mouth.
ZugSTAR Interactive Video Chat
ZugSTAR technology is short for Zugara Streaming Augmented Reality. This technology allows multiple participants to share an augmented reality experience in an interactive video chat interface. Zugara debuted a ZugSTAR prototype integrated with The Webcam Social Shopper at the IAB Poland conference in Warsaw and again at the NRF Big Show in New York City in 2010.
Though many observers have doubted the utility of early augmented reality technology, ReadWriteWeb cited ZugSTAR as one of its more clearly useful applications.
Industry criticism
Zugara has been an outspoken critic of conceptual augmented reality, citing that augmented reality companies have not been focused on monetizing the technology.
Press
AdAge selected Zugara for AdAge's Creativity 50 in 2010.
References
External links
Augmented Reality In Education
The Year Of Augmented Reality
The Augmented Reality Industry's Jan Brady Complex
Augmented reality
Software companies of the United States
Software companies established in 2001
2001 establishments in California
|
106421
|
https://en.wikipedia.org/wiki/Library%20%28computing%29
|
Library (computing)
|
In computer science, a library is a collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values or type specifications. In IBM's OS/360 and its successors they are referred to as partitioned data sets.
A library is also a collection of implementations of behavior, written in terms of a language, that has a well-defined interface by which the behavior is invoked. For instance, people who want to write a higher-level program can use a library to make system calls instead of implementing those system calls over and over again. In addition, the behavior is provided for reuse by multiple independent programs. A program invokes the library-provided behavior via a mechanism of the language. For example, in a simple imperative language such as C, the behavior in a library is invoked by using C's normal function-call. What distinguishes the call as being to a library function, versus being to another function in the same program, is the way that the code is organized in the system.
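As a minimal sketch of that point, the C fragment below calls a routine from the standard math library with exactly the same syntax it uses for a routine defined in the same file; only the way the code is organized and linked (for example, linking against libm) distinguishes the two.

```c
/* Calling a library routine looks identical to calling a local routine;
 * the difference lies in how the code is organized and linked (e.g. -lm). */
#include <stdio.h>
#include <math.h>

static double local_square(double x) {   /* defined in this program */
    return x * x;
}

int main(void) {
    double a = local_square(3.0);  /* ordinary call into the same program */
    double b = sqrt(2.0);          /* same call syntax, resolved from the C math library */
    printf("%f %f\n", a, b);
    return 0;
}
```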
Library code is organized in such a way that it can be used by multiple programs that have no connection to each other, while code that is part of a program is organized to be used only within that one program. This distinction can gain a hierarchical notion when a program grows large, such as a multi-million-line program. In that case, there may be internal libraries that are reused by independent sub-portions of the large program. The distinguishing feature is that a library is organized for the purposes of being reused by independent programs or sub-programs, and the user only needs to know the interface and not the internal details of the library.
The value of a library lies in the reuse of standardized program elements. When a program invokes a library, it gains the behavior implemented inside that library without having to implement that behavior itself. Libraries encourage the sharing of code in a modular fashion and ease the distribution of the code.
The behavior implemented by a library can be connected to the invoking program at different program lifecycle phases. If the code of the library is accessed during the build of the invoking program, then the library is called a static library. An alternative is to build the executable of the invoking program and distribute that, independently of the library implementation. The library behavior is connected after the executable has been invoked to be executed, either as part of the process of starting the execution, or in the middle of execution. In this case the library is called a dynamic library (loaded at runtime). A dynamic library can be loaded and linked when preparing a program for execution, by the linker. Alternatively, in the middle of execution, an application may explicitly request that a module be loaded.
Most compiled languages have a standard library, although programmers can also create their own custom libraries. Most modern software systems provide libraries that implement the majority of the system services. Such libraries have organized the services which a modern application requires. As such, most code used by modern applications is provided in these system libraries.
History
In 1947 Goldstine and von Neumann speculated that it would be useful to create a "library" of subroutines for their work on the IAS machine, an early computer that was not yet operational at that time. They envisioned a physical library of magnetic wire recordings, with each wire storing reusable computer code.
Inspired by von Neumann, Wilkes and his team constructed EDSAC. A filing cabinet of punched tape held the subroutine library for this computer. Programs for EDSAC consisted of a main program and a sequence of subroutines copied from the subroutine library. In 1951 the team published the first textbook on programming, The Preparation of Programs for an Electronic Digital Computer, which detailed the creation and the purpose of the library.
COBOL included "primitive capabilities for a library system" in 1959, but Jean Sammet described them as "inadequate library facilities" in retrospect.
JOVIAL had a Communication Pool (COMPOOL), roughly a library of header files.
Another major contributor to the modern library concept came in the form of the subprogram innovation of FORTRAN. FORTRAN subprograms can be compiled independently of each other, but the compiler performed no cross-checking between separately compiled units, so prior to the introduction of modules in Fortran-90, type checking between FORTRAN subprograms was impossible.
By the mid 1960s, copy and macro libraries for assemblers were common. Starting with the popularity of the IBM System/360, libraries containing other types of text elements, e.g., system parameters, also became common.
Simula was the first object-oriented programming language, and its classes were nearly identical to the modern concept as used in Java, C++, and C#. The class concept of Simula was also a progenitor of the package in Ada and the module of Modula-2. Even in its original 1965 form, Simula classes could be included in library files and added at compile time.
Linking
Libraries are important in the program linking or binding process, which resolves references known as links or symbols to library modules. The linking process is usually automatically done by a linker or binder program that searches a set of libraries and other modules in a given order. Usually it is not considered an error if a link target can be found multiple times in a given set of libraries. Linking may be done when an executable file is created, or whenever the program is used at runtime.
The references being resolved may be addresses for jumps and other routine calls. They may be in the main program, or in one module depending upon another. They are resolved into fixed or relocatable addresses (from a common base) by allocating runtime memory for the memory segments of each module referenced.
Some programming languages use a feature called smart linking whereby the linker is aware of or integrated with the compiler, such that the linker knows how external references are used, and code in a library that is never actually used, even though internally referenced, can be discarded from the compiled application. For example, a program that only uses integers for arithmetic, or does no arithmetic operations at all, can exclude floating-point library routines. This smart-linking feature can lead to smaller application file sizes and reduced memory usage.
Relocation
Some references in a program or library module are stored in a relative or symbolic form which cannot be resolved until all code and libraries are assigned final static addresses. Relocation is the process of adjusting these references, and is done either by the linker or the loader. In general, relocation cannot be done to individual libraries themselves because the addresses in memory may vary depending on the program using them and other libraries they are combined with. Position-independent code avoids references to absolute addresses and therefore does not require relocation.
Static libraries
When linking is performed during the creation of an executable or another object file, it is known as static linking or early binding. In this case, the linking is usually done by a linker, but may also be done by the compiler. A static library, also known as an archive, is one intended to be statically linked. Originally, only static libraries existed. Static linking must be performed when any modules are recompiled.
All of the modules required by a program are sometimes statically linked and copied into the executable file. This process, and the resulting stand-alone file, is known as a static build of the program. A static build may not need any further relocation if virtual memory is used and no address space layout randomization is desired.
Shared libraries
A shared library or shared object is a file that is intended to be shared by executable files and further shared object files. Modules used by a program are loaded from individual shared objects into memory at load time or runtime, rather than being copied by a linker when it creates a single monolithic executable file for the program.
Shared libraries can be statically linked during compile-time, meaning that references to the library modules are resolved and the modules are allocated memory when the executable file is created. But often linking of shared libraries is postponed until they are loaded.
Most modern operating systems can have shared library files of the same format as the executable files. This offers two main advantages: first, it requires making only one loader for both of them, rather than two (having the single loader is considered well worth its added complexity). Second, it allows the executables also to be used as shared libraries, if they have a symbol table. Typical combined executable and shared library formats are ELF and Mach-O (both in Unix) and PE (Windows).
In some older environments such as 16-bit Windows or MPE for the HP 3000, only stack-based data (local) was allowed in shared-library code, or other significant restrictions were placed on shared-library code.
Memory sharing
Library code may be shared in memory by multiple processes, as well as on disk. If virtual memory is used, processes execute the same physical page of RAM, mapped into the different address spaces of each process. This has advantages. For instance, on the OpenStep system, applications were often only a few hundred kilobytes in size and loaded quickly; the majority of their code was located in libraries that had already been loaded for other purposes by the operating system.
Programs can accomplish RAM sharing by using position-independent code, as in Unix, which leads to a complex but flexible architecture, or by using common virtual addresses, as in Windows and OS/2. These systems make sure, by various tricks like pre-mapping the address space and reserving slots for each shared library, that code has a great probability of being shared. A third alternative is single-level store, as used by the IBM System/38 and its successors. This allows position-dependent code, but places no significant restrictions on where code can be placed or how it can be shared.
In some cases different versions of shared libraries can cause problems, especially when libraries of different versions have the same file name, and different applications installed on a system each require a specific version. Such a scenario is known as DLL hell, named after the Windows and OS/2 DLL file. Most modern operating systems after 2001 have clean-up methods to eliminate such situations or use application-specific "private" libraries.
Dynamic linking
Dynamic linking or late binding is linking performed while a program is being loaded (load time) or executed (runtime), rather than when the executable file is created. A dynamically linked library (dynamic-link library, or DLL, under Windows and OS/2; shareable image under OpenVMS; dynamic shared object, or DSO, under Unix-like systems) is a library intended for dynamic linking. Only a minimal amount of work is done by the linker when the executable file is created; it only records what library routines the program needs and the index names or numbers of the routines in the library. The majority of the work of linking is done at the time the application is loaded (load time) or during execution (runtime). Usually, the necessary linking program, called a "dynamic linker" or "linking loader", is actually part of the underlying operating system. (However, it is possible, and not exceedingly difficult, to write a program that uses dynamic linking and includes its own dynamic linker, even for an operating system that itself provides no support for dynamic linking.)
Programmers originally developed dynamic linking in the Multics operating system, starting in 1964, and the MTS (Michigan Terminal System), built in the late 1960s.
Optimizations
Since shared libraries on most systems do not change often, systems can compute a likely load address for each shared library on the system before it is needed and store that information in the libraries and executables. If every shared library that is loaded has undergone this process, then each will load at its predetermined address, which speeds up the process of dynamic linking. This optimization is known as prebinding or prelinking on macOS and Linux, respectively. IBM z/VM uses a similar technique, called "Discontinuous Saved Segments" (DCSS). Disadvantages of this technique include the time required to precompute these addresses every time the shared libraries change, the inability to use address space layout randomization, and the requirement of sufficient virtual address space for use (a problem that will be alleviated by the adoption of 64-bit architectures, at least for the time being).
Locating libraries at runtime
Loaders for shared libraries vary widely in functionality. Some depend on the executable storing explicit paths to the libraries. Any change to the library naming or layout of the file system will cause these systems to fail. More commonly, only the name of the library (and not the path) is stored in the executable, with the operating system supplying a method to find the library on disk, based on some algorithm.
If a shared library that an executable depends on is deleted, moved, or renamed, or if an incompatible version of the library is copied to a place that is earlier in the search, the executable would fail to load. This is called dependency hell, which exists on many platforms. The (infamous) Windows variant is commonly known as DLL hell. This problem cannot occur if each version of each library is uniquely identified and each program references libraries only by their full unique identifiers. The "DLL hell" problems with earlier Windows versions arose from using only the names of libraries, which were not guaranteed to be unique, to resolve dynamic links in programs. (To avoid "DLL hell", later versions of Windows rely largely on options for programs to install private DLLs—essentially a partial retreat from the use of shared libraries—along with mechanisms to prevent replacement of shared system DLLs with earlier versions of them.)
Microsoft Windows
Microsoft Windows checks the registry to determine the proper place to load DLLs that implement COM objects, but for other DLLs it will check the directories in a defined order. First, Windows checks the directory where it loaded the program (private DLL); any directories set by calling the SetDllDirectory() function; the System32, System, and Windows directories; then the current working directory; and finally the directories specified by the PATH environment variable. Applications written for the .NET Framework (since 2002), also check the Global Assembly Cache as the primary store of shared dll files to remove the issue of DLL hell.
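As a rough sketch of how an application exercises that search order, the fragment below loads a DLL at run time by bare file name, leaving Windows to locate it using the sequence just described. The names user32.dll and MessageBoxA are used only as familiar examples, not as a prescribed pattern.

```c
/* Minimal sketch of explicit run-time loading on Windows.
 * user32.dll / MessageBoxA serve only as a familiar example pair. */
#include <windows.h>
#include <stdio.h>

typedef int (WINAPI *MessageBoxAFn)(HWND, LPCSTR, LPCSTR, UINT);

int main(void) {
    HMODULE lib = LoadLibraryA("user32.dll");     /* bare name: Windows applies its search order */
    if (!lib) {
        fprintf(stderr, "LoadLibraryA failed: %lu\n", GetLastError());
        return 1;
    }

    /* Look up an exported routine by name. */
    MessageBoxAFn msgBox = (MessageBoxAFn)GetProcAddress(lib, "MessageBoxA");
    if (!msgBox) {
        fprintf(stderr, "GetProcAddress failed: %lu\n", GetLastError());
        FreeLibrary(lib);
        return 1;
    }

    msgBox(NULL, "Loaded user32.dll dynamically.", "Example", MB_OK);
    FreeLibrary(lib);                              /* release the DLL when done */
    return 0;
}
```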
OpenStep
OpenStep used a more flexible system, collecting a list of libraries from a number of known locations (similar to the PATH concept) when the system first starts. Moving libraries around causes no problems at all, although users incur a time cost when first starting the system.
Unix-like systems
Most Unix-like systems have a "search path" specifying file-system directories in which to look for dynamic libraries. Some systems specify the default path in a configuration file, others hard-code it into the dynamic loader. Some executable file formats can specify additional directories in which to search for libraries for a particular program. This can usually be overridden with an environment variable, although it is disabled for setuid and setgid programs, so that a user can't force such a program to run arbitrary code with root permissions. Developers of libraries are encouraged to place their dynamic libraries in places in the default search path. On the downside, this can make installation of new libraries problematic, and these "known" locations quickly become home to an increasing number of library files, making management more complex.
Dynamic loading
Dynamic loading, a subset of dynamic linking, involves a dynamically linked library loading and unloading at runtime on request. Such a request may be made implicitly or explicitly. Implicit requests are made when a compiler or static linker adds library references that include file paths or simply file names. Explicit requests are made when applications make direct calls to an operating system's API.
Most operating systems that support dynamically linked libraries also support dynamically loading such libraries via a run-time linker API. For instance, Microsoft Windows uses the API functions LoadLibrary, LoadLibraryEx, FreeLibrary and GetProcAddress with Microsoft Dynamic Link Libraries; POSIX-based systems, including most UNIX and UNIX-like systems, use dlopen, dlclose and dlsym. Some development systems automate this process.
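On POSIX systems the calls named above can be exercised with a short sketch like the following, which loads the math library at run time and looks up a single symbol by name. The soname libm.so.6 is the usual one on glibc-based Linux systems and is an assumption rather than a portable constant.

```c
/* Minimal sketch of explicit dynamic loading on a POSIX system.
 * Build (roughly): cc dynload.c -ldl
 * "libm.so.6" is the usual soname on glibc-based Linux; adjust for other systems. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load the library at runtime */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the address of the "cos" routine by name. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0) = %f\n", cosine(0.0));
    dlclose(handle);                                  /* unload when done */
    return 0;
}
```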
Object libraries
Although originally pioneered in the 1960s, dynamic linking did not reach operating systems used by consumers until the late 1980s. It was generally available in some form in most operating systems by the early 1990s. During this same period, object-oriented programming (OOP) was becoming a significant part of the programming landscape. OOP with runtime binding requires additional information that traditional libraries don't supply. In addition to the names and entry points of the code located within, they also require a list of the objects they depend on. This is a side-effect of one of OOP's main advantages, inheritance, which means that parts of the complete definition of any method may be in different places. This is more than simply listing that one library requires the services of another: in a true OOP system, the libraries themselves may not be known at compile time, and vary from system to system.
At the same time many developers worked on the idea of multi-tier programs, in which a "display" running on a desktop computer would use the services of a mainframe or minicomputer for data storage or processing. For instance, a program on a GUI-based computer would send messages to a minicomputer to return small samples of a huge dataset for display. Remote procedure calls (RPC) already handled these tasks, but there was no standard RPC system.
Soon the majority of the minicomputer and mainframe vendors instigated projects to combine the two, producing an OOP library format that could be used anywhere. Such systems were known as object libraries, or distributed objects, if they supported remote access (not all did). Microsoft's COM is an example of such a system for local use. DCOM, a modified version of COM, supports remote access.
For some time object libraries held the status of the "next big thing" in the programming world. There were a number of efforts to create systems that would run across platforms, and companies competed to try to get developers locked into their own system. Examples include IBM's System Object Model (SOM/DSOM), Sun Microsystems' Distributed Objects Everywhere (DOE), NeXT's Portable Distributed Objects (PDO), Digital's ObjectBroker, Microsoft's Component Object Model (COM/DCOM), and any number of CORBA-based systems.
Class libraries
Class libraries are the rough OOP equivalent of older types of code libraries. They contain classes, which describe characteristics and define actions (methods) that involve objects. Class libraries are used to create instances, or objects with their characteristics set to specific values. In some OOP languages, like Java, the distinction is clear, with the classes often contained in library files (like Java's JAR file format) and the instantiated objects residing only in memory (although potentially able to be made persistent in separate files). In others, like Smalltalk, the class libraries are merely the starting point for a system image that includes the entire state of the environment, classes and all instantiated objects.
Today most class libraries are stored in a package repository (such as Maven Central for Java). Client code explicitly declares its dependencies on external libraries in build configuration files (such as a Maven POM in Java).
Remote libraries
Another solution to the library issue comes from using completely separate executables (often in some lightweight form) and calling them using a remote procedure call (RPC) over a network to another computer. This approach maximizes operating system re-use: the code needed to support the library is the same code being used to provide application support and security for every other program. Additionally, such systems do not require the library to exist on the same machine, but can forward the requests over the network.
However, such an approach means that every library call requires a considerable amount of overhead. RPC calls are much more expensive than calling a shared library that has already been loaded on the same machine. This approach is commonly used in a distributed architecture that makes heavy use of such remote calls, notably client-server systems and application servers such as Enterprise JavaBeans.
Code generation libraries
Code generation libraries are high-level APIs that can generate or transform byte code for Java. They are used by aspect-oriented programming, some data access frameworks, and for testing to generate dynamic proxy objects. They also are used to intercept field access.
File naming
Most modern Unix-like systems
The system stores libfoo.a and libfoo.so files in directories such as /lib, /usr/lib or /usr/local/lib. The filenames always start with lib, and end with a suffix of .a (archive, static library) or of .so (shared object, dynamically linked library). Some systems might have multiple names for a dynamically linked library. These names typically share the same prefix and have different suffixes indicating the version number. Most of the names are names for symbolic links to the latest version. For example, on some systems libfoo.so.2 would be the filename for the second major interface revision of the dynamically linked library libfoo. The .la files sometimes found in the library directories are libtool archives, not usable by the system as such.
macOS
The system inherits static library conventions from BSD, with the library stored in a .a file, and can use .so-style dynamically linked libraries (with the .dylib suffix instead). Most libraries in macOS, however, consist of "frameworks", placed inside special directories called "bundles" which wrap the library's required files and metadata. For example, a framework called MyFramework would be implemented in a bundle called MyFramework.framework, with MyFramework.framework/MyFramework being either the dynamically linked library file or being a symlink to the dynamically linked library file in MyFramework.framework/Versions/Current/MyFramework.
Microsoft Windows
Dynamic-link libraries usually have the suffix *.DLL, although other file name extensions may identify specific-purpose dynamically linked libraries, e.g. *.OCX for OLE libraries. The interface revisions are either encoded in the file names, or abstracted away using COM-object interfaces. Depending on how they are compiled, *.LIB files can be either static libraries or representations of dynamically linkable libraries needed only during compilation, known as "import libraries". Unlike in the UNIX world, which uses different file extensions, when linking against .LIB file in Windows one must first know if it is a regular static library or an import library. In the latter case, a .DLL file must be present at runtime.
See also
Visual Component Library (VCL)
Component Library for Cross Platform (CLX)
Standard Template Library (used by the C++ Standard Library)
Notes
References
Further reading
Article Beginner's Guide to Linkers by David Drysdale
Article Faster C++ program startups by improving runtime linking efficiency by Léon Bottou and John Ryland
How to Create Program Libraries by Baris Simsek
BFD - the Binary File Descriptor Library
1st Library-Centric Software Design Workshop LCSD'05 at OOPSLA'05
2nd Library-Centric Software Design Workshop LCSD'06 at OOPSLA'06
How to create shared library by Ulrich Drepper (with much background info)
Anatomy of Linux dynamic libraries at IBM.com
Operating system technology
|
1252692
|
https://en.wikipedia.org/wiki/Douglas%20Hartree
|
Douglas Hartree
|
Douglas Rayner Hartree (27 March 1897 – 12 February 1958) was an English mathematician and physicist most famous for the development of numerical analysis and its application to the Hartree–Fock equations of atomic physics and the construction of a differential analyser using Meccano.
Early life
Douglas Hartree was born in Cambridge, England. His father, William, was a lecturer in engineering at Cambridge University and his mother, Eva Rayner, was president of the National Council of Women of Great Britain and first woman to be mayor of the city of Cambridge. One of his great-grandfathers was Samuel Smiles; another was the marine engineer William Hartree, partner of John Penn.
Douglas Hartree was the eldest of the three sons who survived infancy; another brother and a sister died as infants. His two surviving brothers also died before him: John Edwin died at the age of 7, when Hartree was 17, and Colin William died of meningitis in February 1920 at the age of 22, when Hartree was 23.
Hartree attended St John's College, Cambridge, but the First World War interrupted his studies. He (and his father and brother) joined a group working on anti-aircraft ballistics under A. V. Hill, where he gained considerable skill and an abiding interest in practical calculation and numerical methods for differential equations, executing most of his own work with pencil and paper.
After the end of World War I, Hartree returned to Cambridge graduating in 1922 with a Second Class degree in natural sciences.
Atomic structure calculations
In 1921, a visit by Niels Bohr to Cambridge inspired Hartree to apply his numerical skills to Bohr's theory of the atom, for which he obtained his PhD in 1926 – his advisor was Ernest Rutherford. With the publication of Schrödinger's equation in the same year, Hartree was able to apply his knowledge of differential equations and numerical analysis to the new quantum theory.
He derived the Hartree equations for the distribution of electrons in an atom and proposed the self-consistent field method for their solution. The wavefunctions from this theory did not satisfy the Pauli exclusion principle, for which Slater showed that determinantal functions are required. V. Fock published the "equations with exchange" now known as the Hartree–Fock equations. These are considerably more demanding computationally even with the efficient methods Hartree proposed for the calculation of exchange contributions. Today, the Hartree–Fock equations are of great importance to the field of computational chemistry, and are applied and solved numerically within most of the density functional theory programs used for electronic structure calculations of molecules and condensed-phase systems.
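For reference, the self-consistent field idea behind those equations is usually summarised today by the Hartree equation for an orbital \psi_i, written here in atomic units in its standard textbook form rather than Hartree's original notation:

```latex
% Hartree equation (atomic units); standard modern form, not Hartree's original notation.
\left[ -\tfrac{1}{2}\nabla^2 + V_{\mathrm{nuc}}(\mathbf{r})
     + \sum_{j \neq i} \int \frac{|\psi_j(\mathbf{r}')|^2}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}^3 r' \right]
\psi_i(\mathbf{r}) = \varepsilon_i \, \psi_i(\mathbf{r})
```

Each orbital is solved in the averaged field of all the others, and the cycle is repeated until the field is consistent with the orbitals that generate it, which is the origin of the name self-consistent field.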
Manchester years
In 1929, Hartree was appointed to the Beyer Chair of Applied Mathematics at the University of Manchester. In 1933, he visited Vannevar Bush at the Massachusetts Institute of Technology and learned first hand about his differential analyser. Immediately on his return to Manchester, he set about building his own analyser from Meccano. Seeing the potential for further exploiting his numerical methods using the machine, he persuaded Sir Robert McDougall to fund a more robust machine, which was built in collaboration with Metropolitan-Vickers.
The first application of the machine, reflecting Hartree's enthusiasm for railways, was calculating timetables for the London, Midland and Scottish Railway.
He spent the rest of the decade applying the differential analyser to find solutions of differential equations arising in physics. These included control theory and laminar boundary layer theory in fluid dynamics making significant contributions to each of the fields.
The differential analyser was not suitable for the solution of equations with exchange. When Fock's publication pre-empted Hartree's work on equations with exchange, Hartree turned his research to radio-wave propagation that led to the Appleton–Hartree equation. In 1935, his father, William Hartree, offered to do calculations for him. Results with exchange soon followed. Douglas recognised the importance of configuration interaction that he referred to as "superposition of configurations".
The first multiconfiguration Hartree–Fock results were published by father, son, and Bertha Swirles (later Lady Jeffreys) in 1939.
At Hartree's suggestion, Bertha Swirles proceeded to derive equations with exchange for atoms using the Dirac equation in 1935. With Hartree's advice, the first relativistic calculations (without exchange) were reported in 1940 by A. O. Williams, a student of R. B. Lindsay.
Second World War
During the Second World War Hartree supervised two computing groups. The first group, for the Ministry of Supply, has been described by Jack Howlett as a "job shop" for the solution of differential equations. At the outbreak of World War II, the differential analyser at the University of Manchester was the only full-size (eight-integrator) differential analyser in the country, and arrangements were made to have the machine available for work in support of the national war effort. In time, the group consisted of four members: Jack Howlett, Nicholas R. Eyres, J. G. L. Michel, and Phyllis Lockett Nicolson. Problems were submitted to the group without information about the source but included the automatic tracking of targets, radio propagation, underwater explosions, heat flow in steel, and the diffusion equation later found to be for isotope separation. The second group was the magnetron research group of Phyllis Lockett Nicolson, David Copely, and Oscar Buneman.
The work was done for the Committee for the Co-ordination of the Valve Development assisting the development of radar. A differential analyser could have been used if more integrators had been available, so Hartree set up his group as three "CPUs" to work on mechanical desk calculators in parallel. For a method of solution, he selected what is now a classical particle simulation.
Hartree never published any of his magnetron research findings in journals though he wrote numerous highly technical secret reports during the war.
In April 1944 a committee which included Hartree recommended that a mathematical section be set up within the National Physical Laboratory (NPL). In October this recommendation was put into effect with its first two objectives being the investigation of the possible adaptation of automatic telephone equipment to scientific equipment and the development of electronic computing devices suitable for rapid computing. One suspects that some members already knew of the Colossus computer. John R. Womersley (Turing's bête noire) was the first Director. In February 1945 he went on a two-month tour of computing installations in the USA, including visiting ENIAC (still not complete). He became acquainted with drafts of von Neumann's famous June 1945 EDVAC report. About two months later Hartree also went over to see ENIAC, not then publicly known.
Later life and work
In February 1946, Max Newman (who had been involved in the Colossus computer) submitted an application to the Royal Society for funds to start the task of building a general-purpose computer at the University of Manchester. The Royal Society referred the request to Hartree and C.G. Darwin, Director of the NPL, to advise them. Hartree recommended the grant but Darwin opposed it on the grounds that Turing's ACE at NPL would be sufficient to serve the needs of the country. But Hartree's view won the day and the Manchester developments in computing were started.
Hartree did further work in control systems and was involved in the early application of digital computers, advising the US military on the use of ENIAC for calculating ballistics tables. In the summer of 1946 Hartree made his second trip to ENIAC as an evaluation of its applicability to a broad range of science, when he became the first civilian to program it. For this he selected a problem involving the flow of a compressible fluid over a surface, such as air over the surface of a wing travelling faster than the speed of sound.
At the end of 1945 or very early in 1946 Hartree briefed Maurice Wilkes of the University of Cambridge on the comparative developments in computing in the USA which he had seen. Wilkes then received an invitation from the Moore School of Electrical Engineering (the builders of ENIAC) to attend a course on electronic computers. Before leaving for this, Hartree was able to brief him more fully on ENIAC. It was on the boat home that Wilkes planned the original design of EDSAC, which was to become operational in May 1949. Hartree worked closely with Wilkes in developing use of the machine for a wide range of problems and, most importantly, showed users from a number of areas in the university how they could use it in their research work.
Hartree returned to Cambridge to take up the post of Plummer professor of mathematical physics in 1946. In October he gave an inaugural lecture entitled "Calculating Machines: Recent and Prospective Developments and their impact on Mathematical Physics". This described ENIAC and the work that Hartree had done on it. Even in 1946, two years before stored programming electronic computing became a reality, Hartree saw the need for the use of sub-routines. His inaugural lecture ended with a look at what computers might do. He said: "..there are, I understand many problems of economic, medical and sociological interest and importance awaiting study which at present cannot be undertaken because of the formidable load of computing involved."
On 7 November 1946 The Daily Telegraph, having interviewed Hartree, quoted him as saying: "The implications of the machine are so vast that we cannot conceive how they will affect our civilisation. Here you have something which is making one field of human activity 1,000 times faster. In the field of transportation, the equivalent to ACE would be the ability to travel from London to Cambridge ... in five seconds as a regular thing. It is almost unimaginable."
Hartree's fourth and final major contribution to British computing started in early 1947 when the catering firm of J. Lyons & Co. in London heard of the ENIAC and sent a small team in the summer of that year to study what was happening in the USA, because they felt that these new computers might be of assistance in the huge amount of administrative and accounting work which the firm had to do. The team met with Col. Herman Goldstine at the Institute for Advanced Study in Princeton who wrote to Hartree telling him of their search. As soon as he received this letter, Hartree wrote and invited representatives of Lyons to come to Cambridge for a meeting with him and Wilkes. This led to the development of a commercial version of EDSAC developed by Lyons, called LEO, the first computer used for commercial business applications. After Hartree's death, the headquarters of LEO Computers was renamed Hartree House. This illustrates the extent to which Lyons felt that Hartree had contributed to their new venture.
Hartree's last famous contribution to computing was an estimate in 1950 of the potential demand for computers, which was much lower than turned out to be the case: "We have a computer here in Cambridge, one in Manchester and one at the [NPL]. I suppose there ought to be one in Scotland, but that's about all." Such underestimates of the number of computers that would be required were common at the time.
Hartree's last PhD student at Cambridge, Charlotte Froese Fischer, became known for the development and implementation of the multi-configuration Hartree–Fock (MCHF) approach to atomic structure calculations and for her theoretical prediction concerning the existence of the negative calcium ion.
Personal life
Outside of his professional life, Douglas Hartree was passionate about music, having an extensive knowledge of orchestral and chamber music. He played piano and was conductor of an amateur orchestra. This passion for music was perhaps what brought him together with his wife, Elaine Charlton, who was an accomplished pianist. Their marriage resulted in two sons, Oliver and John Richard, and one daughter, Margaret.
He died of heart failure in Addenbrooke's Hospital, Cambridge, on 12 February 1958.
Honours and awards
Fellow of the Royal Society, (1932)
The Hartree unit of energy is named after him.
The Hartree Centre is named after him.
Books
(also (1950) Cambridge University Press)
References
Further reading
The Manchester differential analyser
Fellows of the Royal Society
1897 births
1958 deaths
People from Cambridge
History of computing in the United Kingdom
English physicists
English mathematicians
20th-century mathematicians
Numerical analysts
Mathematical physicists
Academics of the Victoria University of Manchester
Academics of the University of Cambridge
Alumni of St John's College, Cambridge
People educated at Bedales School
Computational chemists
Manchester Literary and Philosophical Society
|
624625
|
https://en.wikipedia.org/wiki/On%20the%20Cruelty%20of%20Really%20Teaching%20Computer%20Science
|
On the Cruelty of Really Teaching Computer Science
|
“On the Cruelty of Really Teaching Computing Science” is a 1988 paper by E. W. Dijkstra which argues that computer programming should be understood as a branch of mathematics, and that the formal provability of a program is a major criterion for correctness.
Despite the title, most of the article is on Dijkstra’s attempt to put computer science into a wider perspective within science, teaching being addressed as a corollary at the end.
Specifically, Dijkstra made a “proposal for an introductory programming course for freshmen” that consisted of Hoare logic as an uninterpreted formal system.
Debate over feasibility
Since the term "software engineering" was coined, formal verification has almost always been considered too resource-intensive to be feasible. In complex applications, the difficulty of correctly specifying what the program should do in the first place is also a common source of error. Other methods of software testing are generally employed to try to eliminate bugs and many other factors are considered in the measurement of software quality.
Until the end of his life, Dijkstra maintained that the central challenges of computing hadn't been met to his satisfaction, due to an insufficient emphasis on program correctness (though not obviating other requirements, such as maintainability and efficiency).
Pedagogical legacy
Computer science as taught today does not follow all of Dijkstra's advice. The curricula generally emphasize techniques for managing complexity and preparing for future changes, following Dijkstra's earlier writings. These include abstraction, programming by contract, and design patterns. Programming techniques to avoid bugs and conventional software testing methods are taught as basic requirements, and students are exposed to certain mathematical tools, but formal verification methods are not included in the curriculum except perhaps as an advanced topic. So in some ways, Dijkstra's ideas have been adhered to; however, the ideas he felt most strongly about have not been.
Newly formed curricula in software engineering have adopted Dijkstra's recommendations. The focus of these programs is the formal specification of software requirements and design in order to facilitate the formal validation of system correctness. In Canada, they are often accredited engineering degrees with similar core competencies in physics-based engineering.
References
1988 documents
Computer science papers
Computer science education
Works by Edsger Dijkstra
|
38571713
|
https://en.wikipedia.org/wiki/PLA%20Unit%2061398
|
PLA Unit 61398
|
PLA Unit 61398 (also known as APT 1, Comment Crew, Comment Panda, GIF89a, and Byzantine Candor) (61398部队, Pinyin: 61398 bùduì) is the Military Unit Cover Designator (MUCD) of a People's Liberation Army advanced persistent threat unit that has been alleged to be a source of Chinese computer hacking attacks. The unit is stationed in Pudong, Shanghai.
History
2014 indictment
On 19 May 2014, the US Department of Justice announced that a Federal grand jury had returned an indictment of five 61398 officers on charges of theft of confidential business information and intellectual property from U.S. commercial firms and of planting malware on their computers. The five are Huang Zhenyu (黄振宇), Wen Xinyu (文新宇), Sun Kailiang (孙凯亮), Gu Chunhui (顾春晖), and Wang Dong (王东). Forensic evidence traces the base of operations to a 12-story building off Datong Road in a public, mixed-use area of Pudong in Shanghai. The group is also known by various other names including "Advanced Persistent Threat 1" ("APT1"), "the Comment group" and "Byzantine Candor", a codename given by US intelligence agencies since 2002.
A report by the computer security firm Mandiant stated that PLA Unit 61398 is believed to operate under the 2nd Bureau of the People's Liberation Army General Staff Department (GSD) Third Department (总参三部二局) and that there is evidence that it contains, or is itself, an entity Mandiant calls APT1, part of the advanced persistent threat that has attacked a broad range of corporations and government entities around the world since at least 2006. APT1 is described as comprising four large networks in Shanghai, two of which serve the Pudong New Area. It is one of more than 20 APT groups with origins in China. The Third and Fourth Department, responsible for electronic warfare, are believed to comprise the PLA units mainly responsible for infiltrating and manipulating computer networks.
The group often compromises internal software "comment" features on legitimate web pages to infiltrate target computers that access the sites, leading it to be known as "the Comment Crew" or "Comment Group". The collective has stolen trade secrets and other confidential information from numerous foreign businesses and organizations over the course of seven years such as Lockheed Martin, Telvent, and other companies in the shipping, aeronautics, arms, energy, manufacturing, engineering, electronics, financial, and software sectors.
Dell SecureWorks said it believed the group included the same attackers behind Operation Shady RAT, an extensive computer espionage campaign uncovered in 2011 in which more than 70 organizations over a five-year period, including the United Nations, government agencies in the United States, Canada, South Korea, Taiwan and Vietnam, were targeted.
The attacks documented in the summer of 2011 represent a fragment of the Comment group's attacks, which go back at least to 2002, according to incident reports and investigators. FireEye, Inc. alone has tracked hundreds of targets in the last three years and estimates the group has attacked more than 1,000 organizations.
Most activity between malware embedded in a compromised system and the malware's controllers takes place during business hours in Beijing's time zone, suggesting that the group is professionally hired, rather than private hackers inspired by patriotic passions.
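Such an inference is typically made by converting observed command-and-control timestamps into the suspected operators' local time and checking how they cluster. The following minimal Python sketch illustrates that kind of analysis; the sample timestamps and the 09:00-18:00 working window are invented for the example and are not drawn from the Mandiant report.

from collections import Counter
from datetime import datetime, timezone, timedelta

CST = timezone(timedelta(hours=8))  # China Standard Time (UTC+8)

def business_hours_ratio(beacon_times_utc):
    """Fraction of observed callbacks falling between 09:00 and 18:00 Beijing time."""
    hours = Counter(t.astimezone(CST).hour for t in beacon_times_utc)
    total = sum(hours.values())
    in_hours = sum(count for hour, count in hours.items() if 9 <= hour < 18)
    return in_hours / total if total else 0.0

if __name__ == "__main__":
    # Hypothetical beacon timestamps recorded in UTC.
    samples = [
        datetime(2013, 2, 18, 1, 30, tzinfo=timezone.utc),   # 09:30 in Beijing
        datetime(2013, 2, 18, 6, 15, tzinfo=timezone.utc),   # 14:15 in Beijing
        datetime(2013, 2, 18, 20, 0, tzinfo=timezone.utc),   # 04:00 next day in Beijing
    ]
    print(f"{business_hours_ratio(samples):.0%} of beacons fall in Beijing business hours")

A high ratio of callbacks in the operators' working hours is one of the behavioural signals analysts combine with infrastructure and malware evidence when attributing activity.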
Public position of the Chinese government
Until 2013, the Government of China consistently denied that it was involved in hacking. In response to the Mandiant Corporation report about Unit 61398, Hong Lei, a spokesperson for the Chinese foreign ministry, said such allegations were "unprofessional".
In 2013, China changed its position and openly admitted to having secretive cyber warfare units in both the military and the civilian part of the government; however, the details of their activities were left to speculation. As a show of force towards the rest of the global community, the Chinese government now openly lists its digital espionage and network attack capabilities.
Cultural references
In the 2022 cyber thriller Rise of the Water Margin, a 21st-century adaptation of the classic Water Margin, Unit 61398 is commanded by Lin Chong. His team infiltrates semiconductor EDA tools in order to embed a back door into semiconductors.
See also
Titan Rain
Chinese espionage in the United States
National Security Agency of the United States
PLA Unit 61486
Signals intelligence
Tailored Access Operations of the United States
Mandiant
FireEye
References
Military units and formations of the People's Republic of China
Cyberwarfare in China
Chinese advanced persistent threat groups
Information operations units and formations
Hacking (computer security)
Injection exploits
Web security exploits
Sabotage
2002 establishments in China
Chinese intelligence agencies
|
1954948
|
https://en.wikipedia.org/wiki/Inkwell%20%28Macintosh%29
|
Inkwell (Macintosh)
|
Inkwell, or simply Ink, is the name of the handwriting recognition technology developed by Apple Inc. and built into the Mac OS X operating system. Introduced in an update to Mac OS X v10.2 "Jaguar", Inkwell can translate English, French, and German writing. The technology made its debut as "Rosetta", an integral feature of Apple Newton OS, the operating system of the short-lived Apple Newton personal digital assistant. Inkwell's inclusion in Mac OS X led many to believe Apple would be using this technology in a new PDA or other portable tablet computer. None of the touchscreen iOS devices – iPhone/iPod/iPad – has offered Inkwell handwriting recognition. However, handwriting recognition was introduced in iPadOS 14 as a feature called Scribble.
Inkwell, when activated, appears as semi-transparent yellow lined paper, on which the user sees their writing appear. When the user stops writing, their writing is interpreted by Inkwell and pasted into the current application (wherever the active text cursor is), as if the user had simply typed the words. The user can also force Inkwell to not interpret their writing, instead using it to paste a hand-drawn sketch into the active window.
Inkwell was developed by Larry Yaeger, Brandyn Webb, and Richard Lyon.
In macOS 10.14 Mojave, Apple announced that Inkwell would remain 32-bit, thus rendering it incompatible with macOS 10.15 Catalina. It was officially discontinued with the release of macOS Catalina on October 7, 2019.
References
External links
InkSpatter, a blog which discusses pros and cons of Inkwell
MacOS user interface
Handwriting recognition
|
50331884
|
https://en.wikipedia.org/wiki/Lalit%20Surajmal%20Kanodia
|
Lalit Surajmal Kanodia
|
Lalit Surajmal Kanodia (born 30 March 1941) is an Indian business entrepreneur and is currently Chairman of the Datamatics Group of Companies, which he founded in 1975. He also holds the position of National President of the Indo-American Chamber of Commerce and Vice President of the Indian Merchants Chamber, both organizations of the Indian business community. He has also served as President of the Management Consultants Association of India.
Early life and education
Lalit Kanodia was born in Kolkata (then Calcutta) in West Bengal (India), the son of Shri Surajmal Kanodia, a bullion merchant and Smt. Chandravati Kanodia, a home maker. His family moved to Mumbai in 1942.
Kanodia attended Bombay Scottish School in Mumbai. Besides his academics, he participated in athletics and captained his school's football team.
Kanodia studied science at Elphinstone College, Bombay University for 2 years. He was then admitted to the Indian Institute of Technology Bombay, where he studied Mechanical Engineering. After graduating in 1963, Lalit secured admission to MIT, Cambridge, MA and completed his MS in Management in 1965 with the highest grade in the graduating class. He was awarded the Ford Foundation Fellowship while at MIT. He returned to MIT in 1966 for his PhD in management, which he completed in 1967. Lalit was a member of Project MAC at MIT, which built the Compatible Time-Sharing System and MULTICS (the first two multi-user computer operating systems and precursors to UNIX).
Teaching
While at MIT, Lalit taught a course on statistical decision theory to MBA students (1964–65). Later, in India he taught MBA students for 2 years (during 1968–70) at the Jamnalal Bajaj Institute of Management Studies, Bombay University.
Tata Consultancy Services
In 1965, JRD Tata, the then Chairman of the Tata Group, was contemplating starting a software company. He chanced upon Lalit's CV and asked him to study the feasibility of computerization within the Tata Group. Lalit wrote three papers for the Tata Group which led to the automation of the Load Dispatch System of the Tata Electric Companies by Westinghouse, the computerization of the company's electricity billing system, and the formation of a software development center. Kanodia then returned to MIT for his doctorate. He returned to India to form and head the software development center for the Tata Group. Kanodia formed the company in 1967 as the Tata Computer Center, which was renamed Tata Consultancy Services in 1968.
Consulting
While in the United States, Lalit consulted for Arthur D. Little and the Ford Motor Company. In India, he has been a consultant to the State Bank of India, the Somani Group and the Kamani Group of companies.
Datamatics
Lalit established his own group of companies under the banner "Datamatics" in 1975. What he started with a modest team of 10 employees is now 8,000 strong. In 1979 he set up the first dedicated offshore development center, for Wang Laboratories. He also established the first satellite link for software development from India, between its software development center in Mumbai and AT&T Bell Labs in the USA, in 1991. This led to the foundation of BPO services in India, and Kanodia formally started another company, "Datamatics Technologies Limited", with a 100% focus on BPO and KPO services. The start of BPO services helped Datamatics spread its wings globally, and it acquired SAZTEC and CorPay, two US-based companies, in 1997 and 2003 respectively. Since then Datamatics has acquired other companies internationally. Lalit is currently Group Chairman of Datamatics, which comprises:
Datamatics Global Services Ltd (A listed Company with BSE/NSE)
CIGNEX Datamatics Technologies Ltd
Lumina Datamatics Ltd
Datamatics Staffing Services Ltd
Personal life
Lalit has four children with his wife Asha Kanodia. Eldest son Rahul Kanodia is vice chairman and CEO of Datamatics Global Services and youngest son Sameer Kanodia is an Executive Director. His two daughters Aneesha and Amrita are married.
Recognition
Indian Affairs Indian of the year Award for IT, Consulting and BPO services
Special Achievement Award at Asia Pacific Entrepreneurship Awards
Global Achiever Award for Business Excellence
Award from Prime Minister of India for the most innovative software product
Kanodia was president of the Management Consultants' Association of India. He is National President of the Indo-American Chamber of Commerce. He is vice president of the Indian Merchants Chamber and chairman of its IT committee.
He was a member of the executive committee of NASSCOM.
He has been Chairman of the Electronic & Computer Software Export Promotion Council (Western Region).
He joined the executive board of the Sloan School of Management, at MIT in 2008.
He served as the Honorary Consul General of Chile in India from 2002 to 2014.
References
Honorary Knights Grand Cross of the Order of the British Empire
MIT Sloan School of Management alumni
Businesspeople from Mumbai
Businesspeople from Haryana
20th-century Indian businesspeople
1941 births
Living people
Indian industrialists
IIT Bombay alumni
|
1776682
|
https://en.wikipedia.org/wiki/Custom%20software
|
Custom software
|
Custom software (also known as bespoke software or tailor-made software) is software that is specially developed for some specific organization or other user. As such, it can be contrasted with the use of software packages developed for the mass market, such as commercial off-the-shelf software, or existing free software.
Considerations
Since custom software is developed for a single customer it can accommodate that customer's particular preferences and expectations, which may not be the case for commercial off-the-shelf software. Custom software may be developed in an iterative process, allowing all nuances and possible hidden risks to be taken into account, including issues which were not mentioned in the original requirement specifications (which are, as a rule, never perfect). In particular, the first phase in the software development process may involve many departments, including marketing, engineering, research and development and general management.
Large companies commonly develop custom software for critical functions, including content management, inventory management, customer management, human resource management, or otherwise to fill the gaps present in existing software packages. In many cases, such software is legacy software, developed before commercial off the shelf software or free software packages offering the required functionality with an acceptable level of quality or functionality became available or widely known. For example, the BBC spent a great deal of money on a project to develop its own custom digital media production and management software, but the project experienced troubles, and after many years of development, was cancelled. A key stated reason for the project cancellation was that it had become clear that commercial off-the-shelf software existed that was, by that point, adequate to the BBC's needs and available for a small fraction of the price.
Custom software development is often considered expensive compared to off-the-shelf solutions or products. This can be true if one is speaking of typical challenges and typical solutions. However, it is not always true. In many cases, commercial off the shelf software requires customization to correctly support the buyer's operations. The cost and delay of commercial off the shelf software customization can even add up to the expense of developing custom software.
Cost is also not the only consideration in the decision to develop custom software, as the requirements for a custom software project often include the purchaser owning the source code, to secure the possibility of future improvement or modifications to the installed system to handle changing requirements. However, modern commercial off the shelf software often has application programming interfaces (APIs) for extensibility - or occasionally, as in the case of Salesforce.com, a domain-specific language (DSL) - meaning that commercial off the shelf software packages can sometimes accommodate quite a wide variety of customisations without the need to access source code of the core commercial off the shelf software system.
Additionally, commercial off the shelf software comes with upfront license costs which vary enormously, but sometimes run into the millions of US dollars. Furthermore, the big software houses that release commercial off the shelf software products revamp their product very frequently. Thus a particular customization may need to be upgraded for compatibility every two to four years. Given the cost of customization, such upgrades can also turn out to be expensive, as a dedicated product release cycle may have to be earmarked for them. However, in theory, the use of documented APIs and/or DSLs, as opposed to direct access to internal database tables and code modules, for customization can minimize the cost of these upgrades. This is because commercial off the shelf software vendors can opt to use techniques such as the following (a minimal code sketch illustrating them follows the list):
making "under the hood" changes while retaining backward compatibility with customizations written for older API or DSL version(s)
supporting old API version(s) and new API versions simultaneously in a new version of the software
publishing guidance warning that support for old API or DSL versions is to be removed from the product in a subsequent version, to give customers more time to adapt customizations.
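As a hedged illustration of the second and third of these techniques, the following minimal Python sketch keeps an old API entry point working as a thin wrapper over a newer one while emitting a deprecation warning as advance guidance to customers. The function names, signatures and data shapes are invented for this example and are not taken from any real commercial off the shelf product.

import warnings

# Hypothetical vendor API: the names create_invoice_v1/create_invoice_v2 and
# their parameters are invented for illustration only.

def create_invoice_v2(customer_id, lines, currency="USD"):
    """Newer API version: accepts structured line items and an explicit currency."""
    total = sum(item["qty"] * item["unit_price"] for item in lines)
    return {"customer": customer_id, "currency": currency, "total": total, "lines": lines}

def create_invoice_v1(customer_id, amount):
    """Older API version, kept as a thin wrapper over v2 so customizations
    written against it keep working while customers are warned to migrate."""
    warnings.warn(
        "create_invoice_v1 is deprecated and will be removed in a future release; "
        "migrate to create_invoice_v2",
        DeprecationWarning,
        stacklevel=2,
    )
    return create_invoice_v2(customer_id, [{"qty": 1, "unit_price": amount}])

if __name__ == "__main__":
    # A customization written against the old API still runs unchanged.
    print(create_invoice_v1("ACME-001", 99.50))
    # New integrations use the richer v2 signature.
    print(create_invoice_v2("ACME-001", [{"qty": 3, "unit_price": 10.0}], currency="EUR"))

Running the example shows an old-style call still working while the integrator is warned to migrate, which is the essence of supporting old and new API versions simultaneously.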
The decision to build a custom software or go for a commercial off the shelf software implementation would usually rest on one or more of the following factors:
Finances - both cost and benefit: The upfront license cost for commercial off the shelf software products means that a thorough cost-benefit analysis of the business case needs to be done. However, it is widely known that large custom software projects cannot hold all three of scope, time/cost and quality constant, so either the cost or the benefits of a custom software project will be subject to some degree of uncertainty - even disregarding the uncertainty around the business benefits of a feature that is successfully implemented.
Supplier - In the case of commercial off the shelf software, is the supplier likely to remain in business long, and will there be adequate support and customisation available? Alternatively, will there be a realistic possibility of getting support and customisation from third parties? In the case of custom software, the software development may be outsourced or done in-house. If it is outsourced, the question is: is the supplier reputable, and do they have a good track record?
Time to market: commercial off the shelf software products usually have a quicker time to market
Size of implementation: commercial off the shelf software comes with standardization of business processes and reporting. For a global or national organisation, these can bring in gains in cost savings, efficiency and productivity, if the branch offices are all willing and able to use the same commercial off the shelf software without heavy customisations (which is not always a given).
Major fields
Construction
The construction industry uses custom software to manage projects, track changes, and report progress. Depending on the project, the software is modified to suit the particular needs of a project owner, the design team, and the general and trade contractors.
Project-specific data is used in other ways to suit the unique requirements of each project. Custom software accommodates a project team's particular preferences and expectations, making it suitable for most construction processes and challenges:
design development
tender calls
document control
shop drawing approvals
changes management
inspections and commissioning
way-finding
Custom software developers use various platforms, like FileMaker Pro, to design and develop custom software for the construction industry and for other industries.
Hospitals
Hospitals can keep electronic health records and retrieve them any time. This enables a doctor and his or her assistants to transfer the details of a patient through a network.
Keeping patients' blood groups in a hospital database makes the search for suitable blood quicker and easier.
Hospitals also use billing software, especially in their dispensaries.
Places of education
Schools use custom software to keep admission details of students. They also produce transfer certificates. Some governments develop special software for all of their schools. Sampoorna is a school management system project implemented by the Education Department of the Government of Kerala, India to automate the system and processes of over 15,000 schools in the state. These projects bring uniformity to the schools.
Retail
Billing is a common use of custom software. Custom software is often used by small shops, supermarkets and wholesale sellers to handle inventory details and to generate bills.
Major project successes
Major project overruns and failures
Failures and cost overruns of government IT projects have been extensively investigated by UK Members of Parliament and officials; they have had a rich seam of failures to examine, including:
The NHS National Programme for IT
Rural Payments Agency computer systems. On 15 March 2006 the Chief Executive Johnson McNeil was sacked when a deadline of 14 February for calculating Single Payment Scheme entitlements was missed.
Universal Credit - the first trial could not perform even the most basic functions correctly; the project fell behind schedule and has reportedly been restarted.
1992 - LASCAD - the London Ambulance Service's new computer-aided despatch system - temporary crashes causing delays in routing ambulances. A previous attempt to develop a custom despatch system for the London Ambulance Service had also been scrapped.
Advantages and disadvantages
When a business is considering a software solution the options are generally between creating a spreadsheet (which is often done in Microsoft Excel), obtaining an off-the-shelf product, or having custom software created specifically to meet their needs. There are five main criteria involved in selecting the correct solution:
Initial assessments of the options according to these criteria may deviate sharply from the reality of the eventual solution when put into practice, due to factors such as cost overruns, insufficient training, poor product fit, and the reliability of the solution.
These factors need to take into account the running of the business, its industry, size and turnover. As such the decision can only be made on a business-by-business basis to determine if it warrants a custom development, as well as ownership of the software.
Advantages
Custom software will generally produce the most efficient system as it can provide support for the specific needs of the business, which might not be available in an off-the-shelf solution and will provide greater efficiency or better customer service.
Given a suitable approach to development, such as DSDM, custom software will also produce the best or most well-targeted service improvement. Businesses can tailor the software to what their customers want instead of having to choose a package that caters for a generic market. For example, one printing business may want software that responds in the shortest time, whereas another printing company may focus on producing the best results; as these two objectives often conflict, an off-the-shelf package will normally sit somewhere in the middle whereas with custom software each business can focus on their target audience.
Although not always the most suitable for larger or more complex projects, a spreadsheet allows less technical staff at a business to modify the software directly and get results faster. Custom software can be even more flexible than spreadsheets as it is constructed by software professionals that can implement functionality for a wide range of business needs.
Disadvantages
The main disadvantages of custom software are development time and cost. With a spreadsheet or an off-the-shelf software package, a user can get benefits quickly. With custom software, a business needs to go through a Software development process that may take weeks, months, or with bigger projects, years. Bugs accidentally introduced by software developers, and thorough testing to iron out bugs, may impede the process and cause it to take longer than expected. However, spreadsheets and off-the-shelf software packages may also contain bugs, and moreover because they may be deployed at a business without formal testing, these bugs may slip through and cause business-critical errors.
Custom software is often several times the cost of the other two options, and will normally include an ongoing maintenance cost. This will often make custom software infeasible for smaller businesses. These higher costs can be insignificant in larger businesses where small efficiency increases can relate to large labour cost savings or where custom software offers a large efficiency boost.
Hybrid model
Particularly with modern cloud software, a hybrid model of custom software is possible: the starting point is commercial off the shelf software, and attention focuses mainly on the mismatch between its features and functions and the business requirements, preferences and expectations. The idea here is to buy commercial off the shelf software which satisfies the maximum number of requirements and to develop custom software (extensions or add-ons) to fill the gaps left by it. This is the standard approach used when implementing SAP ERP, for example.
See also
Bespoke
Software development
References
Computing terminology
|
290998
|
https://en.wikipedia.org/wiki/Broderbund
|
Broderbund
|
Broderbund Software, Inc. (stylized as Brøderbund) was an American maker of video games, educational software, and productivity tools. Broderbund is best known for the 8-bit video game hits Choplifter, Lode Runner, Karateka, and Prince of Persia (all of which originated on the Apple II), as well as The Print Shop—originally for printing signs and banners on dot matrix printers—and the Myst and Carmen Sandiego games. The company was founded in Eugene, Oregon, and moved to San Rafael, California, then later to Novato, California. Broderbund was purchased by The Learning Company (formerly SoftKey) in 1998.
Many of Broderbund's software titles, such as The Print Shop, PrintMaster, and Mavis Beacon, are still published under the name "Broderbund". Games released by the revived Broderbund are distributed by Encore, Inc. Broderbund is now the brand name for Riverdeep's graphic design, productivity, and edutainment titles such as The Print Shop, Carmen Sandiego, Mavis Beacon Teaches Typing, the Living Books series, and Reader Rabbit titles, in addition to publishing software for other companies, notably Zone Labs' ZoneAlarm.
The company would often release school editions of their games, which contained extra features to allow teachers to use the software to facilitate students' learning.
Etymology
The word "brøderbund" is not an actual word in any language but is a somewhat loose translation of "band of brothers" into a mixture of Danish, Dutch, German, and Swedish. The "ø" in "brøderbund" was used partially as a play on the letter ø from the Dano-Norwegian alphabet but was mainly referencing the slashed zero found in mainframes, terminals, and early personal computers. The three crowns above the logo are also a reference to the lesser national coat of arms of Sweden.
The company's intended pronunciation of its name differs from the one popularly used.
History
Broderbund was founded by brothers Doug and Gary Carlston in 1980 for the purpose of marketing Galactic Empire, a strategy computer game that Doug Carlston had created in 1979. Before founding the company, Doug was a lawyer and Gary had held several jobs, including teaching Swedish at an American college. Their sister Cathy joined the company a year later from Lord & Taylor. Galactic Empire had many names taken from African languages; a group of merchants was named Broederbond, Afrikaans for "association of brothers". To emphasize its family origin while avoiding a connection with the ethnonationalist Afrikaner organization of the same name, the Carlstons altered the spelling when naming their company "Broderbund".
By 1982 Broderbund produced arcade games which, the company told Jerry Pournelle, sold much better than strategy games. Burr, Egan, Deleage & Co. invested in the company that year. In 1983 the Carlstons publicly discussed their plans to emphasize home utility software (Bank Street Writer and other "Bank Street" applications), computer literacy with The Jim Henson Company, and edutainment. By early 1984 InfoWorld estimated that Broderbund was tied with Human Engineered Software as the world's tenth-largest microcomputer-software company and largest entertainment-software company, with $13 million in 1983 sales. That year it took over the assets of the well-regarded but financially troubled Synapse Software. Although intending to keep it running as a business, they were unable to make money from Synapse's products, and closed it down after a year.
Broderbund's The Print Shop software produced signs and greeting cards. Broderbund started discussions with Unison World about creating an MS-DOS version. The two companies could not agree on a contract, but Unison World developed a product with similar function and a similar user interface. Broderbund sued for infringement of their copyright. Broderbund v. Unison (1986) became a landmark case in establishing that the look and feel of a software product could be subject to copyright protection.
Sierra On-Line and Broderbund ended merger discussions in March 1991. By that year Broderbund had about $50 million in revenue, and 25% share of the education market. It developed most of its software, as opposed to publishing software others had developed; Doug Carlston stated the company needed "to control our own sources, to control our future". After an unsuccessful initial public offering in 1987, Broderbund executed a private placement for 20% of shares with Jostens. It became a public company in November 1991; its NASDAQ symbol was BROD. When Broderbund went public The Print Shop comprised 33% of total revenue, and the Carmen Sandiego series 26%. The company's stock price and market capitalization climbed steadily to a maximum of nearly US$80/share in late 1995, and then fell steadily in the face of continued losses for several years.
Broderbund acquired PC Globe in July 1992. It had initially attempted to purchase the original The Learning Company in 1995, but was outbid by SoftKey, who purchased The Learning Company for $606 million in cash and then adopted its name. The company then bought Broderbund in 1998 for about US$420 million in stock and, in a move to rationalize costs, it terminated five hundred employees at Broderbund the same year (representing 42% of the company's workforce). Doug Carlston explained that in a bid to roll up Broderbund, SoftKey utilised one of its previous acquisitions to weaken the company's hold over the industry. They allegedly offered a rebate on Mindscape's PrintMaster, a direct competitor to Broderbund's Print Shop, that was worth more than the product itself. In 1998, Broderbund inked a deal with Nickelodeon to develop CD-ROM games based on its animated cartoons, such as Rugrats.
In 1999, the combined company was purchased by Mattel for $3.6 billion. Mattel reeled from the financial impact of this transaction, and Jill E. Barad, the CEO, ended up being forced out in a climate of investor outrage. Mattel sold their game division Mattel Interactive as well as all its assets in September 2000 to Gores Technology Group, a private acquisitions firm, for a share of whatever Gores could obtain by selling the company. During this time, Broderbund products were owned by The Learning Company Deutschland GmbH, located in Oberhaching, Germany. Headed by Jean-Pierre Nordmann, the company was a subsidiary of The Learning Company (formerly SoftKey), which itself was a wholly owned subsidiary of Gores Technology Group. The company published games under two logos: Blue (Broderbund) and Red (The Learning Company). The "Broderbund" label was used for "high-quality infotainment, design and lifestyle titles such as Cosmopolitan My Style 2 and PrintMaster", while "The Learning Company" label was used for children's software.
In 2001, Gores sold The Learning Company's entertainment holdings to Ubi Soft, and most of the other holdings, including the Broderbund name, to Irish company Riverdeep. Many of Broderbund's games, such as the Myst series, are published by Ubisoft. The Broderbund line of products is published by Encore, Inc. under license from Riverdeep. Under the terms of the agreement, Encore now manages the Broderbund family of products as well as Broderbund's direct to consumer business. In May 2010 Encore acquired the assets of Punch! Software.
In 2014, Doug Carlston donated a collection of Broderbund's business records, software, and a collection of games that includes Myst, Prince of Persia, and Where in the World is Carmen Sandiego? to The Strong National Museum of Play. The Strong National Museum of Play forwarded the collection to the ICHEG museum for preservation.
As of 2017, Houghton Mifflin Harcourt is offering the Broderbund, The Print Shop, Calendar Creator, and ClickArt brands for licensing.
Products
Broderbund scored an early hit with the game Galactic Empire, written by Doug Carlston for the TRS-80. The company's first title for the Apple II, Tank Command, was written by the third Carlston brother, Professor Donal Carlston.
The company became a powerhouse in the educational and entertainment software markets with titles like Fantavision, Choplifter, Apple Panic, Lode Runner, Karateka, Wings of Fury, Prince of Persia, Where in the World is Carmen Sandiego?, The Guardian Legend, and Myst, which remained the highest-grossing home video game for years.
Broderbund became one of the most dominant publishers in the computer market of the 1980s, releasing video games for virtually all major computer systems in the United States. Like most early computer gaming developers, Broderbund began as an Apple II-focused company and began expanding to other platforms as time went along. They released IBM PC ports of a few games very early on; however, it was not until after 1985 that Broderbund would seriously develop for PC compatibles. Due to their strong focus on education titles, they were one of a few developers to actively support the Apple IIGS in the late 1980s. Some of the more popular Broderbund titles were licensed to Western European and Japanese developers and ported to systems in those regions. During the 1990s, Broderbund mostly concentrated on educational titles for PCs and Macintoshes with a few forays into RPGs and strategy games.
Broderbund published the Print Shop series of desktop publishing programs; Family Tree Maker (a genealogy program supported by hundreds of CDs of public genealogy data); 3D Home Architect, a program for designing and visualizing family homes; and Banner Mania, a program for designing and printing multi-page banners. By the end of the 1980s, games represented only a few percent of Broderbund's annual sales, which by then were heavily focused in the productivity arena and early education and learning areas.
Just before being acquired by The Learning Company (formerly SoftKey), Broderbund spun off its Living Books series by forming a joint venture with Random House Publishing. Despite the success and quality of the Living Books series, the joint venture was only marginally successful and was dissolved with The Learning Company deal.
For a brief time, Broderbund was involved in the video game console market when it published a few games for the Nintendo Entertainment System (NES) through its New Ventures Division. All of Broderbund's games for the NES, including the port of its own franchises Lode Runner, Spelunker, and Raid on Bungeling Bay, were developed by third-party Japanese companies. Broderbund published some titles that were produced by companies that didn't have a North American subsidiary, such as Irem's Deadly Towers, Compile's The Guardian Legend, Imagineer's The Battle of Olympus, and Legacy of the Wizard, the fourth installment in Nihon Falcom's Dragon Slayer series. Broderbund also developed and marketed an ill-fated motion sensitive NES controller device called the U-Force, which was operated without direct physical contact between the player and the device. Broderbund also served as distributing agent of Irem's North American NES release of Sqoon, because Irem didn't yet have its own American operation. In 1990, Broderbund sold its New Ventures Division, including manufacturing equipment, inventory, and assets, to then-fledgling company THQ.
Broderbund released Arsys Software's 1986 third-person action RPG shooter WiBArm in the United States.
Broderbund briefly had a board game division, which published Don Carlston's Personal Preference, along with several board game versions of its video games.
See also
List of companies based in Oregon
Red Orb Entertainment — Broderbund's game publishing division, later supported by Mindscape
References
External links
Profile at MobyGames
Houghton Mifflin Harcourt
Defunct video game companies of the United States
Video game development companies
Video game publishers
Software companies based in Oregon
Software companies based in the San Francisco Bay Area
Defunct companies based in the San Francisco Bay Area
Video game companies established in 1980
Video game companies disestablished in 1998
Mattel
Software companies established in 1980
Software companies disestablished in 1998
1980 establishments in California
1998 disestablishments in California
Companies based in San Rafael, California
Companies based in Marin County, California
Novato, California
Companies based in Eugene, Oregon
1998 mergers and acquisitions
|
63973048
|
https://en.wikipedia.org/wiki/Ramsay%20Malware
|
Ramsay Malware
|
Ramsay, also referred to as Ramsay Malware, is a cyber espionage framework and toolkit that was discovered by ESET Research in 2020.
Ramsay is specifically tailored for Windows systems on networks that are not connected to the internet and are also isolated from company intranets, so-called air-gapped networks, from which it steals sensitive documents like Word documents after first collecting them in a hidden storage folder.
ESET researchers found various versions of the malware, and believe that in May 2020 it was still under development. They numbered the versions Ramsay Version 1, Ramsay Version 2a and Ramsay Version 2b. The very first encounter with the malware was a sample uploaded from Japan to VirusTotal. The first version was compiled in September 2019. The last version that they found was the most advanced.
The discovery of Ramsay was seen as significant as malware is rarely able to target physically isolated devices.
Authorship
While authorship has not been attributed, the malware shares many artefacts with Retro, a backdoor by the hacking entity Darkhotel, believed to operate in the interests of South Korea.
Workings of the malware
The three versions of Ramsay that ESET found have different workings.
Ramsay version 1 does not include a rootkit, whilst the later versions do.
Ramsay version 1 and 2.b exploit CVE-2017-0199, a "Microsoft Office/WordPad Remote Code Execution Vulnerability w/Windows API."
Version 2.b also uses exploit CVE-2017-11882 as an attack vector.
The way in which Ramsay can spread is via removable media like USB sticks and network shares. In this way, the malware can jump the air gap.
References
External links
WeLiveSecurity article on Ramsay as saved in the Internet Archive
ESET press release on Ramsay as saved in the Internet Archive
Rootkits
Windows trojans
Computer security exploits
Security breaches
Cybercrime
Cyberwarfare
|
40286713
|
https://en.wikipedia.org/wiki/DOS%20386
|
DOS 386
|
DOS 386 or DOS/386 may refer to:
Concurrent DOS 386, a Digital Research CP/M- and DOS-compatible multiuser multitasking operating system variant since 1987
FlexOS 386, a Digital Research FlexOS operating system variant since 1987
PC-MOS/386, a DOS-compatible multiuser, multitasking operating system produced by The Software Link since 1987
See also
DOS 3 (disambiguation)
DOS 286 (disambiguation)
DOS (disambiguation)
DOS/360
|
2845501
|
https://en.wikipedia.org/wiki/Keydata%20Corporation
|
Keydata Corporation
|
Keydata Corporation was one of the first companies in the time-sharing business in the 1960s. It was the brainchild of Charles W. Adams, an entrepreneur who had founded "Adams Associates", a firm best remembered as the author of computer equipment surveys during this period.
Keydata was located in Technology Square in Cambridge, Massachusetts (it later moved to Watertown, Massachusetts), which was also home to Project MAC, the seminal venture sponsored by MIT that saw the development of MULTICS, one of the earliest time-sharing software systems. UNIX was strongly influenced by MULTICS. In addition, IBM's Cambridge Scientific Center was located in Technology Square, and this R&D center developed CP/CMS, IBM's first virtual memory system. This was initially installed on a modified IBM System/360 Model 40 computer with the informal name of the "Cambridge box." IBM later used modernized versions of the technology for the 360/67 and, today, all modern computers use "virtual memory."
The coincident location of these time-sharing and virtual memory developers in Cambridge resulted in a heady climate of state-of-the-art information technology knowledge sharing from which Keydata profited, although its UNIVAC computer architecture permitted only software-based implementations. At the time, the fashion was the idea that computer power would be made available on a network connection of a "dumb" terminal to a "smart" mainframe computer utility, sharing mammoth computer power with thousands, if not millions, of users.
Keydata used a UNIVAC 490 computer with drum (secondary) memory to provide commercial applications such as inventory management and accounting on a network basis to slow Teletype-based terminals in customer locations, replacing in-house computers and other services with its highly customized, parameter-driven distribution and manufacturing applications. The online transaction management application was monolithic, written in a proprietary high-level language; it consisted of hundreds of thousands of lines of code. The application was highly parametrized such that it could be customized to each customer's requirements just by tweaking the parameters. New parameters were introduced as needed. Networking to customers consisted of private, point-to-point connections through AT&T.
Other seminal services were initially implemented on this service, such as Instinet, a stock trading service now owned by Reuters which trades large block transactions on US securities markets, and a very early networked inventory application for the Shell Oil company.
At its peak, Keydata had hundreds of customers on-line. As minicomputers arrived in the market, Keydata tried to adapt its applications to DEC's VAX 780 and the Hewlett-Packard 3000 series, but this proved impossible due to the complexity of the project and the lack of resources.
References
American companies established in 1959
American companies disestablished in 1981
Companies based in Cambridge, Massachusetts
Companies based in Norfolk County, Massachusetts
Computer companies established in 1959
Computer companies disestablished in 1981
Defunct companies based in Massachusetts
Defunct computer companies of the United States
Time-sharing companies
|
39395330
|
https://en.wikipedia.org/wiki/EBAM
|
EBAM
|
Electronic Bank Account Management (abbreviated as eBAM) represents the automation, through software, of the following activities between banks and their corporate customers:
Opening bank accounts
Maintaining bank accounts such as changing account signatories or spending limits
Closing bank accounts
Generating reports as required by law or regulation
The technology that is commonly used to implement eBAM automation is defined by SWIFT and the ISO 20022 Standard for Financial Services Messaging.
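As a rough sketch of what such message-based automation can look like, the following Python snippet assembles a minimal account-opening request as XML using only the standard library. The namespace, element names, message identifier and BIC are placeholders invented for illustration and only loosely modelled on ISO 20022 account-management messages; real eBAM exchanges use the precise ISO 20022 schemas and SWIFT connectivity.

import xml.etree.ElementTree as ET

def build_account_opening_request(org_name, servicer_bic, currency):
    # Placeholder namespace and element names; not the exact published schema.
    ns = "urn:example:ebam:account-opening"
    ET.register_namespace("", ns)
    root = ET.Element(f"{{{ns}}}Document")
    req = ET.SubElement(root, f"{{{ns}}}AcctOpngReq")
    refs = ET.SubElement(req, f"{{{ns}}}Refs")
    ET.SubElement(refs, f"{{{ns}}}MsgId").text = "EBAM-REQ-0001"  # illustrative identifier
    acct = ET.SubElement(req, f"{{{ns}}}Acct")
    ET.SubElement(acct, f"{{{ns}}}Ccy").text = currency
    ET.SubElement(acct, f"{{{ns}}}Ownr").text = org_name
    ET.SubElement(acct, f"{{{ns}}}Svcr").text = servicer_bic
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    # "EXAMGB2LXXX" is a made-up BIC for the example.
    print(build_account_opening_request("Example Corp", "EXAMGB2LXXX", "EUR").decode())

In practice, such a request would be validated against the relevant ISO 20022 schema and transmitted to the bank over an agreed channel, with the bank returning an acknowledgement or report message.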
Most medium to large companies use an average of six cash management banks; about 30% use 6 or fewer, 40% use 7-20, and 30% use 21 or more. The number of bank accounts managed varies from fewer than 50 to hundreds or even thousands. This involves collecting and managing a vast amount of data describing the details of each bank account and controlling the delegation of authority to authorize financial transactions.
Any organization with more than 100 bank accounts needs a formal framework, policy and management processes for: 1) bank account reduction, and 2) overall management of banks and accounts.
Bank and Bank Account Reduction Policy
Each bank relationship and each bank account costs time and money to manage and administer. The number of banks and accounts should, wherever practical and the counter-party risk is acceptable, be minimised. Many corporate treasury departments have had a bank reduction programme for many years, but since the bank crisis in 2008, some departments have been increasing the number of banks to spread their counter-party financial and operational risk. The need to reduce the number of bank accounts remains. One or two bank accounts per business unit is a useful objective.
Implementing eBAM
There are three types of corporate eBAM solution available:
bank-centric solutions in which the company connects to a single, proprietary bank-owned and bank-hosted system
corporate-centric solutions in which the company develops or acquires a system for bank account management and connection to their banks
outsourced hub solutions in which banks and companies use a common hub to provide full inter-operability and a central repository for all bank account information and management.
eBAM software for banks
Software vendors that offer eBAM software to banks that need to offer eBAM services to their corporate customers
eBAM software for corporations
Software vendors that offer eBAM software to corporations that need to communicate with their banks.
Banks that offer eBAM services
Banks that offer eBAM services to their corporate customers.
References
Banking
|
5634704
|
https://en.wikipedia.org/wiki/Marine%20Air%20Control%20Squadron%201
|
Marine Air Control Squadron 1
|
Marine Air Control Squadron 1 (MACS-1) is a United States Marine Corps aviation command and control squadron. The squadron provides aerial surveillance, air traffic control, ground-controlled intercept, and aviation data-link connectivity for the I Marine Expeditionary Force. It was the first air warning squadron commissioned as part of the Marine Corps' new air warning program and is the second oldest aviation command and control unit in the Marine Corps. The squadron is based at Marine Corps Air Station Yuma and falls under Marine Air Control Group 38 and the 3rd Marine Aircraft Wing.
Subordinate units
Mission
Provide air surveillance, airspace management and the control of aircraft and surface-to-air weapons for anti-air warfare and offensive air support while independently or simultaneously providing continuous all-weather radar and non-radar ATC services as an integral part of the Marine Air Command and Control System (MACCS) in support of a Marine Air-Ground Task Force (MAGTF) and Joint Force Commander.
History
World War II
Formation and movement to Hawaii
Air Warning Squadron 1 was commissioned on September 1, 1943 at Marine Corps Air Station Cherry Point, North Carolina. It was the first early warning squadron organized under the newly established 1st Marine Air Warning Group. The squadron's initial table of organization and equipment had 14 officers and 192 enlisted Marines assigned. On November 15 the squadron boarded trains in North Carolina bound for the West Coast. It arrived on November 22, 1943 at Marine Corps Air Station El Toro, California and began a short period of additional training prior to deployment.
On December 29, AWS-1 personnel boarded the USS White Plains (CVE-66) headed for the Territory of Hawaii. It arrived at Pearl Harbor on January 4, 1944 and was transported to Marine Corps Air Station Ewa. Upon arrival it was reassigned to Marine Aircraft Group 22, 4th Marine Base Defense Aircraft Wing and began training for combat missions in support of the World War II Pacific Campaign. After a short period of time at MCAS Ewa the squadron boarded the USS Mormacport on February 12 and sailed west for its first combat operation.
Eniwetok
On February 20, 1944, AWS-1 landed on Engebi as part of the larger Gilbert and Marshall Islands campaign. The squadron set up its SCR-270 and SCR-527 radars and took control of the airspace over Eniwetok on March 1, 1944. During its time on Engebi the squadron worked closely with the 10th Defense Battalion to ensure the aircraft it controlled were properly deconflicted from the battalion's air defense fires.
The first Japanese air raids against the Marines on Engebi occurred on the evening of March 8, 1944. Twenty Japanese aircraft departed Truk Atoll at 0230 inbound to Engebi from the southwest. Twelve of the aircraft acted as decoys to draw American interceptors away while eight Japanese aircraft, successfully employing chaff to deceive American radars, made three bombing runs over the course of an hour and a half. The first bombing run destroyed AWS-1's VHF radio transmitter, necessitating immediate repair so aircraft control could continue. SSgt Jacob Marty was killed during the second bombing run while attempting to restore VHF communications. He was the first Marine from a Marine air warning squadron to be killed in action. Another seven Marines were injured during these raids.
Okinawa
AWS-1 arrived off of Okinawa on April 19 and landed on Ie Shima on April 21, 1945, and began setting up its radars and air defense control centers. The squadron was operational by the end of the month. During its first 36 days of operations, AWS-1 plotted more than 200 Japanese raids, and aircraft under its control scored a total of 149 enemy aircraft destroyed or damaged. On July 9 the squadron sent a long range radar detachment to Iheya Island to further expand the radar coverage around Okinawa.
Following the war the squadron remained on Ie Shima until February 1946. The squadron's forward echelon departed on February 23, 1946 onboard USS LST-690 arriving back in the States on March 29, 1946. Main body personnel and equipment were loaded onto USS LST-970 for transport back to the United States. With stops in Guam and Pearl Harbor en route, the main body did not arrive back at Marine Corps Air Station Miramar, CA until April 14, 1946. Upon arrival at MCAS Miramar the squadron was administratively assigned to Marine Air Warning Group 2. On August 1, 1946 the squadron was re-designated as Marine Ground Control Intercept Squadron 1 and in July 1947 it moved to Marine Corps Base Camp Pendleton, California. In October 1947 the squadron was reassigned to Marine Air Control Group 2.
Korean War
MGCIS-1 was alerted for duty in Korea on 5 July 1950 and reassigned to Marine Aircraft Group 33 the following day. At the outbreak of the war, MGCIS-1 was severely under strength. Additional Marines were drawn from other squadrons within Marine Air Control Group 2 to fill out the squadron's ranks prior to deployment. The squadron departed Long Beach Harbor on 14 July 1950 on board . They arrived in Kobe, Japan on 1 August 1950 and set up operations at Itami Air Force Base, Honshu, Japan to be co-located with VMF(N)-513.
On 10 September, MGCIS-1 personnel boarded the USS George Clymer (APA-27) and departed Kobe. While en route they established a secondary Tactical Air Control Center on board in case any of the primary control ships were knocked out during the upcoming assault. Following the Inchon landings on 15 September, the squadron came ashore on 17 September and established radars and a control center at Kimpo Air Base. They were partially operational by 20 September. While at Kimpo, MGCIS-1 controlled combat air patrol aircraft in the airspace and cleared cargo aircraft into the field. The squadron secured operations on 10 October and returned to the port at Inchon to prepare for follow-on tasking. Personnel and gear were loaded onto the USS Alshain and the USNS Marine Phoenix (T-AP-195), which departed the harbor on 17 October.
The squadron was administratively transferred to Marine Aircraft Group 12 in October 1950. MGCIS-1 secured operations in Hungnam on 11 December and all personnel boarded an LST on 13 December as part of the Hungnam evacuation. While afloat, squadron controllers assisted their US Navy counterparts in controlling hundreds of aircraft daily during the operation. The squadron sailed for Pusan, Korea and set up its equipment at Pusan West AB (K-1) as it prepared for follow-on tasking. In April 1951 MGCIS-1 was again administratively transferred to the control of MACG-2. MGCIS-1 participated in the defense of the Korean Demilitarized Zone from July 1953 through March 1955. On February 15, 1954 the squadron received its current moniker of Marine Air Control Squadron 1. In April 1955 the unit redeployed to Naval Air Facility Atsugi, Japan, and was reassigned to Marine Aircraft Group 11 (MAG-11).
1960 through 1972
The squadron was reduced to cadre status during March–April 1960. It was relocated during May 1960 to Marine Corps Air Station Yuma, Arizona and reassigned to Marine Wing Headquarters Group, 3rd Marine Aircraft Wing. On February 1, 1972, the squadron was decommissioned.
Reactivation, 1980s & 1990s
Eleven years later, in October 1983, the squadron was reactivated at Marine Corps Base Camp Pendleton, California, as Marine Air Control Squadron 1, Marine Air Control Group 38, 3rd Marine Aircraft Wing. It participated in Operation Desert Shield in Southwest Asia from August until October 1990, though some elements of MACS-1 remained in Saudi Arabia in support of MACS-2.
MACS-1 relocated during June 1998 to Marine Corps Air Station Yuma, Arizona. Elements supported Operation Southern Watch, Iraq, in March–April 2000, November–December 2000, and May–June 2001.
Global War on Terror
Elements of MACS-1 supported Operation Enduring Freedom in Afghanistan from January–May 2002. This was followed by a deployment to Kuwait in February 2003 and participation in Operation Iraqi Freedom from March 2003 to the present, both as an air control agency and subsequently by standing up several security companies.
From 2009 through 2014, MACS-1, in concert with MACS-2, supported sustained TAOC operations at Camp Leatherneck, in Helmand Province, Afghanistan. Utilizing the AN/TPS-59 radar as its primary sensor, these units were responsible for controlling 70,000 square miles of airspace in support of Regional Command Southwest operations. From 2009 through 2014, both MACS-1 and MACS-2 coordinated more than 320,000 fixed-wing operations, 80,000 aerial refueling operations, and more than 7,000 rotary wing operations. The TAOC's mission in Afghanistan ended in November 2014 as the Marine Corps withdrew its presence in Southern Afghanistan and turned over control of the area to United States Air Force's 71st Expeditionary Air Control Squadron.
Notable former members
Lee Harvey Oswald – fatally shot President John F. Kennedy in 1963.
Unit awards
A unit citation or commendation is an award bestowed upon an organization for the action cited. Members of the unit who participated in said actions are allowed to wear on their uniforms the awarded unit citation. Marine Air Control Squadron 1 has been presented with the following awards:
See also
AN/TPS-59
United States Marine Corps Aviation
Organization of the United States Marine Corps
List of United States Marine Corps aviation support units
Citations
References
Bibliography
Web
External links
Radar
1943 establishments in North Carolina
|
14368188
|
https://en.wikipedia.org/wiki/Luv%27%20Hitpack
|
Luv' Hitpack
|
Luv' Hitpack is the seventeenth single by the Dutch girl group Luv', released in 1989 by Mercury/Phonogram Records; it is a megamix conceived by Peter Slaghuis. It appears on the compilation Greatest Hits. The long version of this medley is included as a bonus track on the box set Completely In Luv'.
Song history
Marga Scheide, accompanied by two vocalists, Diana van Berlo and Michelle Gold, reformed Luv' in 1989 and promoted new material released by the Dutch label Dureco/High Fashion Music. Meanwhile, the group's first record company, Philips Records/Phonogram Records, and its sister label (Mercury Records) decided to repackage Luv's old repertoire. That is why a "Greatest Hits" album came out, including successful hit singles, album songs and a bonus track: Luv' Hitpack, a megamix conceived by the Dutch DJ-remixer-producer Peter Slaghuis. Slaghuis is known for the 1985 hit Woodpeckers from Space by Video Kids, the 1988 hit Jack To the Sound of the Underground by Hithouse and the numerous remixes he did for world-famous acts: Nu Shooz, Madonna, Petula Clark, Technotronic, Mel & Kim... The strategy of Mercury Records to release this medley was inspired by the example of Boney M., whose megamix and remixes entered the European charts.
Commercial performance
Luv' Hitpack didn't enter any record chart due to a lack of promotion by Luv'.
Track listings and formats
Luv' Hitpack came out in three formats.
7" Vinyl Single
"Luv' Hitpack" (Single Version) — 4:32
Casanova/Life Is On My Side/U.O.Me/Casanova/You're The Greatest Lover/Life Is On My Side/Trojan Horse/Everybody's Shakin' Hands On Broadway/Casanova
"Luv' Stuff" — 3:18
12" Vinyl Single
"Luv' Hitpack" (Long version) - 5:28
"Luv' Stuff" - 4:50
CD Single
"Luv' Hitpack" (Single Version) — 4:32
Casanova/Life Is On My Side/U.O.Me/Casanova/You're The Greatest Lover/Life Is On My Side/Trojan Horse/Everybody's Shakin' Hands On Broadway/Casanova
"Luv' Hitpack" (Long Version) — 5:29
Casanova/Life Is On My Side/U.O.Me/Casanova/You're The Greatest Lover/Life Is On My Side/Trojan Horse/Everybody's Shakin' Hands On Broadway/Casanova
"Luv' Stuff" — 3:18
References
1989 singles
Luv' songs
Songs written by Hans van Hemert
Songs written by Piet Souer
1989 songs
Phonogram Records singles
Mercury Records singles
|
218445
|
https://en.wikipedia.org/wiki/Lean%20manufacturing
|
Lean manufacturing
|
Lean manufacturing (also known as lean production, just-in-time manufacturing and just-in-time production, or JIT) is a production method aimed primarily at reducing times within the production system as well as response times from suppliers and to customers.
It is derived from Toyota's 1930 operating model "The Toyota Way" (Toyota Production System, TPS). The term "Lean" was coined in 1988 by John Krafcik, and defined in 1996 by James Womack and Daniel Jones to consist of five key principles: "Precisely specify value by specific product, identify the value stream for each product, make value flow without interruptions, let customer pull value from the producer, and pursue perfection."
Companies employ the strategy to increase efficiency. By receiving goods only as they need them for the production process, it reduces inventory costs and wastage, and increases productivity and profit. The downside is that it requires producers to forecast demand accurately as the benefits can be nullified by minor delays in the supply chain. It may also impact negatively on workers due to added stress and inflexible conditions. A successful operation depends on a company having regular outputs, high-quality processes, and reliable suppliers.
History
Frederick Taylor and Henry Ford documented their observations relating to these topics, and Shigeo Shingo and Taiichi Ohno applied their enhanced thoughts on the subject at Toyota in the 1930s. The resulting methods were researched from the mid-20th century and dubbed "Lean" by John Krafcik in 1988, and then were defined in The Machine that Changed the World and further detailed by James Womack and Daniel Jones in Lean Thinking (1996).
Evolution in Japan
The exact reasons for adoption of JIT in Japan are unclear, but it has been suggested it started with a requirement to solve the lack of standardization. Plenert offers four reasons, paraphrased here. During Japan's post–World War II rebuilding of industry:
1. Japan's lack of cash made it difficult for industry to finance the big-batch, large inventory production methods common elsewhere.
2. Japan lacked space to build big factories loaded with inventory.
3. The Japanese islands lack natural resources with which to build products.
4. Japan had high unemployment, which meant that labor efficiency methods were not an obvious pathway to industrial success.
Thus, the Japanese "leaned out" their processes. "They built smaller factories ... in which the only materials housed in the factory were those on which work was currently being done. In this way, inventory levels were kept low, investment in in-process inventories was at a minimum, and the investment in purchased natural resources was quickly turned around so that additional materials were purchased." Plenert goes on to explain Toyota's key role in developing this lean or JIT production methodology.
American industrialists recognized the threat of cheap offshore labor to American workers during the 1910s, and explicitly stated the goal of what is now called lean manufacturing as a countermeasure. Henry Towne, past President of the American Society of Mechanical Engineers, wrote in the foreword to Frederick Winslow Taylor's Shop Management (1911), "We are justly proud of the high wage rates which prevail throughout our country, and jealous of any interference with them by the products of the cheaper labor of other countries. To maintain this condition, to strengthen our control of home markets, and, above all, to broaden our opportunities in foreign markets where we must compete with the products of other industrial nations, we should welcome and encourage every influence tending to increase the efficiency of our productive processes."
Continuous production improvement and incentives for such were documented in Taylor's Principles of Scientific Management (1911):
"... whenever a workman proposes an improvement, it should be the policy of the management to make a careful analysis of the new method, and if necessary conduct a series of experiments to determine accurately the relative merit of the new suggestion and of the old standard. And whenever the new method is found to be markedly superior to the old, it should be adopted as the standard for the whole establishment."
"...after a workman has had the price per piece of the work he is doing lowered two or three times as a result of his having worked harder and increased his output, he is likely entirely to lose sight of his employer's side of the case and become imbued with a grim determination to have no more cuts if soldiering [marking time, just doing what he is told] can prevent it."
Shigeo Shingo cites reading Principles of Scientific Management in 1931 and being "greatly impressed to make the study and practice of scientific management his life's work".
Shingo and Taiichi Ohno were key to the design of Toyota's manufacturing process. Previously a textile company, Toyota moved into building automobiles in 1934. Kiichiro Toyoda, founder of Toyota Motor Corporation, directed the engine casting work and discovered many problems in their manufacturing, with wasted resources on repair of poor-quality castings. Toyota engaged in intense study of each stage of the process. In 1936, when Toyota won its first truck contract with the Japanese government, the processes encountered new problems, to which Toyota responded by developing "Kaizen" improvement teams, into what has become the Toyota Production System (TPS), and subsequently The Toyota Way.
Levels of demand in the postwar economy of Japan were low; as a result, the focus of mass production on lowest cost per item via economies of scale had little application. Having visited and seen supermarkets in the United States, Ohno recognised that scheduling of work should not be driven by sales or production targets but by actual sales. Given the financial situation during this period, over-production had to be avoided, and thus the notion of "pull" (or "build-to-order" rather than target-driven "push") came to underpin production scheduling.
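This push-versus-pull distinction can be shown with a toy simulation. The sketch below is only an illustration under invented demand figures, not a description of Toyota's actual scheduling rules: a push policy produces to a fixed daily plan regardless of demand, while a pull policy replenishes only what actual sales have withdrawn.

```python
# Toy illustration of push vs. pull scheduling (hypothetical numbers).
# Push: produce to a fixed daily target; pull: replenish only what was sold.

daily_sales = [8, 12, 5, 0, 9, 14, 7]   # assumed demand, units per day
push_target = 10                         # fixed production plan per day

def simulate(policy):
    inventory, history = 0, []
    for sold in daily_sales:
        if policy == "push":
            produced = push_target       # target-driven production
        else:  # "pull"
            produced = sold              # replenish actual consumption only
        inventory += produced - sold
        history.append(inventory)
    return history

print("push inventory by day:", simulate("push"))  # stock drifts upward when demand dips
print("pull inventory by day:", simulate("pull"))  # stock stays near zero
```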
Evolution in the rest of the world
Just-in-time manufacturing was introduced in Australia in the 1950s by the British Motor Corporation (Australia) at its Victoria Park plant in Sydney, from where the idea later migrated to Toyota. News about JIT/TPS reached other western countries from Japan in 1977 in two English-language articles: one referred to the methodology as the "Ohno system", after Taiichi Ohno, who was instrumental in its development within Toyota. The other article, by Toyota authors in an international journal, provided additional details. Finally, these articles and other publicity were translated into implementations, beginning in 1980 and then quickly multiplying throughout industry in the United States and other developed countries. A seminal 1980 event was a conference in Detroit at Ford World Headquarters co-sponsored by the Repetitive Manufacturing Group (RMG), which had been founded in 1979 within the American Production and Inventory Control Society (APICS) to seek advances in manufacturing. The principal speaker, Fujio Cho (later, president of Toyota Motor Corp.), in explaining the Toyota system, stirred up the audience, and led to the RMG's shifting gears from things like automation to JIT/TPS.
At least some of the audience's stirring had to do with a perceived clash between the new JIT regime and manufacturing resource planning (MRP II), a computer software-based system of manufacturing planning and control which had become prominent in industry in the 1960s and 1970s. Debates in professional meetings on JIT vs. MRP II were followed by published articles, one of them titled "The Rise and Fall of Just-in-Time". Less confrontational was Walt Goddard's "Kanban Versus MRP II—Which Is Best for You?" in 1982. Four years later, Goddard had answered his own question with a book advocating JIT. Among the best known of MRP II's advocates was George Plossl, who authored two articles questioning JIT's kanban planning method and the "japanning of America". But, as with Goddard, Plossl later wrote that "JIT is a concept whose time has come".
JIT/TPS implementations may be found in many case-study articles from the 1980s and beyond. An article in a 1984 issue of Inc. magazine relates how Omark Industries (chain saws, ammunition, log loaders, etc.) emerged as an extensive JIT implementer under its US home-grown name ZIPS (zero inventory production system). At Omark's mother plant in Portland, Oregon, after the work force had received 40 hours of ZIPS training, they were "turned loose" and things began to happen. A first step was to "arbitrarily eliminate a week's lead time [after which] things ran smoother. 'People asked that we try taking another week's worth out.' After that, ZIPS spread throughout the plant's operations 'like an amoeba.'" The article also notes that Omark's 20 other plants were similarly engaged in ZIPS, beginning with pilot projects. For example, at one of Omark's smaller plants making drill bits in Mesabi, Minnesota, "large-size drill inventory was cut by 92%, productivity increased by 30%, scrap and rework ... dropped 20%, and lead time ... from order to finished product was slashed from three weeks to three days." The Inc. article states that companies using JIT the most extensively include "the Big Four, Hewlett-Packard, Motorola, Westinghouse Electric, General Electric, Deere & Company, and Black and Decker".
By 1986, a case-study book on JIT in the U.S. was able to devote a full chapter to ZIPS at Omark, along with two chapters on JIT at several Hewlett-Packard plants, and single chapters for Harley-Davidson, John Deere, IBM-Raleigh, North Carolina, California-based Apple Inc., a Toyota truck-bed plant, and the New United Motor Manufacturing joint venture between Toyota and General Motors.
Two similar, contemporaneous books from the U.K. are more international in scope. One of the books, with both conceptual articles and case studies, includes three sections on JIT practices: in Japan (e.g., at Toyota, Mazda, and Tokagawa Electric); in Europe (jmg Bostrom, Lucas Electric, Cummins Engine, IBM, 3M, Datasolve Ltd., Renault, Massey-Ferguson); and in the US and Australia (Repco Manufacturing-Australia, Xerox Computer, and two on Hewlett-Packard). The second book, reporting on what was billed as the First International Conference on just-in-time manufacturing, includes case studies in three companies: Repco-Australia, IBM-UK, and 3M-UK. In addition, a day two keynote address discussed JIT as applied "across all disciplines, ... from accounting and systems to design and production".
Rebranding as "lean"
John Krafcik coined the term "Lean" in his 1988 article, "Triumph of the Lean Production System". The article states: (a) Lean manufacturing plants have higher levels of productivity/quality than non-Lean and (b) "The level of plant technology seems to have little effect on operating performance" (page 51). According to the article, risks with implementing Lean can be reduced by: "developing a well-trained, flexible workforce, product designs that are easy to build with high quality, and a supportive, high-performance supplier network" (page 51).
Middle era and to the present
Three more books which include JIT implementations were published in 1993, 1995, and 1996, the start-up years of the lean manufacturing/lean management movement that was launched in 1990 with publication of the book The Machine That Changed the World. That one, along with other books, articles, and case studies on lean, was supplanting JIT terminology in the 1990s and beyond. The same period saw the rise of books and articles with similar concepts and methodologies but with alternative names, including cycle time management, time-based competition, quick-response manufacturing, flow, and pull-based production systems.
There is more to JIT than its usual manufacturing-centered explication. Inasmuch as manufacturing ends with order-fulfillment to distributors, retailers, and end users, and also includes remanufacturing, repair, and warranty claims, JIT's concepts and methods have application downstream from manufacturing itself. A 1993 book on "world-class distribution logistics" discusses kanban links from factories onward. And a manufacturer-to-retailer model developed in the U.S. in the 1980s, referred to as quick response, has morphed over time to what is called fast fashion.
Methodology
The strategic elements of lean can be quite complex and comprise multiple components. Four different notions of lean have been identified:
Lean as a fixed state or goal (being lean)
Lean as a continuous change process (becoming lean)
Lean as a set of tools or methods (doing lean/toolbox lean)
Lean as a philosophy (lean thinking)
Another way to avoid market risk and control supply efficiently is to cut down stock. P&G accomplished its goal of cooperating with Walmart and other wholesale companies by building a stock-response system that reports directly to the supplier companies.
In 1999, Spear and Bowen identified four rules which characterize the "Toyota DNA":
All work shall be highly specified as to content, sequence, timing, and outcome.
Every customer-supplier connection must be direct, and there must be an unambiguous yes or no way to send requests and receive responses.
The pathway for every product and service must be simple and direct.
Any improvement must be made in accordance with the scientific method, under the guidance of a teacher, at the lowest possible level in the organization.
This is a fundamentally different approach from most improvement methodologies, and requires more persistence than basic application of the tools, which may partially account for its lack of popularity. The implementation of "smooth flow" exposes quality problems that already existed, and waste reduction then happens as a natural consequence, a system-wide perspective rather than one focusing directly upon the wasteful practices themselves.
Sepheri provides a list of methodologies of JIT manufacturing that "are important but not exhaustive":
Housekeeping: physical organization and discipline.
Make it right the first time: elimination of defects.
Setup reduction: flexible changeover approaches.
Lot sizes of one: the ultimate lot size and flexibility.
Uniform plant load: leveling as a control mechanism.
Balanced flow: organizing flow scheduling throughput.
Skill diversification: multi-functional workers.
Control by visibility: communication media for activity.
Preventive maintenance: flawless running, no defects.
Fitness for use: producibility, design for process.
Compact plant layout: product-oriented design.
Streamlining movements: smoothing materials handling.
Supplier networks: extensions of the factory.
Worker involvement: small group improvement activities.
Cellular manufacturing: production methods for flow.
Pull system: signal [kanban] replenishment/resupply systems.
Key principles and waste
Womack and Jones define Lean as "...a way to do more and more with less and less—less human effort, less equipment, less time, and less space—while coming closer and closer to providing customers exactly what they want" and then translate this into five key principles:
Value: Specify the value desired by the customer. "Form a team for each product to stick with that product during its entire production cycle", "Enter into a dialogue with the customer" (e.g. Voice of the customer)
The Value Stream: Identify the value stream for each product providing that value and challenge all of the wasted steps (generally nine out of ten) currently necessary to provide it
Flow: Make the product flow continuously through the remaining value-added steps
Pull: Introduce pull between all steps where continuous flow is possible
Perfection: Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls
Lean is founded on the concept of continuous and incremental improvements on product and process while eliminating redundant activities. "Value-adding activities are simply only those things the customer is willing to pay for; everything else is waste, and should be eliminated, simplified, reduced, or integrated".
On principle 2, waste, see seven basic waste types under The Toyota Way. Additional waste types are:
Faulty goods (manufacturing of goods or services that do not meet customer demand or specifications, Womack et al., 2003. See Lean services)
Waste of skills (Six Sigma)
Under-utilizing capabilities (Six Sigma)
Delegating tasks with inadequate training (Six Sigma)
Metrics (working to the wrong metrics or no metrics) (Mika Geoffrey, 1999)
Participation (not utilizing workers by not allowing them to contribute ideas and suggestions and be part of Participative Management) (Mika Geoffrey, 1999)
Computers (improper use of computers: not having the proper software, training on use and time spent surfing, playing games or just wasting time) (Mika Geoffrey, 1999)
Implementation
One paper suggests that an organization implementing Lean needs its own Lean plan as developed by the "Lean Leadership". This should enable Lean teams to provide suggestions for their managers, who then make the actual decisions about what to implement. Coaching is recommended when an organization starts off with Lean to impart knowledge and skills to shop-floor staff. Improvement metrics are required for informed decision-making.
Lean philosophy and culture are as important as tools and methodologies. Management should not decide on solutions without understanding the true problem, which requires consulting shop-floor personnel.
The solution to a specific problem for a specific company may not have generalised application. The solution must fit the problem.
Value-stream mapping (VSM) and 5S are the most common approaches companies take on their first steps to Lean. Lean can be focused on specific processes, or cover the entire supply chain. Front-line workers should be involved in VSM activities. Implementing a series of small improvements incrementally along the supply chain can bring forth enhanced productivity.
Naming
Alternative terms for JIT manufacturing have been used. Motorola's choice was short-cycle manufacturing (SCM). IBM's was continuous-flow manufacturing (CFM), and demand-flow manufacturing (DFM), a term handed down from consultant John Constanza at his Institute of Technology in Colorado. Still another alternative was mentioned by Goddard, who said that "Toyota Production System is often mistakenly referred to as the 'Kanban System'", and pointed out that kanban is but one element of TPS, as well as JIT production.
The wide use of the term JIT manufacturing throughout the 1980s faded fast in the 1990s, as the new term lean manufacturing became established as "a more recent name for JIT". As just one testament to the commonality of the two terms, Toyota production system (TPS) has been and is widely used as a synonym for both JIT and lean manufacturing.
Objectives and benefits
Objectives and benefits of JIT manufacturing may be stated in two primary ways: first, in specific and quantitative terms, via published case studies; second, general listings and discussion.
A case-study summary from Daman Products in 1999 lists the following benefits: cycle times reduced by 97%, setup times by 50%, lead times from 4–8 weeks to 5–10 days, and flow distance by 90%. This was achieved via four focused (cellular) factories, pull scheduling, kanban, visual management, and employee empowerment.
Another study from NCR (Dundee, Scotland) in 1998, a producer of make-to-order automated teller machines, includes some of the same benefits while also focusing on JIT purchasing: in switching to JIT over a weekend in 1998, NCR eliminated buffer inventories, reduced inventory from 47 days to 5 days and flow time from 15 days to 2 days, had 60% of purchased parts arriving JIT and 77% going dock to line, and cut its suppliers from 480 to 165.
Hewlett-Packard, one of western industry's earliest JIT implementers, provides a set of four case studies from four H-P divisions during the mid-1980s. The four divisions, Greeley, Fort Collins, Computer Systems, and Vancouver, employed some but not all of the same measures. At the time about half of H-P's 52 divisions had adopted JIT.
Use in other sectors
Lean principles have been successfully applied to various sectors and services, such as call centers and healthcare. In the former, lean's waste reduction practices have been used to reduce handle time, within- and between-agent variation and accent barriers, as well as to attain near perfect process adherence. In the latter, several hospitals have adopted the idea of the lean hospital, a concept that prioritizes the patient, thus increasing employee commitment and motivation, as well as boosting medical quality and cost effectiveness.
Lean principles also have applications to software development and maintenance as well as other sectors of information technology (IT). More generally, the use of lean in information technology has become known as Lean IT. Lean methods are also applicable to the public sector, but most results have been achieved using a much more restricted range of techniques than lean provides.
The challenge in moving lean to services is the lack of widely available reference implementations that would allow people to see how directly applying lean manufacturing tools and practices can work and the impact it has. This makes it more difficult to build the level of belief seen as necessary for strong implementation. However, some research does relate widely recognized examples of success in retail and even airlines to the underlying principles of lean. Despite this, it remains the case that the direct manufacturing examples of 'techniques' or 'tools' need to be better 'translated' into a service context to support the more prominent approaches of implementation, and this translation has not yet received the level of work or publicity that would give starting points for implementors. The upshot is that each implementation often 'feels its way' along, much as the early industrial engineering practices of Toyota did. This places huge importance upon sponsorship to encourage and protect these experimental developments.
Lean management is nowadays also implemented in non-manufacturing and administrative processes, where there is still huge potential for optimization and efficiency gains.
Criticism
According to Williams, it becomes necessary to find suppliers that are close by or can supply materials quickly with limited advance notice. When ordering small quantities of materials, suppliers' minimum order policies may pose a problem, though.
Employees are at risk of precarious work when employed by factories that utilize just-in-time and flexible production techniques. A longitudinal study of US workers since 1970 indicates employers seeking to easily adjust their workforce in response to supply and demand conditions respond by creating more nonstandard work arrangements, such as contracting and temporary work.
Natural and man-made disasters will disrupt the flow of energy, goods and services. The down-stream customers of those goods and services will, in turn, not be able to produce their product or render their service because they were counting on incoming deliveries "just in time" and so have little or no inventory to work with. The disruption to the economic system will cascade to some degree depending on the nature and severity of the original disaster. The larger the disaster, the worse the effect on just-in-time failures. Electrical power is the ultimate example of just-in-time delivery. A severe geomagnetic storm could disrupt electrical power delivery for hours to years, locally or even globally. Lack of supplies on hand to repair the electrical system would have catastrophic effects.
The COVID-19 pandemic caused disruption in JIT practices: various quarantine restrictions on international trade and commercial activity interrupted supply while stockpiles to handle the disruption were lacking, and demand spiked for medical supplies such as personal protective equipment (PPE) and ventilators. Panic buying, including of domestically manufactured (and so less vulnerable) products such as toilet paper, also disturbed regular demand. This has led to suggestions that more emphasis should be placed on stockpiles and diversification of suppliers.
Critics of Lean argue that this management method has significant drawbacks, especially for the employees of companies operating under Lean. A common criticism of Lean is that it fails to take into consideration employees' safety and well-being. Lean manufacturing is associated with an increased level of stress among employees, who have a small margin of error in a work environment that requires perfection. Lean also over-focuses on cutting waste, which may lead management to cut sectors of the company that are not essential to its short-term productivity but are nevertheless important to its legacy. Lean also over-focuses on the present, which hinders a company's plans for the future.
Critics also make negative comparison of Lean and 19th century scientific management, which had been fought by the labor movement and was considered obsolete by the 1930s. Finally, lean is criticized for lacking a standard methodology: "Lean is more a culture than a method, and there is no standard lean production model."
After years of success of Toyota's Lean Production, the consolidation of supply chain networks and rapid expansion brought Toyota to the position of the world's biggest carmaker. In 2010, the crisis of safety-related problems at Toyota made other carmakers that had duplicated Toyota's supply chain system wary that the same recall issue might happen to them.
James Womack had warned Toyota that cooperating with single outsourced suppliers might bring unexpected problems.
Lean manufacturing is different from lean enterprise. Recent research reports the existence of several lean manufacturing processes but of few lean enterprises. One distinguishing feature is the contrast between lean accounting and standard cost accounting. For standard cost accounting, SKUs are difficult to grasp: SKUs involve too many hypotheses and too much variance, i.e., they hold too much indeterminacy. Manufacturers may want to consider moving away from traditional accounting and adopting lean accounting. One expected gain of lean accounting is activity-based cost visibility, i.e., measuring the direct and indirect costs at each step of an activity, rather than traditional cost accounting that limits itself to labor and supplies.
See also
A3 problem solving
Cellular manufacturing
CONWIP
Efficiency Movement
Just In Case
Production flow analysis
Takt time
Notes
References
Ker, J.I., Wang, Y., Hajli, M.N., Song, J., Ker, C.W. (2014) Deploying Lean in Healthcare: Evaluating Information Technology Effectiveness in US Hospital Pharmacies
MacInnes, Richard L. (2002) The Lean Enterprise Memory Jogger.
Mika, Geoffrey L. (1999) Kaizen Event Implementation Manual
Page, Julian (2003) Implementing Lean Manufacturing Techniques.
Anderson, Barry (ed.) 2012. Building Cars in Australia: Morris, Austin, BMC and Leyland 1950-1976. Sydney: Halstead Press.
Billesbach, Thomas J. 1987. Applicability of Just-in-Time Techniques in the Administrative Area. Doctoral dissertation, University of Nebraska. Ann Arbor, Mich., University Microfilms International.
Goddard, W.E. 2001. JIT/TQC—identifying and solving problems. Proceedings of the 20th Electrical Electronics Insulation Conference, Boston, October 7–10, 88–91.
Goldratt, Eliyahu M. and Fox, Robert E. (1986), The Race, North River Press.
Hall, Robert W. 1983. Zero Inventories. Homewood, Ill.: Dow Jones-Irwin.
Hall, Robert W. 1987. Attaining Manufacturing Excellence: Just-in-Time, Total Quality, Total People Involvement. Homewood, Ill.: Dow Jones-Irwin.
Hay, Edward J. 1988. The Just-in-Time Breakthrough: Implementing the New Manufacturing Basics. New York: Wiley.
Lubben, R.T. 1988. Just-in-Time Manufacturing: An Aggressive Manufacturing Strategy. New York: McGraw-Hill.
Monden, Yasuhiro. 1982. Toyota Production System. Norcross, Ga: Institute of Industrial Engineers.
Ohno, Taiichi (1988), Toyota Production System: Beyond Large-Scale Production, Productivity Press.
Ohno, Taiichi (1988), Just-In-Time for Today and Tomorrow, Productivity Press.
Schonberger, Richard J. 1982. Japanese Manufacturing Techniques: Nine Hidden Lessons in Simplicity. New York: Free Press.
Suri, R. 1986. Getting from 'just in case' to 'just in time': insights from a simple model. 6 (3) 295–304.
Suzaki, Kyoshi. 1993. The New Shop Floor Management: Empowering People for Continuous Improvement. New York: Free Press.
Voss, Chris, and David Clutterbuck. 1989. Just-in-Time: A Global Status Report. UK: IFS Publications.
Wadell, William, and Bodek, Norman (2005), The Rebirth of American Industry, PCS Press.
External links
Lean Enterprise Institute
Manufacturing
Freight transport
Inventory
Working capital management
Inventory optimization
|
321157
|
https://en.wikipedia.org/wiki/Model%20checking
|
Model checking
|
In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash).
In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure.
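A minimal sketch of this simplest case, in Python (the formula encoding and names are invented for the example): a truth assignment plays the role of the structure, and the check recurses over the formula.

```python
# Minimal propositional model checking: does assignment A satisfy formula F?
# Formulas are nested tuples: ("var", name), ("not", f), ("and", f, g), ("or", f, g).

def satisfies(assignment, formula):
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not satisfies(assignment, formula[1])
    if op == "and":
        return satisfies(assignment, formula[1]) and satisfies(assignment, formula[2])
    if op == "or":
        return satisfies(assignment, formula[1]) or satisfies(assignment, formula[2])
    raise ValueError(f"unknown operator {op!r}")

# (p or q) and not r, evaluated in the structure {p: True, q: False, r: False}
f = ("and", ("or", ("var", "p"), ("var", "q")), ("not", ("var", "r")))
print(satisfies({"p": True, "q": False, "r": False}, f))  # True
```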
Overview
Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification. There is no need to verify the newly introduced properties against the original specification since this is not possible. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.
An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing Award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke and E. A. Emerson and of J. P. Queille and J. Sifakis. Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking.
Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory) the approach cannot be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a delivered specification, e.g., by means of UML activity diagrams or control-interpreted Petri nets.
The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution.
Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula φ, and a structure M with initial state s, decide if M, s ⊨ φ. If M is finite, as it is in hardware, model checking reduces to a graph search.
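For instance, checking a simple safety property such as "no reachable state satisfies crash" on a finite structure amounts to a breadth-first search of the reachable states. A minimal sketch over an invented three-state machine (not any particular tool's algorithm):

```python
# Explicit-state model checking of an invariant by graph search (BFS).
from collections import deque

transitions = {            # hypothetical finite state machine
    "idle":  ["busy"],
    "busy":  ["idle", "error"],
    "error": ["idle"],
}
labels = {"idle": set(), "busy": set(), "error": {"crash"}}

def check_invariant(initial, bad_proposition):
    """Return a counterexample path to a bad state, or None if the invariant holds."""
    queue, parent = deque([initial]), {initial: None}
    while queue:
        state = queue.popleft()
        if bad_proposition in labels[state]:
            path = []                      # reconstruct counterexample
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for succ in transitions[state]:
            if succ not in parent:
                parent[succ] = state
                queue.append(succ)
    return None

print(check_invariant("idle", "crash"))  # ['idle', 'busy', 'error']
```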
Symbolic model checking
Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state space traversal is based on representations of a set of states and transition relations as logical formulas, binary decision diagrams (BDD) or other related data structures, the model-checking method is symbolic.
Historically, the first symbolic methods used BDDs. After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for linear temporal logic (LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking. The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking.
Example
One example of such a system requirement:
Between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula:
Here, □ should be read as "always", ◇ as "eventually", U as "until", and the other symbols are standard logical symbols: ∨ for "or", ∧ for "and" and ¬ for "not".
Techniques
Model-checking tools face a combinatorial blow up of the state-space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem.
Symbolic algorithms avoid ever explicitly constructing the graph for the finite state machines (FSM); instead, they represent the graph implicitly using a formula in quantified propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan and the development of open-source BDD manipulation libraries such as CUDD and BuDDy.
Bounded model checking algorithms unroll the FSM for a fixed number of steps, k, and check whether a property violation can occur in k or fewer steps. This typically involves encoding the restricted model as an instance of SAT. The process can be repeated with larger and larger values of k until all possible violations have been ruled out (cf. Iterative deepening depth-first search).
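The sketch below illustrates the bounded-unrolling idea by brute-force enumeration of input sequences rather than by a SAT encoding; the two-bit counter and the property it checks are invented for the example.

```python
# Bounded checking by brute force: explore all paths of length <= k from the
# initial state and report a violation if one is found. Real bounded model
# checkers encode the same question as a SAT instance instead of enumerating.
from itertools import product

def step(state, inp):
    """Hypothetical 2-bit counter that wraps; 'reset' forces it back to 0."""
    return 0 if inp == "reset" else (state + 1) % 4

def violates(state):
    return state == 3            # property under test: counter never reaches 3

def bounded_check(initial, k):
    for depth in range(1, k + 1):                       # iterative deepening
        for inputs in product(["tick", "reset"], repeat=depth):
            state, trace = initial, [initial]
            for inp in inputs:
                state = step(state, inp)
                trace.append(state)
            if violates(state):
                return inputs, trace                    # counterexample found
    return None

print(bounded_check(0, 4))  # (('tick', 'tick', 'tick'), [0, 1, 2, 3])
```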
Abstraction attempts to prove properties of a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one so that a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, sometimes the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is to ignore the values of non-boolean variables and to only consider boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may, in fact, be sufficient to prove e.g. properties of mutual exclusion.
Counterexample guided abstraction refinement (CEGAR) begins checking with a coarse (i.e. imprecise) abstraction and iteratively refines it. When a violation (i.e. counterexample) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). If the violation is feasible, it is reported to the user. If it is not, the proof of infeasibility is used to refine the abstraction and checking begins again.
Model-checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems.
First-order logic
Model checking is also studied in the field of computational complexity theory. Specifically, a first-order logical formula is fixed without free variables and the following decision problem is considered:
Given a finite interpretation, for instance, one described as a relational database, decide whether the interpretation is a model of the formula.
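As a small illustration (the domain and relation are invented), a finite interpretation can be written down directly as sets, and a fixed sentence evaluated by exhaustive quantification over the domain:

```python
# Evaluating a fixed first-order sentence over a finite interpretation.
# The domain and the binary relation E play the role of a relational database table.

domain = {1, 2, 3}
E = {(1, 2), (2, 3), (3, 1)}        # hypothetical edge relation

# Sentence: for all x there exists y such that E(x, y) ("every element has a successor")
def holds():
    return all(any((x, y) in E for y in domain) for x in domain)

print(holds())  # True: 1->2, 2->3, 3->1
```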
This problem is in the circuit class AC0. It is tractable when imposing some restrictions on the input structure: for instance, requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures. These results have been extended to the task of enumerating all solutions to a first-order formula with free variables.
Tools
Here is a list of significant model-checking tools:
Alloy (Alloy Analyzer)
BLAST (Berkeley Lazy Abstraction Software Verification Tool)
CADP (Construction and Analysis of Distributed Processes) a toolbox for the design of communication protocols and distributed systems
CPAchecker: an open-source software model checker for C programs, based on the CPA framework
ECLAIR: a platform for the automatic analysis, verification, testing, and transformation of C and C++ programs
FDR2: a model checker for verifying real-time systems modelled and specified as CSP Processes
ISP code level verifier for MPI programs
Java Pathfinder: an open-source model checker for Java programs
Libdmc: a framework for distributed model checking
mCRL2 Toolset, Boost Software License, Based on ACP
NuSMV: a new symbolic model checker
PAT: an enhanced simulator, model checker and refinement checker for concurrent and real-time systems
Prism: a probabilistic symbolic model checker
Roméo: an integrated tool environment for modelling, simulation, and verification of real-time systems modelled as parametric, time, and stopwatch Petri nets
SPIN: a general tool for verifying the correctness of distributed software models in a rigorous and mostly automated fashion
TAPAs: a tool for the analysis of process algebra
TAPAAL: an integrated tool environment for modelling, validation, and verification of Timed-Arc Petri Nets
TLA+ model checker by Leslie Lamport
UPPAAL: an integrated tool environment for modelling, validation, and verification of real-time systems modelled as networks of timed automata
Zing – experimental tool from Microsoft to validate state models of software at various levels: high-level protocol descriptions, work-flow specifications, web services, device drivers, and protocols in the core of the operating system. Zing is currently being used for developing drivers for Windows.
See also
References
Further reading
Model Checking, Doron A. Peled, Patrizio Pelliccione, Paola Spoletini, Wiley Encyclopedia of Computer Science and Engineering, 2009.
Model Checking, Edmund M. Clarke, Orna Grumberg and Doron A. Peled, MIT Press, 1999.
Systems and Software Verification: Model-Checking Techniques and Tools, B. Berard, M. Bidoit, A. Finkel, F. Laroussinie, A. Petit, L. Petrucci, P. Schnoebelen.
Logic in Computer Science: Modelling and Reasoning About Systems, Michael Huth and Mark Ryan, Cambridge University Press, 2004.
The Spin Model Checker: Primer and Reference Manual, Gerard J. Holzmann, Addison-Wesley.
Julian Bradfield and Colin Stirling, Modal logics and mu-calculi, Inf.ed.ac.uk
Specification Patterns KSU.edu
Property Pattern Mappings for RAFMC Inria.fr
Radu Mateescu and Mihaela Sighireanu Efficient On-the-Fly Model-Checking for Regular Alternation-Free Mu-Calculus, page 6, Science of Computer Programming 46(3):255–281, 2003
Müller-Olm, M., Schmidt, D.A. and Steffen, B. Model checking: a tutorial introduction. Proc. 6th Static Analysis Symposium, G. File and A. Cortesi, eds., Springer LNCS 1694, 1999, pp. 330–354.
Baier, C., Katoen, J.: Principles of Model Checking. 2008.
E.M. Clarke: "The birth of model checking"
(this is also a very good introduction and overview of model checking)
|
23423534
|
https://en.wikipedia.org/wiki/Australian%20Artificial%20Intelligence%20Institute
|
Australian Artificial Intelligence Institute
|
In Australia, the Australian Artificial Intelligence Institute (Australian AI Institute, AAII, or A2I2) was a government-funded research and development laboratory for investigating and commercializing Artificial Intelligence, specifically Intelligent Software Agents.
History
The AAII was started in 1988 as an initiative by the Hawke government and closed in 1999. It was backed by support from the Computer Power Group, SRI International and the Victorian State Government. The director of the group was Michael Georgeff, who came from SRI, contributing his experience with PRS and his vision in the domain of intelligent agents. It was located in the Melbourne suburb of Carlton before moving to more spacious premises in the city centre of Melbourne, Victoria. At its peak it had more than 40 staff and took up two floors of an office building on the corner of Latrobe and Russell Streets.
In the late 1990s, the AAII spun out Agentis International (Agentis Business Solutions) to address the commercialization of the developed technology. Another company, Agent Oriented Software (AOS) was formed by a number of ex-AAII staff to pursue agent technology developing JACK Intelligent Agents. After the AAII shutdown, those staff that remained and the intellectual property were transferred to Agentis International.
Projects
This section summarizes a selection of the software and commercial projects that came out of the AAII:
Procedural Reasoning System (PRS) ongoing development and application of PRS in collaboration with SRI International
Distributed Multi-Agent Reasoning System (dMARS) an agent-oriented development and implementation environment for building complex, distributed, time-critical systems. Developed as a C++ extension to PRS.
Smart Whole AiR Mission Model (SWARMM) an agent-oriented simulation system developed by AAII in conjunction with and for the Air Operations Division (AOD) of the DSTO.
Optimal Aircraft Sequencing using Intelligent Scheduling (OASIS) an air traffic management system written in the PRS that accurately estimated aircraft arrival time, determined an optimal sequence for landings and alerted operators as to the actions required to achieve the sequence. It was designed to reduce air traffic congestion and maximize the use of runways. A prototype called HORIZON was developed for Sydney Airport using dMARS.
Single Point of Contact (SPOC) was a system developed for Optus to assist customer service representatives in meeting the objective of handling 98% of customer enquiries with a single point of contact with the company. The system was built using dMARS and involved a multilayer architecture.
Technical Notes
Over the course of its existence, the AAII released more than 75 public technical notes. This section lists an available selection of these notes.
See also
Distributed Multi-Agent Reasoning System
Belief-Desire-Intention software model
Procedural Reasoning System
References
Further reading
Michael Peter Georgeff, Anand S. Rao, "A profile of the Australian Artificial Intelligence Institute," IEEE Intelligent Systems, vol. 11, no. 6, pp. 89–92, Dec. 1996
M. Georgeff, A. Rao, "Rational software agents: from theory to practice", in "Agent technology: foundations, applications, and markets", pages 139-160, Springer-Verlag New York, Inc., Secaucus, NJ, 1998
Making space for big ideas, The Age, 18 November 2004
External links
AAII Website on the Internet Archive
Agent Oriented Software Pty. Ltd.
OASIS (Optimal Aircraft Sequencing using Intelligent Scheduling)
Research institutes in Australia
1988 establishments in Australia
Companies established in 1988
Artificial intelligence
|
858524
|
https://en.wikipedia.org/wiki/IBM%20System/4%20Pi
|
IBM System/4 Pi
|
The IBM System/4 Pi is a family of avionics computers used, in various versions, on the F-15 Eagle fighter, E-3 Sentry AWACS, Harpoon Missile, NASA's Skylab, MOL, and the Space Shuttle, as well as other aircraft. Development began in 1965, deliveries in 1967.
It descends from the approach used in the System/360 mainframe family of computers, in which the members of the family were intended for use in many varied user applications. (This is expressed in the name: there are 4π steradians in a sphere, just as there are 360 degrees in a circle.) Previously, custom computers had been designed for each aerospace application, which was extremely costly.
Models
System/4 Pi consisted of basic models:
Model TC (Tactical Computer) - A briefcase-size computer for applications such as missile guidance, helicopters, satellites and submarines. Weight: about
Model CP (Customized Processor/Cost Performance) - An intermediate-range processor for applications such as aircraft navigation, weapons delivery, radar correlation and mobile battlefield systems. Weight: total
Model CP-2 (Cost Performance - Model 2), weight
Model EP (Extended Performance) - A large-scale data processor for applications requiring real-time processing of large volumes of data, such as crewed spacecraft, airborne warning and control systems and command and control systems. Weight:
System/360 connections
Connections with System/360:
Main storage arrays of System/4 Pi were assembled from core planes that were militarized versions of those used in IBM System/360 computers
Software was for both 360 and 4 Pi
Model EP used an instruction subset of IBM System/360 (Model 44) - user programs could be checked on System/360
Uses
The Skylab space station employed the model TC-1, which had a 16-bit word length and 16,384 words of memory with a custom input/output assembly.
AP-101
The AP-101, being the top-of-the-line of the System/4 Pi range, shares its general architecture with the System/360 mainframes. It has 16 32-bit registers, and uses a microprogram to define an instruction set of 154 instructions. Originally only 16 bits were available for addressing memory; later this was extended with four bits from the program status word register, allowing a directly addressable memory range of 1M locations. This avionics computer has been used in the U.S. Space Shuttle, the B-52 and B-1B bombers, and other aircraft. It is a repackaged version of the AP-1 used in the F-15 fighter. When it was designed, it was a high-performance pipelined processor with core memory. While its specifications are exceeded by most modern microprocessors, it was considered high-performance for its era, as it could process 480,000 instructions per second (0.48 MIPS; compared to the 7,000 instructions per second (0.007 MIPS) of the computer used on Gemini spacecraft, while top-of-the-line microprocessors as of 2020 are capable of performing more than 2,000,000 MIPS). It remained in service on the Space Shuttle because it worked, was flight-certified, and developing a new system would have been too expensive. The Space Shuttle AP-101s were augmented by glass cockpit technology.
The B-1B bomber employs a network of eight model AP-101F computers.
The AP-101B originally used in the Shuttle had core memory. The AP-101S upgrade in the early 1990s used semiconductor memory. Each AP-101 on the Shuttle was coupled with an Input-Output Processor (IOP), consisting of one Master Sequence Controller (MSC) and 24 Bus Control Elements (BCEs). The MSC and BCEs executed programs from the same memory system as the main CPU, offloading control of the Shuttle's serial data bus system from the CPU.
The Space Shuttle used five AP-101 computers as general-purpose computers (GPCs). Four operated in sync, for redundancy, while the fifth was a backup running software written independently. The Shuttle's guidance, navigation and control software was written in HAL/S, a special-purpose high-level programming language, while much of the operating system and low-level utility software was written in assembly language. AP-101s used by the US Air Force are mostly programmed in JOVIAL, such as the system found on the B-1B Lancer bomber.
References
Bibliography
External links
IBM Archive: IBM and the Space Shuttle
IBM Archive: IBM and Skylab
NASA description of Shuttle GPCs
NASA history of AP-101 development
Space Shuttle Computers and Avionics
Guidance computers
System 4 Pi
Military computers
|
59727160
|
https://en.wikipedia.org/wiki/Carol%20Frieze
|
Carol Frieze
|
Carol Frieze works in the School of Computer Science at Carnegie Mellon University as director of the Women@SCS and SCS4ALL professional organizations.
She is co-author of a book on the successful efforts to attract and retain women in computing at Carnegie Mellon, where women represented 50% of the incoming class to the computer science major in fall 2018. She has been recognized by the A. Nico Habermann Award of the Computing Research Association and the AccessComputing Capacity Building Award.
Education and career
Frieze studied English literature for a while at the University of London before moving into cultural studies at Carnegie Mellon, eventually earning her Ph.D. from Carnegie Mellon University in Cultural Studies in Computer Science. Her 2007 dissertation, The critical role of culture and environment as determinants of women's participation in computer science, was supervised by Lenore Blum.
She has taught at the Royal National Orthopaedic Hospital School in England and in the English department at Carnegie Mellon before coming to work for the School of Computer Science.
Women@SCS, one of the organizations Frieze directs at Carnegie Mellon, is based on the guiding premise of leveling the playing field, working to ensure that women receive the same social, networking, mentoring, and professional opportunities that are more readily available to their majority male peers. She also works on diversity and inclusion through BiasBusters@CMU, an academic interactive program aiming to raise awareness of bias and mitigate the harmful effects of unconscious bias on campus.
Books
With Jeria Quesenberry, Frieze is a co-author of the book Kicking Butt in Computer Science: Women in Computing at Carnegie Mellon University (Dog Ear Publishing, 2015). The book describes Carnegie Mellon's successful work to attract and retain female students in Carnegie Mellon's computer science major by focusing on the culture of computing rather than by making changes to the computer science curriculum.
Frieze and Quesenberry are co-editors of the book Cracking the Digital Ceiling: Women in Computing Around the World, Cambridge University Press, 2020. This collection of global perspectives challenges the view that men are more suited to computing fields than women, a belief often perpetuated as an explanation for women’s low participation in computing in the USA. By providing an insider look at how different cultures from all continents around the world impact the experiences of women in computing, the book introduces readers to theories and evidence that support the need to turn to cultural and environmental factors, rather than innate potential, to understand what determines women’s participation in computing. The book is a wakeup call to examine the obstacles and catalysts within various cultures and environments that help determine women's participation in this rapidly growing field.
Recognition
Frieze won the A. Nico Habermann Award of the Computing Research Association in 2017. The award citation commended her for "devoting nearly two decades to promoting diversity and inclusiveness in computing", for publishing "valuable research towards understanding the challenges diverse populations face", and for helping to bring the number of women majoring in computer science at Carnegie Mellon close to 50%, "far above the national average".
Personal life
Frieze grew up in a coal mining village in Nottinghamshire, England, and was the first in her family to go to college. Frieze is married to mathematician Alan M. Frieze. They have two adult children and four grandchildren.
References
External links
Home page
Year of birth missing (living people)
Living people
American computer scientists
British computer scientists
American women computer scientists
Carnegie Mellon University faculty
|
2567417
|
https://en.wikipedia.org/wiki/Digital%20darkroom
|
Digital darkroom
|
Digital "darkroom" is the hardware, software and techniques used in digital photography that replace the darkroom equivalents, such as enlarging, cropping, dodging and burning, as well as processes that don't have a film equivalent.
All photographs benefit from being developed. With film this could be done at the print lab, or an inexpensive home darkroom. With digital, many cameras are set up to do basic photo enhancement (contrast, color saturation) immediately after a picture is exposed, and to deliver a finished product. Higher end cameras, however, tend to give a flatter, more neutral image that has more data but less "pop," and needs to be developed in the digital darkroom.
Setting up a film darkroom was primarily an issue of gathering the right chemicals and lighting; a digital darkroom consists of a powerful computer, a high-quality monitor setup (dual monitors are often used) and software. A printer is optional; many photographers still send their images to a professional lab for better results and, in some cases, a better price.
While each implementation is unique, most share several traits: an image editing workstation as the cornerstone, often a database-driven digital asset management system like Media Pro 1 to manage the collection as a whole, a RAW conversion tool like Adobe Photoshop Lightroom or Capture One, and in many cases the software that came with the camera is used as an automated tool to "upload" photos to the computer. The machine itself is almost always outfitted with as much RAM as possible and a large storage subsystem - big hard drives. RAID and external USB and FireWire drives are popular for storage. Most photographers consider a DVD-burner essential for making long term backups, and keep at least one set off-site.
The term was coined by Gerard Holzmann of Bell Labs for a book entitled Beyond Photography: The Digital Darkroom, in which he describes his pico image manipulation language (not to be confused with the pico programming language).
Software
The software employed in a digital darkroom varies greatly depending on the photographer's needs, budget and skill. The following are general areas and examples of software.
Image Acquisition: entails downloading images from a camera or removable storage device or importing from a scanner. Windows XP and Windows Vista both include an inbuilt wizard for importing images, including scanning images. Many professionals however may choose an importation tool built into image management software such as Adobe Photoshop Lightroom, Apple Aperture, ACDSee, Capture One, or darktable.
Image Library Management: involves managing images in a photographer's library and may extend to backing up images. Software such as Adobe Photoshop Lightroom, Apple Aperture and Media Pro 1 are examples of major image management software.
RAW Software: software, either stand alone or as part of image library management software that is designed to import and process RAW images. Most digital cameras capable of outputting RAW images will include a program or plug-in for this purpose such as Canon RAW Image Converter. Commercial Programs such as Adobe Photoshop Lightroom, Apple Aperture and Capture One, as well as open source projects such as darktable and RawTherapee, include extensive support for RAW importation and processing.
Image Editing: There are countless image editing suites, programs and tools available. Adobe Photoshop is among the most highly used in professional circles as are programs from Apple Inc., Microsoft, Macromedia (now Adobe), ACD, Phase One and various open source projects. Consumers may use professional software or choose less expensive options such as Adobe Photoshop Elements, Capture One Express or free open source options such as The GIMP.
Camera Control Software: software that can remotely control a camera from a computer connected via USB (tethered shooting). Normally included as utilities with camera, these allow photographers to control the camera from a nearby computer. Cameras such as the Canon 40D include such software and a live view mode so that a user may use a computer to control numerous functions of the camera while seeing a virtual viewfinder onscreen.
Capture One is one of the pioneer software programs used for tethered shooting which is very useful especially for studio photographers.
Capture Pilot is a photography app with camera control. The photographer is able to use a virtual camera display on an iPhone or iPad to remotely fire the camera and control capture parameters such as ISO sensitivity, exposure mode, shutter speed, aperture, exposure compensation. Capture Pilot requires Capture One to function.
Hardware
As image size and resolution increases, so do the requirements for hardware in a digital darkroom.
Computer: A computer in a digital darkroom typically has a generous amount of RAM, often 4GB or more, coupled with discrete graphics and a powerful multicore processor. For much of the 1980s and 90s, Macintosh-based systems were dominant in the digital imaging market, as Adobe's powerful new Photoshop software had at first been developed only for the Mac. However, Windows-based systems such as Dell's high-end Precision range have become increasingly popular in recent times; better value for money than Apple's high-end Mac Pro and a more familiar operating system are both factors that affect the choice of many prospective buyers of photo-editing systems.
Cameras/Scanners: Digital cameras and image scanners are increasing in quality, including the amount of colour they can capture and output. Many newer digital cameras can support wider colour spaces such as Adobe RGB and have higher resolution analog-to-digital converters; 14 bits rather than the common 12 bits.
Displays: Professionals may use premium displays from companies such as EIZO and Dell which are capable of displaying a wider range of colours than consumer oriented devices.
Printers: In addition to computers and displays, digital darkrooms may include printing equipment, ranging from smaller printers for proofing to large format production printers. Scanner and studio photographic equipment may also be included.
Digimage Arts program
Digital Darkroom is also the trademarked name of an image editing program for the Macintosh published by Digimage Arts.
References
Digital photography
Photo software
|
27633533
|
https://en.wikipedia.org/wiki/Vernier%20Software%20%26%20Technology
|
Vernier Software & Technology
|
Vernier Software & Technology is an educational software and equipment company situated in Beaverton, Oregon that produces sensors and graphing software for use in scientific education. Vernier was one of the first companies to popularize the use of computers and sensor technology, known as "probeware" or "Microcomputer Based Labs" (MBL), in laboratory experiments.
History
Vernier Software & Technology was founded in Portland, Oregon in 1981 in the home of David Vernier, a high-school physics teacher, and Christine Vernier, a local business manager.
The first software programs developed by David Vernier were scientific simulations for Apple II computers. In 1982, David developed the program Graphical Analysis, which allowed an individual to manually enter data into a table and display the data as a graph. That year the company started producing data-acquisition software and providing instructions for individuals to build their own sensors. Gradually the company expanded the product line to include software for other computers using the DOS operating system and for Macintosh computers.
In the late 1980s, the company started producing assembled temperature sensors, and many other types of sensors, such as photogates and motion detectors for studying moving objects. In 1990, the Universal Lab Interface (ULI), the MultiPurpose Lab Interface (MPLI), and software to run on Macintosh and Windows computers were introduced, quickly followed by the Serial Box Interface.
In 1994, Vernier began collaboration with Texas Instruments to support data collection on graphing calculators, after Texas Instruments introduced the Calculator-Based Laboratory (CBL). In 1996, Vernier developed Logger Pro, a general-purpose data collection and analysis computer program, which after many revisions is now called Logger Pro 3.
Current products
Vernier Software & Technology exports products to over 120 countries through Vernier International, based in Sarasota, Florida. The company produces 75 sensors and six data-collection interfaces.
Logger Pro 3 software collects and analyzes data from Vernier interfaces, plus
Electronic balances from Ohaus
GPS from Garmin
Spectrometers from Ocean Optics
Gas chromatographs developed in collaboration with Seacoast Science
Other strategic partnerships for STEM education include the development of an adapter with Lego Education, which allows sensors to be used with Lego Mindstorms NXT; and SensorDAQ, a sensor interface developed with National Instruments that may be used with LabVIEW.
Company
The company is located in a LEED gold-certified building and employs approximately 100 individuals. David Vernier serves as CEO of Vernier Software & Technology and oversees product development and Christine Vernier serves as COO and oversees company operations.
United States President Barack Obama, then a presidential candidate, visited the company in May 2008.
The company has won many awards, including Fastest Growing Private Company in Oregon from Portland Business Journal, 100 Best Places to Work in Oregon from Oregon Business Magazine for the last 11 straight years, and Best Green Companies in Oregon.
Vernier Software & Technology has been engaged in many philanthropic endeavors, including the funding of the Vernier Technology Laboratory at the Oregon Museum of Science and Industry and many other educational, environmental, and social service organizations.
The Vernier Technology Awards are presented every year to seven teachers at the National Science Teachers Association convention.
References
External links
Vernier Software & Technology website — homepage.
Oregon Business
Association of Fundraising Professionals
U.S. Green Building Council
National Science Teachers Association
Vernier to invest $2.8 million under Beaverton Enterprise Zone program - The Oregonian
Educational technology companies of the United States
Educational software companies
Companies based in Beaverton, Oregon
Software companies established in 1981
1981 establishments in Oregon
|
60784315
|
https://en.wikipedia.org/wiki/2018%20Google%20data%20breach
|
2018 Google data breach
|
The 2018 Google data breach was a major data privacy scandal in which the Google+ API exposed the private data of over five hundred thousand users.
Google+ managers first noticed harvesting of personal data in March 2018, during a review following the Facebook–Cambridge Analytica data scandal. The bug, although it was fixed immediately, had exposed the private data of approximately 500,000 Google+ users to the public. Google did not reveal the leak to the network's users. In November 2018, another data breach occurred following an update to the Google+ API. Although Google found no evidence of misuse, approximately 52.5 million personal profiles were potentially exposed. Google subsequently announced that Google+ would be shut down, citing low use and technological challenges, and the consumer service closed in April 2019.
Overview of Google+
Google+ was launched in June 2011 as an invite-only social network, but was opened for public access later in the year. It was managed by Vic Gundotra.
Similar to Facebook, Google+ also included key features Circles, Hangouts and Sparks.
Circles let users personalize their social groups by sorting friends into different categories. Once allowed into a Circle, users could regulate information in their individual spaces.
Hangouts included video chatting and instant messaging between users.
Sparks allowed Google to track users' past searches to find news and content related to their interests.
Google+ was linked to other Google services, such as YouTube, Google Drive and Gmail, giving it access to roughly 2 billion user accounts. However, fewer than 400 million consumers actively used Google+, and 90% of user sessions lasted less than five seconds.
The breaches
In March 2018, Google developers found a data breach within the Google+ People API in which external apps acquired access to Profile fields that were not marked as public. 500,000 Google+ accounts were included in the breach, which allowed 438 external apps unauthorized access to users' private names, email addresses, occupations, genders and ages. This information was available between 2015 and 2018. Google found no evidence of any user's personal information being misused, nor that any third-party app developers were aware of the leak.
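The mechanism can be illustrated with a short sketch of how a third-party app fetched profile data once a user had granted OAuth consent. This is a hedged, minimal illustration only: the plus/v1 people.get endpoint shown is the historical (now retired) Google+ API, the access token is a placeholder, and the listed field names are assumptions used for illustration; it is not the code of any app involved in the incident.

import requests

# Hypothetical OAuth 2.0 access token obtained after the user consented to the app.
ACCESS_TOKEN = "ya29.example-placeholder-token"

# Historical Google+ people.get endpoint (retired along with the Google+ API).
resp = requests.get(
    "https://www.googleapis.com/plus/v1/people/me",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    timeout=10,
)
profile = resp.json()

# The March 2018 bug meant a response like this could include fields the user
# had not marked public; a correct response should contain public fields only.
for field in ("displayName", "emails", "occupation", "gender", "ageRange"):
    print(field, "->", profile.get(field))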
In November 2018, a software update created another data breach within the Google+ API. The bug impacted 52.5 million users, where, similarly to the March breach, unauthorized apps were able to access Google+ profiles, including users' names, email addresses, occupations and ages. Apps could not access financial information, national identification numbers, or passwords. Blog posts, messages and phone numbers also remained inaccessible if marked as private. Unlike the previous breach, access was only available for six days before Google learned of the breach. Once more, Google found no evidence of data being misused by third-party developers.
Responses
In October 2018, the Wall Street Journal published an article outlining the initial breach and Google's decision to not disclose it to users. At the time, there was no federal law that required Google to inform its consumers of data breaches. Google originally did not disclose the breach out of fears of being compared to Facebook's recent data leak and subsequent loss of consumer confidence. In response to the Wall Street Journal article, Google announced that Google+ would be shut down in August 2019. After the second data leak, the date was moved up to April 2019. In response to the data breach, enterprise consumers were notified of the bug's impact and given instructions on how to save, download and delete their data prior to the Google+ shutdown. Google's Privacy and Data Protection Office found no misuse of user data.
Prior to the Google+ shutdown, Google set a 10-month period in which users could download and migrate their data. After the 10-month period, user content was deleted. On 4 February 2019, consumers were no longer able to create new Google+ profiles. Google shut down Google+ APIs on 7 March 2019 to ensure that developers did not continue to rely on the APIs prior to the Google+ shutdown.
Google is the principal subsidiary of its parent company, Alphabet Inc. After the data breach, Alphabet Inc. share prices fell by 1% to $1,157.06 on 9 October 2018, after dipping to $1,135.40 earlier that morning, the lowest price since 5 July 2018. After the publication of The Wall Street Journal article, share prices dropped by as much as 2.1% over the two days through 10 October 2018. Share prices steadily increased from this point and returned to the 8 October 2018 level on 5 February 2019.
Google planned to rebuild Google+ as a corporate enterprise network. Google Play now assesses which apps can ask for permission to access the user's SMS data; only an app set as the default handler for telephony is able to make such requests. Prior to the data breaches, apps were able to request access to all of a consumer's data simultaneously. Now, each app must request permission for each aspect of a consumer's profile.
References
Google
Data breaches
2018 in computing
|
57004467
|
https://en.wikipedia.org/wiki/Daniel%20M.%20Russell
|
Daniel M. Russell
|
Daniel M. Russell is an American computer scientist who is a senior research scientist at Google. He teaches on the subject of effective web-search strategies, using large-scale teaching systems developed by him at Google. Russell sometimes refers to himself as a search anthropologist for his focus on user experience in web search and improving sensemaking of information with technology. Russell is also a Resident Futurist at University of Maryland, where he works for the Office of the Vice-President for Research.
Over the course of his career, Russell has held research positions with IBM (as a senior research scientist), Apple Inc. (where he wrote the first 100 web pages for www.Apple.com using SimpleText) and Xerox, and worked briefly at a startup that developed tablet computers a few years before the iPad.
Education
Russell graduated from University of California at Irvine with a B.S. in Information and Computer Science (1977). He received his M.S. (1979) and Ph.D. (1985) in Computer Science from the University of Rochester. His doctoral work was titled "Schema-Based Problem Solving", which was based on "using recombinations of pre-stored plans in sophisticated ways". While at the University of Rochester, Russell did research work in "the neuropsychology of laterality, models of apraxia and aphasia, coordinated motor movements and computer vision".
Career
Russell joined Xerox Corporation in 1981 where he worked as a consultant at the Webster Research Center in New York. Russell then became a Research Associate where he engaged in AI research and the development of Interlisp-D courses. In 1982, he joined the research staff at PARC. Until 1991, he led a project called "Instructional Design Environment" (IDE) with Richard Burton and Thomas P. Moran to "develop a practical computer-aided design and analysis system for use in ill-structured design tasks". He then worked in the User Interface Research group, led by Stuart Card, which studied the uses of information visualization techniques.
Russell worked at Apple in the Advanced Technology Group sector from 1993 to 1997. He managed research within the User Experience Research Group which studied issues of sensemaking, cognitive modelling of analysis tasks, synchronous and asynchronous collaboration, shared awareness of individual state, joint work coordination, and knowledge-based use of complex, heterogeneous information. Alongside his research, he developed applications such as Knowbots and AI planner-based assistants for Macintosh OS. Russell subsequently became the Director of the Knowledge Management Technologies laboratory where he led the research efforts in five areas: Intelligent Systems, Spoken Language, User Experience, Interaction Design, and Information Technology. As Director, Russell also worked alongside the Apple CEO Tim Cook and founder Steve Jobs in cooperation on Network Computing.
Russell was an adjunct lecturer on the Engineering and Computer Science faculty of the University of Santa Clara (1998), and has taught special topics classes in Artificial Intelligence at Stanford University (1994).
Russell returned to Xerox in 1997 where he worked as a manager in the User Experience Research area through 2000. From 1998 to 1999, Russell led the Madcap project, a system to capture, organize and render large amounts of complex presentation materials into an understandable whole. The project is implemented in Java and Quicktime.
Russell joined IBM in 2000, where he managed a research group in the User Sciences and Experience Research (USER) lab at the IBM Almaden Research Center. He subsequently became a senior manager where he led larger research groups in areas covering user experience design of large systems. Until 2005, he engaged in understanding the sensemaking behaviour of people dealing with mass information collection. He has also contributed to the design and use case studies of the BlueBoard system as a collaboration tool. BlueBoard is a large interactive system and display surface for collaboration whose primary goal was to support quick information access and sharing through shoulder-to-shoulder collaboration. The tool was also used to explore computer interfaces in public spaces. The project's success led it to be installed in the main lobby of the IBM Watson Research Lab as well as the boardroom of IBM CEO Lou Gerstner at his request.
Russell joined Google in 2005; as of 2018, he is a senior research scientist and lead for the research area of "Search Quality & User Happiness". He coordinated the development of two Massive Open Online Courses (MOOCs) on effective searching skills, launched in 2012 on PowerSearchingWithGoogle.com, which attracted more than 3 million students. By September 2019, when MIT Press published his book The Joy of Search, more than 4.4 million people had taken the Power Searching with Google MOOC.
He also led a Search Education team that developed A Google a Day, launched April 11, 2011, a large-scale teaching system in which users can practice their searching skills with Google. The software package for it was later re-packaged and offered on Google's Course Builder.
Russell has also given commencement addresses, lectures and keynote talks on a range of academic topics.
Russell has helped people search better, spending time with ordinary users and studying how they look for information. Russell added in an interview: "One statistic blew my mind. 90 percent of people in their studies don't know how to use CTRL/Command + F to find a word in a document or web page!" This advice has also been translated into other languages, such as Spanish, and shared through videos ("1 Minute Morceaux" on his YouTube channel) and tips aimed at improving web literacy.
On March 26, 2019, Russell mentioned on Twitter that his quest is to teach the world to be better at online research. In the same tweet, and also on his blog, Russell shared his most recent publication, an article in Scientific American called "How to Be a Better Web Searcher: Secrets from Google Scientists", written with Mario Callegaro.
On March 27, 2019, Russell also announced on his blog the Amazon ordering entry for his book, The Joy of Search, published by MIT Press.
On September 24, 2019, his book The Joy of Search went on sale. It has been reviewed and presented in different venues and by different people, for example by Jill O'Neill, Director of Content for NISO (National Information Standards Organization), and in a keynote ("The Joy of Search: Augmenting intelligence by teaching people how to Search") at the International Conference of Education, Research and Innovation (ICERI).
Research
Russell's research focus has been human experience with search engines and in large, complex collections of information. He aims to design comprehensible and intuitive ways for users to engage with information effectively. Particular topics include the design of information experience; sensemaking; intelligent agents; knowledge-based assistance; information visualization; multimedia documents; advanced design and development environments; design rationale; planning; intelligent tutoring; hypermedia; and human–computer interfaces.
While developing AI technology at Xerox PARC, Russell realized that sophisticated technology was useless if people did not intuitively know how to use it. This motivated him to shift his focus to the sciences of user experience.
In 2011, Russell taught effective searching skills and enhancing learning efficacy. Russell also investigated new approaches to dealing with the growing amount of available information.
Authored publications and cited works
Russell's authored publications in topics including education innovation, human–computer interaction and visualization, information retrieval and the web, and mobile systems can be found on the Google AI website. His works are widely cited by other authors.
Personal life
Russell blogs on effective searching skills as well as his own investigations in sensemaking and information foraging.
Russell started writing his blog, SearchResearch, in 2010.
On October 23, 2013, Russell appeared on Lifehacker's "The How I Work series", interviewed by Tessa Miller. In it, Russell describes a day in his life, how he works and some personal topics.
On August 9, 2017, his blog reached its 1,000th post.
Awards
Russell was inducted into the CHI Academy (ACM) in 2016. He was added to the UC Irvine Information & Computer Science Department Hall of Fame in 2015. In 2013, Russell received the UC Irvine Bren School's 2013 Lauds & Laurels Distinguished Alumnus award.
References
Living people
Date of birth missing (living people)
University of California, Irvine alumni
University of Rochester alumni
American computer scientists
Google employees
Year of birth missing (living people)
|
51665202
|
https://en.wikipedia.org/wiki/Software%20monetization
|
Software monetization
|
Software monetization is a strategy employed by software companies and device vendors to maximize the profitability of their software. The software licensing component of this strategy enables software companies and device vendors to simultaneously protect their applications and embedded software from unauthorized copying, distribution, and use, and capture new revenue streams through creative pricing and packaging models. Whether a software application is hosted in the cloud, embedded in hardware, or installed on premises, software monetization solutions can help businesses extract the most value from their software. Another way to achieve software monetization is through paid advertising and the various compensation methods available to software publishers. Pay-per-install (PPI), for example, generates revenue by bundling third-party applications, also known as adware, with either freeware or shareware applications.
History
The exact origin of the term 'software monetization' is unknown, however, it has been in use in the information security industry since 2010. It was first used to articulate the value of licensing for cloud-hosted applications, but later came to encompass applications embedded in hardware and installed on premises. Today, software monetization broadly applies to software licensing, protection, and entitlement management solutions. In the digital advertising space, the term refers to solutions that increase revenue through installs, traffic, display ads, and search.
Key areas of software monetization
IP protection
Software constitutes a significant part of a software company or device vendor's intellectual property (IP) and, as such, may benefit from strong security, encryption, and digital rights management (DRM). Depending on its particular use case, a company can choose to implement a hardware, software, or cloud-based licensing solution, or it can open source the software and rely on donations and/or compensation for support, customization or enhancements.
A hardware-based protection key, or dongle, is best suited to software publishers concerned about the security of their product as it offers the highest level of copy protection and IP protection. Although a key must be physically connected in order to access or run an application, end users are not required to install any device drivers on their machines. A software-based protection key is ideal for software publishers who require flexible license delivery. The virtual nature of software keys eliminates the need to ship a physical product, thus enabling end users to quickly install and use an application with minimal fuss. Cloud-based licensing, on the other hand, provides automatic and immediate license enablement, so users can access software from any device including virtual machines and mobile devices.
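As an illustration of the software-based protection key described above, the following minimal sketch signs a license payload with an HMAC and verifies it before unlocking features. All names, fields, and the signing scheme are assumptions for illustration, not any vendor's actual implementation; production systems typically use asymmetric signatures so the verification key shipped with the client cannot be used to mint new licenses.

import base64
import hashlib
import hmac

VENDOR_SECRET = b"vendor-signing-secret"  # hypothetical key held by the license server

def issue_license(customer, edition, expires):
    """Create a signed license token (done on the vendor's side)."""
    payload = "{}|{}|{}".format(customer, edition, expires).encode()
    sig = hmac.new(VENDOR_SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_license(token):
    """Verify the signature before enabling licensed features (done in the app)."""
    payload_b64, _, sig_b64 = token.partition(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    sig = base64.urlsafe_b64decode(sig_b64)
    expected = hmac.new(VENDOR_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged license
    customer, edition, expires = payload.decode().split("|")
    return {"customer": customer, "edition": edition, "expires": expires}

token = issue_license("Example Corp", "professional", "2025-12-31")
print(verify_license(token))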
It is in the best interests of software companies and device vendors to take the necessary measures to protect their code from software piracy, a problem that costs the global software industry more than $100 billion annually. However, software protection is not just about preventing revenue loss; it is also about an organization's ability to protect the integrity of its product or service and brand reputation.
Pricing and packaging
An independent report by Vanson Bourne found that software vendors are losing revenue due to rigid licensing and delivery options. Since the demands of enterprise and end users are constantly evolving, software companies and device vendors must be able to adapt their pricing and packaging strategies on the fly. Separating an application's features and selling them individually at a premium is a highly effective way to reach new market segments. Customers have come to expect the freedom to consume a software offering on their own terms, which is why software companies and device vendors are increasingly turning to flexible licensing solutions.
Entitlement management
An entitlement management solution makes it possible to activate and provision cloud, on-premises, and embedded software applications from a single platform. Having the ability to manage homegrown or third-party licensing systems from one, centralized interface is conducive to an operationally efficient back office. With such a solution in place, time-consuming manual tasks can be automated for greater accuracy and reduced costs. Self-service web portals allow end users to perform a variety of tasks themselves, cutting down on support calls and improving customer satisfaction.
Usage tracking
Usage tracking provides essential business insight into end-user entitlements, as well as the consumption of products and features. Advanced data collection and reporting tools help optimize investment in the product roadmap and drive future business strategies. Furthermore, making usage data accessible to users helps them stay in compliance with their license agreements.
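A minimal sketch of the idea, assuming a hypothetical product with per-feature monthly quotas: each feature call is recorded as an event and a report compares consumption against the entitlement. Feature names and quota values are invented for illustration.

from collections import Counter
from datetime import datetime, timezone

events = []  # in a real product this would be a telemetry feed or database
entitlement = {"render_4k": 100, "batch_export": 50}  # hypothetical monthly quotas

def record_usage(feature):
    """Record one use of a feature with a UTC timestamp."""
    events.append({"feature": feature, "ts": datetime.now(timezone.utc).isoformat()})

def usage_report():
    """Compare recorded consumption against the customer's entitlement."""
    counts = Counter(e["feature"] for e in events)
    return {feature: {"used": counts.get(feature, 0),
                      "quota": quota,
                      "in_compliance": counts.get(feature, 0) <= quota}
            for feature, quota in entitlement.items()}

record_usage("render_4k")
record_usage("render_4k")
print(usage_report())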
Advertising
The use of commercial advertisements and contextual advertisements has been a foundation of software monetization since free software first hit the market. Advertisements can take many different forms, such as text ads, banners, short commercial videos and other types of software advertisements.
Emerging trends
Many traditional device vendors still see themselves as hardware providers, first and foremost, even though the most valuable component of their offering is the embedded software driving it. However, since the advent of the Internet of Things (IoT), that paradigm is shifting toward a more software-centric focus, as device vendors large and small make the inevitable business transformation into software companies. The need to license software, manage entitlements, and protect trade secrets cuts across all industries; from medtech to industrial automation and telecommunications.
Antitrust compliance of software monetization
Some software companies are among the most profitable businesses in the world. For example, Amazon is the dominant market leader in e-commerce, with 50% of all online sales in the United States going through the platform. Another highly successful software company, Apple, shares a duopoly with Alphabet in the field of mobile operating systems: 27% of the market share belongs to Apple (iOS) and 72% to Google (Android). Alphabet, Facebook and Amazon have been referred to as the "Big Three" of digital advertising.
In most jurisdictions around the world, it is an essential legal obligation for any software company to apply its software monetization strategy in compliance with antitrust laws. Unfortunately, e-commerce is highly susceptible to antitrust violations that often have to do with improper software monetization. Some software companies systematically utilize price fixing, kickbacks, dividing territories, tying agreements and anticompetitive product bundling (although not all product bundling is anticompetitive), refusal to deal and exclusive dealing, vertical restraints, horizontal territorial allocation, and similar anticompetitive practices to limit competition and to increase the opportunity for monetization.
In 2019 and 2020, the Big Tech industry became the center of antitrust attention from the United States Department of Justice and the United States Federal Trade Commission, which included requests to provide information about prior acquisitions and potentially anticompetitive practices. Some Democratic candidates running for president proposed plans to break up Big Tech companies and regulate them as utilities. "The role of technology in the economy and in our lives grows more important every day," said FTC Chairman Joseph Simons. "As I've noted in the past, it makes sense for us to closely examine technology markets to ensure consumers benefit from free and fair competition."
In June 2020, the European Union opened two new antitrust investigations into practices by Apple. The first investigation focuses on issues including whether Apple is using its dominant position in the market to stifle competition using its Apple Music and book streaming services. The second investigation focuses on Apple Pay, which allows payment by Apple devices to brick-and-mortar vendors. Apple limits the ability of banks and other financial institutions to use the iPhones' near-field radio frequency technology.
Fines are insufficient to deter anti-competitive practices by high tech giants, according to European Commissioner for Competition Margrethe Vestager. Commissioner Vestager explained, "fines are not doing the trick. And fines are not enough because fines are a punishment for illegal behaviour in the past. What is also in our decision is that you have to change for the future. You have to stop what you're doing."
Gig economy online marketplaces like Uber, Lyft, Handy, Amazon Home Services, DoorDash, and Instacart have perfected a process in which workers deal bilaterally with gigs whose employers have none of the standard obligations of employers, while the platform operates the entire labor market to its own benefit – what some antitrust experts call a "for-profit hiring hall." Gig workers, such as Uber drivers, are not employees, and hence Uber setting the terms on which they transact with customers, including fixing the prices charged to customers, constitutes a violation of the ban on restraints of trade in the Sherman Antitrust Act of 1890. In the United States, the question of whether companies such as Uber engage in a price-fixing conspiracy, and whether that price fixing is horizontal, has yet to be resolved at trial. In response to price fixing allegations, Uber publicly stated that: "we believe the law is on our side and that's why in four years no anti-trust agency has raised this as an issue and there has been no similar litigation like it in the U.S."
The spirit of the antitrust law is to protect consumers from the anticompetitive behavior of businesses that have either monopoly power in their market or companies that have banded together to exert cartel market behavior. Monopoly or cartel collusion creates market disadvantages for consumers. However, the antitrust law clearly distinguishes between purposeful monopolies and businesses that found themselves in a monopoly position purely as the result of business success. The purpose of the antitrust law is to stop businesses from deliberately creating monopoly power.
Discussions of antitrust policy in software are often clouded by common myths about this widely misunderstood area of the law. For example, the United States federal Sherman Antitrust Act of 1890 criminalizes monopolistic business practices, specifically agreements that restrain trade or commerce. At the same time, the Sherman Act allows the organic creation of legitimately successful businesses that gain honest profits from consumers. The Act's main function is to preserve a competitive marketplace. The Big Tech companies are large and successful companies, but success alone is not reason enough for antitrust action. A legitimate breach of antitrust law must be the cause of any action against a business.
Antitrust law doesn't condemn a firm for developing a universally popular search engine, such as Google, even if that success leads to market dominance. It's how a monopoly is obtained or preserved that matters — not its mere existence.
See also
Business models for open-source software
License manager
Software protection dongle
Big Tech
Gig economy
Competition law
References
Software industry
|
25781518
|
https://en.wikipedia.org/wiki/Yousef%20Saad
|
Yousef Saad
|
Yousef Saad (born 1950) is an I.T. Distinguished Professor of Computer Science in the Department of Computer Science and Engineering at the University of Minnesota. He has held the William Norris Chair for Large-Scale Computing since January 2006. He is known for his contributions to matrix computations, including iterative methods for solving large sparse linear algebraic systems, eigenvalue problems, and parallel computing. Saad is listed as an ISI highly cited researcher in mathematics and is the author of the highly cited book Iterative Methods for Sparse Linear Systems. Yousef Saad is a SIAM fellow (class of 2010) and a fellow of the AAAS (2011).
Education and career
Saad received his B.S. degree in mathematics from the University of Algiers, Algeria in 1970. He then joined the University of Grenoble for the doctoral program and obtained a junior doctorate, 'Doctorat de troisieme cycle', in 1974 and a higher doctorate, 'Doctorat d'Etat', in 1983. During the course of his academic career he has held various positions, including Research Scientist in the Computer Science Department at Yale University (1981–1983), Associate Professor at the University of Tizi-Ouzou in Algeria (1983–1984), Research Scientist in the Computer Science Department at Yale University (1984–1986), and Associate Professor in the Mathematics Department at the University of Illinois at Urbana-Champaign (1986–1988). He also worked as a Senior Scientist at the Research Institute for Advanced Computer Science (RIACS) during 1988–1990.
Saad joined University of Minnesota as a Professor in the Department of Computer Science in 1990. At Minnesota, he held the position of Head of the Department of Computer Science and Engineering between January 1997 and June 2000. Currently, he is the I. T. Distinguished Professor of Computer Science at University of Minnesota.
Books
Saad is the author of several influential books in linear algebra and matrix computation, which include:
Numerical Methods for Large Eigenvalue Problems, Halstead Press, 1992.
Iterative Methods for Sparse Linear Systems, 2nd ed., Society for Industrial and Applied Mathematics, Philadelphia, 2003.
"Parallel Algorithms for Irregularly Structured Problems": Third International Workshop., Lecture Notes in Computer Science 1117, IRREGULAR '96 Santa Barbara, CA, USA, August 19–21, 1996 Proceedings [1 ed.]
"High-Performance Scientific Computing: Algorithms and Applications [1 ed.]"., Springer-Verlag London
He has also co-edited the following conference proceedings:
A. Ferreira, J. Rolim, Y. Saad, and T. Yang, Parallel Algorithms for Irregularly Structured Problems, Proceedings of Third International Workshop, IRREGULAR’96 Santa Barbara, CA USA, 1996. Lecture Notes in Computer Science, No 1117. Springer Verlag, 1996.
D. E. Keyes, Y. Saad, and D. G. Truhlar, Domain-Based Parallelism and Problem Decomposition Methods in Computational Science and Engineering. SIAM, Philadelphia, 1995.
D. L. Boley, D. G. Truhlar, Y. Saad, R. E. Wyatt, and L. E. Collins, Practical Iterative Methods for Large Scale Computations. North Holland, Amsterdam, 1989.
References
External links
University of Minnesota faculty
Numerical analysts
Algerian mathematicians
Algerian computer scientists
20th-century American mathematicians
21st-century American mathematicians
University of Algiers alumni
Grenoble Alpes University alumni
Living people
Fellows of the Society for Industrial and Applied Mathematics
Computational chemists
1950 births
21st-century Algerian people
|
49108842
|
https://en.wikipedia.org/wiki/Ntiva
|
Ntiva
|
Ntiva is an information technology company providing IT consulting, managed IT services and cyber security services, with headquarters in McLean, VA. The company was founded by Steven Freidkin, who began the business while still in high school; Ntiva was formally established after Freidkin graduated in 2004. The company was named to Inc. magazine's list of the 500 fastest-growing companies in America five years in a row. Ntiva has several branch office locations, including Washington DC, Chicago, New York City and Long Island, NY. In 2018, 2019, 2020, and 2021, the company received the Triple Crown Award from CRN, a brand of The Channel Company, recognizing notable information technology solution providers and technology integrators.
References
External links
Companies based in McLean, Virginia
Companies established in 2004
Information technology companies of the United States
|
2721221
|
https://en.wikipedia.org/wiki/Vitalic
|
Vitalic
|
Pascal Arbez-Nicolas (; born 18 May 1976), better known by his stage name Vitalic (), is a French electronic music producer.
History
His first singles were released in 1996 and 1997, but were confined to the underground electronic music scene. However, he became good friends with techno producer The Hacker, whom he met in Le Rex Club, the "techno temple" of Laurent Garnier. The Hacker suggested that he should send his new tracks to DJ Hell, head of International DeeJay Gigolo Records in Munich. Pascal did so, and International DeeJay Gigolo Records released the well-known Poney EP in 2001, which was a huge success shortly after its release. With the track "La Rock 01", Vitalic created a club anthem which was a hit in the summer of 2001. The track was also included on many compilation albums, even rock compilations. Miss Kittin included "La Rock 01" on her DJ mix album On the Road.
In 2005, Vitalic released his debut album, OK Cowboy, on Different/PIAS Recordings. Pascal states that all of the instruments used on the album are synthesized. His official website states that "the only thing he can't fake is the emotion that galvanizes his music."
His song "Trahison" from OK Cowboy was used in the trailer for the 2007 French film Naissance des Pieuvres. His song "Poney Part 1" was featured in the Pleix film Birds. It was announced by Festival Republic that Vitalic would be playing both the Reading and Leeds Festivals in the UK in August 2009.
Vitalic's second studio album Flashmob was released on 28 September 2009. The first single, "Your Disco Song" was available for streaming at Vitalic's MySpace page. He has spoken a great deal about the new disco influence on Flashmob. The song "Poison Lips" from Flashmob was used in the 2012 film Dredd, and for a 2016 TV advertisement for Amazon. Flashmob also provided the soundtrack for the film La leggenda di Kaspar Hauser.
Vitalic's third studio album, Rave Age, was released on 5 November 2012. On Metacritic, Rave Age has a combined "metascore" of 66 out of 100, indicating "generally favorable reviews."
In late 2016, Vitalic began a new live tour across Europe ahead of the release of his fourth studio album, Voyager, which was released on 20 January 2017. On Metacritic, Voyager has a combined "metascore" of 71 out of 100, indicating "generally favorable reviews."
In 2021, he released a new album, Dissidaence Episode 1; it was also issued on vinyl via Vitalic's official website. He also did a one-off collaboration with singer Emel Mathlouthi for a concert in Paris at the Théâtre du Châtelet: together they created new music around the poetry of Ghada Al-Samman for an event called Variations. The show was filmed for the Culturebox channel and uploaded to YouTube. Vitalic has started a worldwide tour to support Dissidaence Episode 1.
Discography
Albums
OK Cowboy (2005)
OK Cowboy (two-disc collector's edition) (2006)
V Live (2007)
Résumé (DJ mix album, previously titled This Is the Sound of Citizen) (2007)
Flashmob (2009)
Rave Age (2012)
Voyager (2017)
Dissidaence Episode 1 (2021)
Singles / EPs
Poney EP (2001)
"To L'An-fer From Chicago" (2003)
"Fanfares" (2004)
"My Friend Dario" (2005)
"No Fun" (2005)
"Bells" (2006) with Linda Lamb
"Disco Terminateur EP" (2009)
"Poison Lips" (2009)
"Second Lives" (2010)
"Remix del Blankito from Turiaso" (2011)
"Stamina" (2013)
"Fade Away" (2013)
"Film Noir" (2016)
"Waiting For The Stars" (2017)
"tu conmigo" (2017)
Remixes
Under the alias DIMA:
"Fuckeristic EP" Poetry, Soaked and Mobile Square 1999
"Take A Walk", Bolz Bolz
"Fadin' Away", The Hacker
"The Realm", C'hantal
"You Know", Hustler Pornstar
"The Essence Of It", Elegia
"U Know What U Did Last Summer", Hustler Pornstar
"Ice Breaker", Scratch Massive
"My Friend Dario", Vitalic (2005)
"Red X", Useless
Associated projects
Dima
Hustler Pornstar
The Silures, with Linda Lamb and Mount Sims
Vital Ferox, with Al Ferox
KOMPROMAT, with Rebeka Warrior
References
External links
Vitalic's website
Vitalic Club
Label Different Recordings
Label Citizen Records
Gigolo Records
1976 births
Ableton Live users
French electronic musicians
French house musicians
French people of Italian descent
French people of Spanish descent
Living people
People from Dijon
Remixers
|
1112764
|
https://en.wikipedia.org/wiki/SquirrelMail
|
SquirrelMail
|
SquirrelMail is a project that aims to provide both a web-based email client and a proxy server for the IMAP protocol.
The latest stable version, 1.4.23-svn, is tested with PHP up to version 8.1 and replaces version 1.4.22, which can only run on PHP versions 5.0–5.4. The svn part of the version name indicates that bugfixes and minor improvements are no longer published as new versions, but are instead maintained within the Apache Subversion version control system.
History
The webmail portion of the project was started by Nathan and Luke Ehresman in 1999 and is written in PHP. SquirrelMail is commonly deployed as part of a LAMP "stack", but any other operating system that supports PHP is supported as well. The web server needs access to the IMAP server hosting the email and to an SMTP server to be able to send mails.
SquirrelMail webmail outputs valid HTML 4.0 for its presentation, making it compatible with a majority of current web browsers. SquirrelMail webmail uses a plugin architecture to accommodate additional features around the core application, and over 200 plugins are available on the SquirrelMail website.
The SquirrelMail IMAP proxy server product was created in 2002 by Dave McMurtrie while at the University of Pittsburgh (where it was named "up-imapproxy", although it has become more commonly known as "imapproxy") and adopted by the SquirrelMail team in 2010. It is written in C and is primarily made to provide stateful connections for stateless webmail client software to an IMAP server, thus avoiding new IMAP logins for every client action and in some cases significantly improving webmail performance.
Both SquirrelMail products are free and open-source software subject to the terms of the GNU General Public License version 2 or any later version.
SquirrelMail webmail was included in the repositories of many major Linux distributions and is independently downloaded by thousands of people every month.
Platforms
SquirrelMail webmail is available for any platform supporting PHP. Most commonly used platforms include Linux, FreeBSD, macOS and the server variants of Microsoft Windows. SquirrelMail IMAP Proxy compiles on most flavors of Unix, and can generally be used on the same platforms that the webmail product can with the exception of Microsoft Windows, unless used in a Cygwin or similar environment. Apple shipped SquirrelMail as their supported web mail solution in Mac OS X Server.
Plugins
The SquirrelMail webmail client itself is a complete webmail system, but extra features are available in the form of plugins. There are over 200 third-party plugins available for download from the SquirrelMail website and SquirrelMail ships with several "standard" or "core" plugins.
Internationalization
SquirrelMail webmail has been translated into over 50 languages including Arabic, Chinese, French, German, and Spanish.
Notable installations
SquirrelMail has been implemented as the official email system of the Prime Minister's Office of the Republic of India for its security advantages over Microsoft's Outlook Express.
In 2004 HEC Montréal business school deployed SquirrelMail as part of a comprehensive webmail solution, to support thousands of users.
See also
Comparison of e-mail clients
Internet Messaging Program
Roundcube
References
External links
Email clients
Proxy server software for Linux
Free email software
Free software programmed in PHP
Free software webmail
|
5695480
|
https://en.wikipedia.org/wiki/Ubique%20%28company%29
|
Ubique (company)
|
Ubique was a software company based in Israel.
In 1994 the company launched the first social-networking software, which included instant messaging, voice over IP (commonly known as VoIP), chat rooms, web-based events, and collaborative browsing. It is best known for the Virtual Places software product and the technology used by Lotus Sametime. It is now part of IBM Haifa Labs.
Technology
Virtual Places
Ubique's best-known product is Virtual Places, a presence-based chat program in which users explore web sites together. It is used by providers such as VPChat and Digital Space and eventually evolved into Lotus Sametime.
Virtual Places requires a server and client software. Users start Virtual Places along with a web
browser and sign into the Virtual Places server. Avatars are overlaid onto the web browser and
users are able to collaborate with each other while they all visit web sites in real time.
Some Virtual Places consumer-oriented communities are still alive on the Web and are using the old version of it.
Instant Messaging and Chat
With the technology developed for Virtual Places, Ubique created an instant messaging and
presence technology platform which evolved into Lotus Sametime.
History
1994 – Ubique Ltd was founded in Israel by Ehud Shapiro and a group of scientists from
the Weizmann Institute to develop real-time, distributed computing products. The
company developed a presence-based chat system known as Virtual Places along with real-time
instant messaging and presence technology software. These were the very early days of the web, which at the time had only static data. Ubique's mission was "to add people to the web".
1995 – America Online Inc. purchased Ubique with the intention to use Ubique's Virtual
Places technology to enhance and expand its existing live online interactive communication for both the AOL consumer online service and the new GNN brand service. Only the GNN-branded Virtual Places product was ever released.
1996 – GNN was discontinued in 1996. Ubique's management, with the support of AOL, decided to look for other markets for Virtual Places technology. The outcome was that Ubique shifted Virtual Places from the consumer market to focus on presence technology and instant messaging for the corporate market. AOL divested Ubique but remained as a principal investor while Ubique sought a new owner.
1998 - Ubique was acquired by Lotus/IBM to integrate the core
technology of instant messaging and presence functions into a software product integrated with Lotus/IBM.
2000 - Lotus announced Lotus Sametime using Ubique's technology.
2006 - Elements of Ubique along with other Israeli-based companies were integrated into the
newly created IBM Haifa Labs. The Lab develops Session Initiation Protocol (SIP) infrastructure and features of real-time collaboration, including session management, presence awareness, subscriptions and notifications, text messaging, developer toolkits, and mobile real-time messaging infrastructure.
References
External links
IBM Haifa Labs website
Instant messaging
Software companies of Israel
Israeli companies established in 1994
IBM acquisitions
|
2878626
|
https://en.wikipedia.org/wiki/Optical%20computing
|
Optical computing
|
Optical computing or photonic computing uses photons produced by lasers or diodes for computation. For decades, photons have shown promise to enable a higher bandwidth than the electrons used in conventional computers (see optical fibers).
Most research projects focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical-electronic hybrid. However, optoelectronic devices consume 30% of their energy converting electronic energy into photons and back; this conversion also slows the transmission of messages. All-optical computers eliminate the need for optical-electrical-optical (OEO) conversions, thus reducing electrical power consumption.
Application-specific devices, such as synthetic-aperture radar (SAR) and optical correlators, have been designed to use the principles of optical computing. Correlators can be used, for example, to detect and track objects, and to classify serial time-domain optical data.
Optical components for binary digital computer
The fundamental building block of modern electronic computers is the transistor. To replace electronic components with optical ones, an equivalent optical transistor is required. This is achieved using materials with a non-linear refractive index. In particular, materials exist where the intensity of incoming light affects the intensity of the light transmitted through the material in a similar manner to the current response of a bipolar transistor. Such an optical transistor can be used to create optical logic gates, which in turn are assembled into the higher level components of the computer's central processing unit (CPU). These will be nonlinear optical crystals used to manipulate light beams into controlling other light beams.
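A toy numerical model can make the idea concrete: treat the optical transistor as an element whose transmission jumps once the combined intensity of its input beams crosses a threshold, and logic gates follow from the choice of threshold. The response function and threshold values below are illustrative assumptions, not a physical device model.

def nonlinear_transmission(intensity, threshold):
    """Crude saturable response: negligible output below threshold, full above."""
    return intensity if intensity >= threshold else 0.05 * intensity

def optical_gate(a, b, threshold):
    """Combine two input beams and read the transmitted intensity as a bit."""
    combined = a + b  # constructive combination of the two beams
    out = nonlinear_transmission(combined, threshold)
    return 1 if out > 0.5 else 0  # bright output = logical 1

ON, OFF = 1.0, 0.0
for a in (OFF, ON):
    for b in (OFF, ON):
        print("A=%d B=%d  OR=%d  AND=%d" % (
            a, b,
            optical_gate(a, b, threshold=0.9),    # fires if either beam is on
            optical_gate(a, b, threshold=1.9)))   # fires only if both beams are on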
Like any computing system, an optical computing system needs three things to function well:
optical processor
optical data transfer, e.g. fiber-optic cable
optical storage.
Substituting electrical components will need data format conversion from photons to electrons, which will make the system slower.
Controversy
There are some disagreements between researchers about the future capabilities of optical computers; whether or not they may be able to compete with semiconductor-based electronic computers in terms of speed, power consumption, cost, and size is an open question. Critics note that real-world logic systems require "logic-level restoration, cascadability, fan-out and input–output isolation", all of which are currently provided by electronic transistors at low cost, low power, and high speed. For optical logic to be competitive beyond a few niche applications, major breakthroughs in non-linear optical device technology would be required, or perhaps a change in the nature of computing itself.
Misconceptions, challenges, and prospects
A significant challenge to optical computing is that computation is a nonlinear process in which multiple signals must interact. Light, which is an electromagnetic wave, can only interact with another electromagnetic wave in the presence of electrons in a material, and the strength of this interaction is much weaker for electromagnetic waves, such as light, than for the electronic signals in a conventional computer. This may result in the processing elements for an optical computer requiring more power and larger dimensions than those for a conventional electronic computer using transistors.
A further misconception is that since light can travel much faster than the drift velocity of electrons, and at frequencies measured in THz, optical transistors should be capable of extremely high frequencies. However, any electromagnetic wave must obey the transform limit, and therefore the rate at which an optical transistor can respond to a signal is still limited by its spectral bandwidth. In fiber-optic communications, practical limits such as dispersion often constrain channels to bandwidths of 10s of GHz, only slightly better than many silicon transistors. Obtaining dramatically faster operation than electronic transistors would therefore require practical methods of transmitting ultrashort pulses down highly dispersive waveguides.
Photonic logic
Photonic logic is the use of photons (light) in logic gates (NOT, AND, OR, NAND, NOR, XOR, XNOR). Switching is obtained using nonlinear optical effects when two or more signals are combined.
Resonators are especially useful in photonic logic, since they allow a build-up of energy from constructive interference, thus enhancing optical nonlinear effects.
Other approaches that have been investigated include photonic logic at a molecular level, using photoluminescent chemicals. In a demonstration, Witlicki et al. performed logical operations using molecules and SERS.
Unconventional approaches
Time delays optical computing
The basic idea is to delay light (or any other signal) in order to perform useful computations. Of particular interest is solving NP-complete problems, as those are difficult problems for conventional computers.
Two basic properties of light are used in this approach:
The light can be delayed by passing it through an optical fiber of a certain length.
The light can be split into multiple (sub)rays. This property is also essential because it allows multiple solutions to be evaluated at the same time.
When solving a problem with time-delays the following steps must be followed:
The first step is to create a graph-like structure made from optical cables and splitters. Each graph has a start node and a destination node.
The light enters through the start node and traverses the graph until it reaches the destination. It is delayed when passing through arcs and divided inside nodes.
The light is marked when passing through an arc or through a node so that we can easily identify that fact at the destination node.
At the destination node we will wait for a signal (fluctuation in the intensity of the signal) which arrives at a particular moment(s) in time. If there is no signal arriving at that moment, it means that we have no solution for our problem. Otherwise the problem has a solution. Fluctuations can be read with a photodetector and an oscilloscope.
The first problem attacked in this way was the Hamiltonian path problem.
The simplest one is the subset sum problem. An optical device solving an instance with 4 numbers {a1, a2, a3, a4} works as follows:
The light enters at the Start node and is divided into 2 (sub)rays of smaller intensity. These 2 rays arrive at the second node at moments a1 and 0. Each of them is divided into 2 subrays which arrive at the 3rd node at moments 0, a1, a2 and a1 + a2. These represent all the subsets of the set {a1, a2}. We expect fluctuations in the intensity of the signal at no more than 4 different moments. At the destination node we expect fluctuations at no more than 16 different moments (which correspond to all the subsets of the given set). If there is a fluctuation at the target moment B, the problem has a solution; otherwise there is no subset whose sum of elements equals B. For a practical implementation we cannot have zero-length cables, so all cables are lengthened by a small value k (the same for all); in this case the solution is expected at moment B + n*k.
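The device can be simulated numerically: each node splits every incoming ray into one copy delayed by a_i and one passed straight through, so the set of arrival times at the destination is exactly the set of subset sums, shifted by the fixed per-arc delay k. The instance below is an illustrative example, not taken from the original papers.

def arrival_times(numbers, k=1):
    """Arrival moments at the destination: subset sums plus one k per traversed arc."""
    times = {0}
    for a in numbers:
        times = {t + k for t in times} | {t + a + k for t in times}
    return times

numbers = [3, 5, 9, 14]   # the instance {a1, a2, a3, a4}
B = 17                    # target sum
k = 1                     # small fixed delay added by every cable
n = len(numbers)

solvable = (B + n * k) in arrival_times(numbers, k)
print("fluctuation expected at moment", B + n * k, ":", solvable)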
Wavelength-based computing
Wavelength-based computing can be used to solve the 3-SAT problem with n variables, m clauses and no more than 3 variables per clause. Each wavelength contained in a light ray is treated as a possible value assignment to the n variables. The optical device contains prisms and mirrors that are used to discriminate the wavelengths which satisfy the formula.
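A software analogue of the same idea, under illustrative assumptions: every truth assignment plays the role of one wavelength present in the input ray, and each clause acts as a filtering stage (the prisms and mirrors) that removes the wavelengths violating it. The 3-SAT formula below is a made-up example.

from itertools import product

# (x1 or not x2 or x3) and (not x1 or x2 or x4) and (not x3 or not x4 or x2)
clauses = [[(1, True), (2, False), (3, True)],
           [(1, False), (2, True), (4, True)],
           [(3, False), (4, False), (2, True)]]
n = 4

wavelengths = set(product([False, True], repeat=n))  # all 2^n assignments enter at once
for clause in clauses:                               # one filtering stage per clause
    wavelengths = {w for w in wavelengths
                   if any(w[var - 1] == value for var, value in clause)}

print("satisfiable:", bool(wavelengths))
print("one satisfying assignment:", next(iter(wavelengths), None))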
Computing by xeroxing on transparencies
This approach uses a Xerox machine and transparent sheets for performing computations. The k-SAT problem with n variables, m clauses and at most k variables per clause has been solved in 3 steps (a simulation sketch follows the list):
Firstly all 2^n possible assignments of n variables have been generated by performing n xerox copies.
Using at most 2k copies of the truth table, each clause is evaluated at every row of the truth table simultaneously.
The solution is obtained by making a single copy operation of the overlapped transparencies of all m clauses.
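The three steps can be mirrored in software, again under illustrative assumptions: each transparency is modelled as a bitmask with one bit per truth-table row, evaluating a clause sets the bits of the rows it leaves clear, and overlaying all m transparencies is a single bitwise AND; any bit still set marks a satisfying assignment. The formula is the same made-up 3-SAT instance as in the wavelength sketch above.

n = 4
clauses = [[(1, True), (2, False), (3, True)],
           [(1, False), (2, True), (4, True)],
           [(3, False), (4, False), (2, True)]]

def row_assignment(row):
    """Step 1: the n xerox copies enumerate all 2^n truth-table rows."""
    return [(row >> i) & 1 == 1 for i in range(n)]

def clause_mask(clause):
    """Step 2: one transparency per clause, evaluated over every row at once."""
    mask = 0
    for row in range(2 ** n):
        bits = row_assignment(row)
        if any(bits[var - 1] == value for var, value in clause):
            mask |= 1 << row
    return mask

overlay = (1 << (2 ** n)) - 1     # a fully clear sheet
for clause in clauses:
    overlay &= clause_mask(clause)  # step 3: the single overlapping copy operation

print("satisfiable:", overlay != 0)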
Masking optical beams
The travelling salesman problem has been solved by Shaked et al. (2007) using an optical approach. All possible TSP paths are generated and stored in a binary matrix, which is multiplied by a gray-scale vector containing the distances between cities. The multiplication is performed optically using an optical correlator.
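The matrix formulation can be reproduced with an ordinary matrix–vector product, which is the operation the optical correlator performs in parallel. Each row of the binary matrix marks the directed city-to-city legs used by one candidate tour; multiplying by the vector of leg distances yields every tour length at once. The distance matrix below is an illustrative example.

from itertools import permutations
import numpy as np

dist = np.array([[0, 2, 9, 10],
                 [1, 0, 6, 4],
                 [15, 7, 0, 8],
                 [6, 3, 12, 0]], dtype=float)
n = dist.shape[0]

legs = [(i, j) for i in range(n) for j in range(n) if i != j]
leg_lengths = np.array([dist[i, j] for i, j in legs])

tours = [(0,) + p for p in permutations(range(1, n))]   # fix city 0 as the start
mask = np.zeros((len(tours), len(legs)))                # the binary path matrix
for r, tour in enumerate(tours):
    for a, b in zip(tour, tour[1:] + (tour[0],)):       # close the cycle
        mask[r, legs.index((a, b))] = 1

tour_lengths = mask @ leg_lengths                       # done optically by the correlator
best = int(np.argmin(tour_lengths))
print("shortest tour:", tours[best], "length:", tour_lengths[best])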
Optical Fourier co-processors
Many computations, particularly in scientific applications, require frequent use of the 2D discrete Fourier transform (DFT) – for example in solving differential equations describing propagation of waves or transfer of heat. Though modern GPU technologies typically enable high-speed computation of large 2D DFTs, techniques have been developed that can perform continuous Fourier transform optically by utilising the natural Fourier transforming property of lenses. The input is encoded using a liquid crystal spatial light modulator and the result is measured using a conventional CMOS or CCD image sensor. Such optical architectures can offer superior scaling of computational complexity due to the inherently highly interconnected nature of optical propagation, and have been used to solve 2D heat equations.
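To show the kind of workload such a co-processor targets, here is a minimal spectral solver for the 2D heat equation: every time step needs a forward and an inverse 2D DFT, which is exactly the transform a lens performs in an optical Fourier stage. Grid size, diffusivity and time step are illustrative choices.

import numpy as np

N, alpha, dt = 128, 0.1, 0.01
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.exp(-((x[:, None] - np.pi) ** 2 + (x[None, :] - np.pi) ** 2))  # initial hot spot

k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)   # angular wavenumbers
k2 = k[:, None] ** 2 + k[None, :] ** 2

for _ in range(100):
    u_hat = np.fft.fft2(u)                # the 2D DFT an optical lens could perform
    u_hat *= np.exp(-alpha * k2 * dt)     # exact decay of each Fourier mode
    u = np.real(np.fft.ifft2(u_hat))      # inverse transform back to the spatial grid

print("peak temperature after diffusion:", float(u.max()))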
Ising machines
Physical computers whose design was inspired by the theoretical Ising model are called Ising machines.
Yoshihisa Yamamoto's lab at Stanford pioneered building Ising machines using photons. Initially Yamamoto and his colleagues built an Ising machine using lasers, mirrors, and other optical components commonly found on an optical table.
Later a team at Hewlett Packard Labs developed photonic chip design tools and used them to build an Ising machine on a single chip, integrating 1,052 optical components on that single chip.
See also
Linear optical quantum computing
Optical interconnect
Optical neural network
Photonic integrated circuit
Photonic molecule
Photonic transistor
Silicon photonics
References
Further reading
D. Goswami, "Optical Computing", Resonance, June 2003; ibid July 2003. Web Archive of www.iisc.ernet.in/academy/resonance/July2003/July2003p8-21.html
K.-H. Brenner, Alan Huang: "Logic and architectures for digital optical computers (A)", J. Opt. Soc. Am., A 3, 62, (1986)
NASA scientists working to improve optical computing technology, 2000
Optical solutions for NP-complete problems
Speed-of-light computing comes a step closer New Scientist
External links
This Laser Trick's a Quantum Leap
Photonics Startup Pegs Q2'06 Production Date
Stopping light in quantum leap
High Bandwidth Optical Interconnects
https://www.youtube.com/watch?v=4DeXPB3RU8Y (Movie: Computing by xeroxing on transparencies)
Photonics
Classes of computers
Emerging technologies
|
685032
|
https://en.wikipedia.org/wiki/Security-evaluated%20operating%20system
|
Security-evaluated operating system
|
In computing, security-evaluated operating systems have achieved certification from an external security-auditing organization; the most popular evaluations are Common Criteria (CC) and FIPS 140-2.
Oracle Solaris
Trusted Solaris 8 was a security-focused version of the Solaris Unix operating system. Aimed primarily at the government computing sector, Trusted Solaris adds detailed auditing of all tasks, pluggable authentication, mandatory access control, additional physical authentication devices, and fine-grained access control (FGAC). Versions of Trusted Solaris through version 8 are Common Criteria certified.
Trusted Solaris Version 8 received the EAL4 certification level, augmented by a number of protection profiles. See Evaluation Assurance Level for an explanation of the levels.
BAE Systems' STOP
BAE Systems' STOP version 6.0.E received an EAL4+ certification in April 2004, and the 6.1.E version received an EAL5+ certification in March 2005. STOP version 6.4 U4 received an EAL5+ certification in July 2008. Versions of STOP prior to STOP 6 have held B3 certifications under TCSEC. While STOP 6 is binary compatible with Linux, it does not derive from the Linux kernel.
Red Hat Enterprise Linux 5
Red Hat Enterprise Linux 5 achieved EAL4+ in June 2007.
Red Hat Enterprise Linux 6
Red Hat Enterprise Linux Version 6.2 on 32 bit x86 Architecture achieved EAL4+ in December 2014.
Red Hat Enterprise Linux Version 6.2 with KVM Virtualization for x86 Architectures achieved EAL4+ in October 2012.
Novell SUSE Linux Enterprise Server
Novell's SUSE Linux Enterprise Server 9 running on an IBM eServer was certified at CAPP/EAL4+ in February 2005. See News release at heise.de
Microsoft Windows
The following versions of Microsoft Windows have received EAL 4 Augmented ALC_FLR.3 certification:
Windows 2008 Server (64-bit), Enterprise (64-bit) and Datacenter, as well as Windows Vista Enterprise (both 32-bit and 64-bit) attained EAL 4 Augmented (colloquially referred to as EAL 4+) ALC_FLR.3 status in 2009.
Windows 2000 Server, Advanced Server, and Professional, each with Service Pack 3 and Q326886 Hotfix operating on the x86 platform were certified as CAPP/EAL 4 Augmented ALC_FLR.3 in October 2002. (This includes standard configurations as Domain Controller, Server in a Domain, Stand-alone Server, Workstation in a Domain, Stand-alone Workstation)
Windows XP Professional and Embedded editions, with Service Pack 2, and Windows Server 2003 Standard and Enterprise editions (32-bit and 64-bit), with Service Pack 1, were all certified in December 2005.
Mac OS X
Apple's Mac OS X and Mac OS X Server running 10.3.6 both with the Common Criteria Tools Package installed were certified at CAPP/EAL3 in January 2005.
Apple's Mac OS X and Mac OS X Server running the latest version 10.4.6 have not yet been fully evaluated; however, the Common Criteria Tools package is available.
GEMSOS
Gemini Multiprocessing Secure Operating System is a TCSEC A1 system that runs on x86 processor type COTS hardware.
OpenVMS and SEVMS
The SEVMS enhancement to VMS was a TCSEC B1/B3 system formerly of Digital Equipment Corporation (DEC). A standard OpenVMS installation is rated as TCSEC C2.
Green Hills INTEGRITY-178B
Green Hills Software's INTEGRITY-178B real-time operating system was certified at Common Criteria EAL6+ in September 2008, running on an embedded PowerPC processor on a Compact PCI card.
Unisys MCP
The Unisys MCP operating system includes an implementation of the DoD Orange Book C2 specification, the controlled access protection sub-level of discretionary protection. MCP/AS obtained the C2 rating in August, 1987.
Unisys OS 2200
The Unisys OS 2200 operating system includes an implementation of the DoD Orange Book B1, Labeled security protection level specification. OS 2200 first obtained a successful B1 evaluation in September, 1989.
Unisys maintained that evaluation until 1994 through the National Computer Security Center Rating Maintenance Phase (RAMP) of the Trusted Product Evaluation Program.
See also
Comparison of operating systems
Security-focused operating system
Trusted operating system
External links
The common criteria portal's products list has an "Operating Systems" category containing CC certification results
References
Operating system security
Computer security procedures
|
25379636
|
https://en.wikipedia.org/wiki/Activities%20of%20the%20Air%20Training%20Corps
|
Activities of the Air Training Corps
|
Within the framework of the training programme Air Training Corps cadets have the opportunity of taking part in many activities. On most Squadrons the only compulsory activities in the ATC year are attendance at various church parades, usually ATC Sunday (to celebrate the founding of the Air Training Corps on 5 February 1941, see below) and Remembrance Sunday. Many wings also insist that attending Wing Parade is compulsory.
Parade nights
Squadrons usually meet or parade during the evening, twice a week. Parade nights always begin and end with a parade. First parade is usually used as an opportunity for uniform inspection and to instruct cadets on the evening's activities, while final parade is usually used as an opportunity to inform cadets of upcoming events that they may wish (or may be required) to take part in. On some squadrons subscriptions, or 'subs,' are paid on a per-parade night basis. On other squadrons, subs are paid monthly either in person or by automated standing order. Subs vary from squadron to squadron and are set by the civilian committee in consultation with the squadron's Officer Commanding and other staff. Each night's activities, between first and final parade, are normally structured into two sessions with a break in between. The activities are normally pre-planned and may include lessons, drill, aviation-type activities such as aero-modelling, radio communications and map reading. Some squadrons will include physical training. Some nights are used for fieldcraft training or exercises – sometimes colloquially referred to as 'greens nights'.
Flying
Air Experience
Cadets from both the Air Training Corps and CCF(RAF) are offered opportunities to fly in light aircraft, gliders and other RAF and civil aircraft.
Cadets can take part in regular flights in the Grob Tutor at one of 12 Air Experience Flights (AEFs) around the UK. These flights typically last 30 minutes; as part of a structured syllabus of training it is usual for the cadet to be offered the chance of flying the aircraft or of experiencing aerobatics. The staff are all qualified service pilots, usually serving or retired RAF officers. Prior to the introduction of the Tutor, AEFs were equipped with Bulldogs as a temporary measure following the retirement of the Chipmunk in 1996. The Chipmunk was introduced in 1957 and during its service flew many thousands of cadets. Prior to the Chipmunk and established AEFs, cadet flying was a more ad hoc affair, although during the 1940s and 1950s, Airspeed Oxfords and Avro Ansons were used specifically to fly cadets. Cadets were most often used to manually pump the landing gear up or down when flying in the Ansons. Some Cadets who stand out from the rest may also get the opportunity to fly on a civil airliner or go on an overseas flight in an RAF Tri-Star, VC10 or Hercules. A few cadets have also had the opportunity to fly in a variety of other aircraft including fast jets and the Red Arrows. In general, every cadet will be given opportunities to fly during their time as an active member of an ATC or CCF squadron.
Gliding
Cadets can also undertake elementary flying training at a Volunteer Gliding Squadron (VGS) in Air Cadet gliders. The staff are all qualified service gliding instructors, usually made up of a mixture of regulars, reservists and Civilian Instructors.
At age 16 onwards, cadets can apply for gliding scholarships through their squadron staff. If selected, the cadet will receive up to 40 instructional launches on the Viking conventional glider (although if the student is close to solo standard it is not unusual for this limit to be exceeded). Cadets who successfully complete either of these programmes will be awarded blue wings. Cadets who show the required aptitude and ability may go on to perform a solo flight and be awarded silver wings. Further training is available to a select few cadets who show potential to progress onto Advanced Gliding Training (AGT) where on completion they are awarded gold wings. Usually these cadets will be enrolled as Flight Staff Cadets (FSCs) and further training to instructor categories is possible.
A FSC can achieve a Grade 2 award, which recognises them as a competent solo pilot, a Grade 1 award, allowing them to carry passengers in the air and perform the basic teaching tasks involved in the GIC courses, a C category instructors rating which is a probationary instructor who is qualified to teach the Gliding Scholarship course, and possibly a B category instructors rating which allows them to perform the duties of a 'B cat' explained below, with the exception that they cannot perform the role of duty instructor (DI) who is in control of the day's flying and decisions for the time that they are in that role.
Once a cadet reaches 20 years of age, he can no longer be a FSC and must become a Civilian (Gliding) Instructor, CGI, (although a FSC has this option at age 18) or a commissioned officer. Once either of these adult statuses has been gained, 'B cat' and 'A cat' is possible. B cats can carry out AGT flying training. An A cat is able to send first solos, whereas a B cat can only send subsequent solos. Both can perform SCT (Staff Continuation Training) to keep other members of staff well trained and current in their flying categories.
Marksmanship
Cadets at all levels of the Air Training Corps have the opportunity to participate in the sport of rifle shooting. Since the ATC was originally a recruiting organisation for the Royal Air Force it made good sense for marksmanship to be on the training syllabus. Shooting remains one of the most popular cadet activities.
Cadets have the opportunity of firing a variety of rifles on firing ranges. Cadets first train with and fire either the L144A1 Cadet Small Bore Target Rifle (CSBTR) .22 rifle or .177 air rifles. They can then progress to the L98A2GP (GP standing for Cadet General Purpose), a variant of the 5.56 mm L85A2, with the selector switch removed and locked on repetition. The 7.62 mm Parker Hale L81A2 Cadet Target Rifle is also used at long ranges for competition shooting such as ISCRM. Although safety has always been the main concern when shooting, with everything done by the book, recent years have seen the introduction of a wider range of training courses for staff involved in shooting to improve quality and safety even further. There are many competitions, from postal smallbore competitions to the yearly Cadet-Inter-Service-Skill-at-Arms meet (CISSAM, commonly pronounced "siss-amm") and the annual Inter-Service-Cadet-Rifle-Meet (ISCRM, commonly pronounced "iss-crum") at Bisley, the home of UK shooting.
There are currently four types of marksman award that a cadet can achieve, ranging from "Trained Shot", through "Marksman" and "Advanced Marksman", up to "Competition Marksman". To achieve these awards the cadet needs to undergo a special shooting "marksman" practice and then achieve a high enough qualifying score depending on the award specified.
The top 100 cadets in the ISCRM competition are awarded the prestigious "Cadet 100" marksman award, and the top 50 cadets in the CISSAM competition are awarded the equally prestigious "Cadet 50" award.
Drill
The Air Cadets, as a uniformed youth organisation, sets itself and its members very high standards, including dress and behaviour. Drill is a vital part of encouraging teamwork. All ATC squadrons practise drill as a means of instilling discipline and teamwork, and as a means for officers and NCOs to develop the ability to command and control. It is also used in formal parades, for moving around military bases, and for moving cadets in a smart, uniform manner. There are also drill competitions: inter-squadron, inter-wing, and inter-region exhibition drill competitions. Air Cadet drill is taken from the RAF Drill Manual (AP818). All drill instruction should be conducted by a qualified Drill Instructor (DI); however, as not all units have access to a DI, other WOs and SNCOs (ATC) will assume this responsibility. More often than not, cadet NCOs will instruct drill within their squadron, and drill competition squads must consist of only cadets, led by a cadet drill co-ordinator.
Cadets participate in various forms of drill, some of which include:
Static Drill
Basic Drill - Quick & Slow Time
Banner Drill
Ceremonial Parades
Band Drill
Rifle Drill
Drill & discipline is the responsibility of the WOs and/or NCOs on a squadron. Once a cadet has gained a few years of experience and has attained NCO rank, the cadet will pass on knowledge and experience to other cadets, such as instructing cadets how to participate in a drill squad, taking charge of a drill squad or flight or even taking a major part in ceremonial drill such as a Standard Bearer at Remembrance Day Parades. Some Wings run Drill courses in order to improve NCO drill.
Adventure Training
Adventure Training is defined as:
"Challenging outdoor training for Personnel in specified adventurous activities, involving controlled exposure to risk, to develop leadership, teamwork, physical fitness, moral and physical courage, among other personal attributes and skills vital to operational capability."
It is an important part of ATC activities and can help develop teamwork as well as leadership skills.
Within the ATC there are many opportunities to take part in adventure training, such as hillwalking, canoeing, kayaking, camouflage and concealment expeditions, hiking, and camping. All activities of this kind are supervised by appropriately qualified staff (Mountain Leader for Hill walking, British Canoe Union (BCU) instructors for canoeing and kayaking). There are also nationally run courses such as Parachuting, Basic Winter Training and Nordic Skiing to name a few. Adventure training can take place as part of regular squadron parade nights, weekend and week-long centres. There are also two national ATC adventure training camps. NACATC (National Air Cadet Adventure Training Centre) Llanbedr in Snowdonia and NACATC Windermere in the English Lake District. Here cadets stay for a week participating in various activities in adventure training. There is a wide-ranging Adventure Training syllabus in the ACO – although specifics depend upon the squadron.
Camps
Most camps are run at wing level and are usually 1 week in duration. The activities at these camps vary. Most Wings will hold 1 or 2 annual camps that consist primarily of "blues" activities which could include museum visits, drill and flying or "greens" camps which consist more of adventure training, climbing, shooting and flying.
There are also overseas camps, usually in Germany, Gibraltar or Cyprus. These camps tend to include museum visits and local activities. Most camps carry a small cost to cover messing (food) while other costs are subsidised by a cadet's monthly payments and by the MOD.
Climbing
Climbing is a physical challenge for the cadet, and also helps develop concentration and judgement as well as Teamwork. Many squadrons go on climbing trips regularly – a few even have their own climbing walls. All climbing is supervised by professionally qualified instructors (either internal or external staff members).
Fieldcraft
Fieldcraft is an exciting part of any squadron's training programme, and the promise of a good exercise is always guaranteed to get good attendance. Fieldcraft is, to put it simply, the art of living and moving in the field. Although the ACO is generally focused on different activities, fieldcraft does play a part in most squadrons' training programmes.
Fieldcraft is taught from a single manual, common to all squadrons, so the basic lessons are very similar across the ATC; however, 'Consolidated Practical Training' (CPT) and full exercises differ greatly depending on local resources, staffing and skill levels. Exercises and CPT place emphasis on different aspects of fieldcraft – some might require a team to move slowly and quietly while approaching an 'enemy' installation, while others require speed as well as stealth, and a quick decision on how much of one to trade off against the other.
As of 2019, the ATC has introduced blank-firing and pyrotechnics into its fieldcraft syllabus; an activity that was previously reserved for advanced level courses such as the Junior Leaders course.
A generally acknowledged advantage of fieldcraft exercises is that they force cadets to use their initiative. A relatively junior member of the squadron could find themselves in a decision-making position. For this reason, fieldcraft is often used by squadrons as a method of assessing cadets' leadership qualities, as it forces cadets to make quick decisions and, perhaps, to lead a team effectively, even when they are unsure of exactly what is going on or what they are supposed to be doing. Consequently, fieldcraft forms the core of the ATC's Junior Leaders course.
Leadership training
Leadership training is an important part of many squadrons' training programmes, with training available at higher levels too. Most wings run NCO courses (also run on a regional basis), designed to help newly promoted NCOs to perform their duties well, or to train those eligible for promotion; these are normally two days in length.
There are also a number of courses run centrally by the ATC. These include:
Air Cadet Leadership Course
This is run by the Combined Cadet Force (RAF) at RAF Cranwell. Successful completion of this leads to the award of the Cadet Leadership badge.
Air Cadet Leadership Courses
In the Air Cadets there are four levels of leadership training: Blue, Bronze, Silver and Gold. With each stage the leadership tasks become more difficult and more knowledge is expected. Blue and Bronze can be taught and assessed on the squadron, and consist of a number of lessons spread out over the syllabus. Silver is taught and assessed at wing level, in a single day with multiple lessons. Gold is taught and assessed at Corps (national) level: Gold leadership, or the Air Cadet Leadership Course (ACLC), is a 10-day course that comprises taught lessons, practical lessons and demonstrations, multiple high-level command tasks and practical leadership exercises (PTEs), physical exercises intended to tire and strain the cadets (to see how they lead when tired, worn out and under stress), two nights of camping, and assessments. ACLC is taught at RAFC Cranwell by leadership experts.
Junior Leaders
Cadets over the age of 17 and holding at least the rank of Cadet Corporal can complete a leadership course called Junior Leaders, abbreviated to 'JL'. This course requires over a hundred hours of planning and a high degree of physical fitness. Upon completion, the cadet is awarded a maroon lanyard (which replaces the yellow Instructor Cadet lanyard on the cadet's uniform) and a green and Wedgwood blue DZ flash for wear on the DPM or MTP uniform, as well as qualifying for the ILM Level 3 diploma in Team Leading.
The course is run in three phases, split into eight separate training weekends and a 10-day test phase, which cover the areas listed below. The course demands the highest commitment and dedication, as a collection of the ATC's best cadet NCOs tirelessly and continuously challenge themselves from weekend to weekend, all with the goal of earning the all-important Junior Leaders DZ flash and maroon lanyard.
Some JLs, on completing the course, return to the following year's course as Qualified Junior Leaders (QJLs).
Phase one
Core Skills
Management
Life Skills
Armed Forces Knowledge
L85 Weapon Handling
Interview skills
Social skills
Public speaking
Project Management
Elementary infantry tactics
Fitness
Teamwork
Phase two
Tactics and Leadership Development (TLD). During this phase Cadets have to be able to use their skills in real life fieldcraft scenarios and receive coaching by members of the Regular and Reserve Forces. The emphasis is on leadership under high pressure, within the context of combat scenarios.
Phase three
10 days at a graduation camp (eight days field exercise, followed by a day of R&R and then a day of awards with a graduation dinner). All participants have to have a twelve-hour period to lead a section. It is during these days that the skills and knowledge gained by the cadet over the previous months is put to the test.
Extra assessments
The Junior Leaders are also assessed on a number of other elements. At the end of Phase One, they are tested on their knowledge of air power, which they have to study beforehand. They also have to complete a presentation on a pre-chosen topic (for JL Course XV, the topic was World War I), pass an L85 weapons handling test, and remain physically fit throughout the whole course. By the end of Phase Two, they must also have submitted the workbook for the ILM Level 3 diploma. Finally, by the end of Phase Three, they must have raised at least £250 for the John Thornton Young Achievers Foundation (JTYAF).
Duke of Edinburgh's Award
The Air Training Corps is the single largest operating authority of the Duke of Edinburgh's Award system and celebrated its 50th year of providing this opportunity to its cadets in 2006.
The Duke of Edinburgh's Award Scheme is a voluntary, non-competitive programme of practical, cultural and adventurous activities for young people aged 14–25.
The Award programme consists of three levels, Bronze, Silver and Gold. Each has differing criteria for entry and the level of achievement necessary to complete each award.
Air Cadets who meet the age criteria can join the award scheme.
Each award is broken down into four areas (five for gold) which participants must complete successfully to receive their award. These are:
Service
Helping others in the local community.
Expeditions
Training for, and planning, a journey.
Skills
Demonstrate ability in almost any hobby, skill or interest
Physical Recreation
Sport, dance and fitness.
Residential Project (Gold Award only)
A purposeful enterprise with young people not previously known to the participant.
Cadets are often encouraged to achieve the Bronze, Silver and Gold awards as they progress through their cadet careers. Some cadets aged 16 or over were formerly able to participate in the Duke of Edinburgh's Millennium Volunteers Award; this has since been taken over by another authority, and whether cadets will be able to undertake it is under review.
The Award is widely recognised by employers as it helps demonstrate that award holders are keen to take on new challenges, have a higher level of self-confidence than many of their peers, leadership qualities and the experience of teamwork.
Sport
Sport plays a key part in the activities of every squadron. Seven sports are played competitively between squadrons. Cadets who show talent can be selected to represent their Wing, Region or the Corps in competitive matches; these cadets are awarded wing, regional or corps 'Blues'. The main sports played are:
Rugby Union
Hockey
Netball
Association Football
Swimming
Athletics
Cross-country running
Orienteering
Other sports are also played, sometimes in matches between squadrons, including volleyball, five-a-side football, table tennis, etc. Cadets also use various sports to take part in the physical recreation section of the Duke of Edinburgh's Award. Orienteering in the ATC only came about in 2006, when cadets from the different wings began attending the cadet orienteering championships.
Various units send two teams to the annual Nijmegen Vierdaagse Marches where on successful completion of the event they are awarded a medal.
Qualified Aerospace Instructor’s Course (QAIC)
QAIC aims to “deliver an aerospace course to senior cadets of the ACO that combines academic and synthetic training in aerospace based subjects combined with personal development training”. The course is split into QAIC North, at RAF Linton-on-Ouse and QAIC South at MoD Boscombe Down.
The course consists of one training weekend per month over a six month period, followed by an examination and graduation week. During the course, cadets will learn how to plan and run an aerospace camp, and will undertake modules in Aviation Studies, Leadership, Air Power, Aerodynamics, Air Traffic Control, Navigation, Instructional Technique, RT and Basic Synthetic Flight Training.
Communications
An extensive range of communication training is offered where appropriately skilled instructors and equipment are available. This can range from hand-held radio operating procedures to long-distance HF radio and networked digital communication, and even encompasses publishing online.
The Basic 'blue' Radio Certificate is the first step, followed by the Full Radio Operator 'bronze' Certificate. These qualifications have been part of the curriculum since 2000. Cadets are then encouraged to pursue this training further across a range of mediums and technologies. Once a sufficiently broad spectrum of skills have been mastered and validated by the Wing Radio Communications Officer the cadet is awarded the Air Cadet Communicator Certificate and the Communicator Badge, which is worn on the brassard.
Communication training provides valuable practical lessons in information handling and management, develops interpersonal skills and meets one of the Corps' prime objectives: 'providing training useful in both civilian and military life'.
Cyber Security
An extensive curriculum of cyber security courses is available to cadets. The 'blue' Cyber Security award is given jointly with the Blue Radio Operator certificate.
The cyber security courses range from basic anti-virus protection to large-scale hacking programmes. Most courses take place at the squadron headquarters; however, the more advanced courses take place at No. 2 Radio School, RAF Cosford.
Community volunteering
Cadets often volunteer to help at various national and local events. For their services, a small payment is usually offered to their squadron's funds. Typical examples of such work include car parking duties at events and delivering copies of Gateway Magazine to RAF married quarters.
The largest example of cadets involved in volunteer work is at the Royal International Air Tattoo, an annual air display held at RAF Fairford. Each year several hundred air cadets volunteer to stay on the base in temporary accommodation. During the course of the event they help with duties such as selling programmes, crowd control and clearing litter.
Band
Members of squadron bands may be entitled to wear specific band badges, subject to passing the appropriate assessment as per ACP 1812.
A Drummer's badge is a blue, bronze, silver or gold drum, displayed in the middle of the brassard.
A Piper's badge depicts a blue, bronze, silver or gold set of pipes, again displayed in the middle of the brassard.
A Buglers badge depicts two blue, bronze, silver or gold crossed trumpets, displayed in the middle of the brassard.
A Bandsman's badge is a blue, bronze, silver or gold bell lyre, displayed in the middle of the brassard. It is given to any other band member who does not play drums, bagpipes or bugle.
The Pipe Major's badge, composed of four inverted chevrons surmounted by a set of bagpipes, is not permitted to be worn at any level. However, the standard Royal Air Force blue Drum Major rank slides, consisting of four inverted chevrons surmounted by a drum, may be worn by Drum Majors when acting as such.
Music camps
There are also specific music camps, to which cadets of musical proficiency apply; they are selected according to their musical skill (grades) and other qualities. About 35–40 cadets are selected each year. The annual national Air Cadet music camp is held at RAF College Cranwell, HQ of the ATC. Upon attending this camp, cadets receive a gold-coloured band badge to replace the silver-coloured badges worn by ATC band members.
The National Concert Band of the Air Cadets, composed of attendees of the National camp, has recently performed at some very prestigious events. These include the Royal International Air Tattoo (RIAT), held at RAF Fairford, and a Garden Party at Buckingham Palace, where the band played the National Anthem for the arrival of HRH Prince Charles and Camilla.
The band has also performed at The Mansion House, London for the Royal Centenary Banquet of the Air League in the presence of distinguished guests such as the Lord Mayor of London, and HRH Princess Royal.
Towards the second half of 2008, the ACO Music Services agreed to establish a corps marching band, formed of cadets from all 6 regions throughout the Air Training Corps. The first National Marching Band camp was held in October 2008 at Browndown Battery, with a performance being made in front of HMS Victory.
The National Marching Band of the Air Cadets now uses Fort Blockhouse as its training ground. The band's most recent performances have been at the Royal Air Force Museum London, on the former Hendon Aerodrome. On 13 July 2010, the 72 cadets forming the band marched down Pall Mall and into Buckingham Palace in a contingent featuring 575 other Air Cadets, celebrating the 150th anniversary of the cadet forces.
First aid
Many squadrons offer a number of first aid courses, such as the St John Ambulance Youth First Aid course. Courses may be provided by individual squadron units, or by the wings and regions. The course can be completed over a weekend, or over a series of parade nights. Either way, the course is assessed by a practical exam, where cadets have to deal with three situations: a conscious, breathing casualty; an unconscious, breathing casualty; and an unconscious, non-breathing casualty, involving CPR on a Resusci Anne mannequin.
A series of first aid topics are covered during the course, such as fainting, bleeding, head injuries, and bites and stings. These are taught by qualified staff, often qualified to the level of First Aid at Work. Upon completion, cadets receive a red Youth First Aid badge for sewing onto the brassard as well as a certificate. Some squadrons also offer the 'Heartstart' course, a basic first aid course in Emergency Life Support coordinated by the British Heart Foundation.
In addition to the Youth First Aid course, some cadets have the opportunity to undertake the St John Ambulance Activity First Aid Course, a much more detailed course for more senior cadets over the age of sixteen. Upon completing this course cadets will receive a Silver Activity First Aid badge for sewing onto the brassard. In the case a cadet already wears a Young Lifesaver Plus badge, the Activity First Aid badge should be sewn in its place. Completion of the Activity First Aid Course trains cadets to the level of first aid required for many adult 'outdoor' qualifications such as the Mountain Leader Award. The qualification also makes it possible for cadets to teach the Youth First Aid course to less experienced cadets. The Progressive Training Syllabus has also introduced a Blue first aid badge on completion of the British Heart Foundation Heartstart Scheme and a gold 'instructor' first aid badge.
Other awards
Cadets can also qualify for various other BTEC awards through the training that is carried out at their squadrons. There are many additional courses and awards that can be gained.
The recognised qualifications are:
BTEC Level 2 Diploma in Aviation Studies for Air Cadets - equivalent to 4 GCSE A-C grade (administered by HQAC).
BTEC Level 2 Extended Certificate in Aviation Studies for Air Cadets - equivalent to 2 GCSE A-C grade (administered by HQAC)
BTEC First Diploma in Public Services - equivalent to 4 GCSEs A-C grades (administered by CVQO).
BTEC First Diploma in Music - equivalent to 4 GCSEs A-C grades (administered by CVQO).
BTEC Certificate in Aviation Studies - equivalent to 2 GCSEs A-C grades (administered by HQAC)
ILM Certificate in Team Leading - Level 2 (administered by CVQO).
References
Royal Air Force Air Cadets
Air Cadet organisations
|
28892349
|
https://en.wikipedia.org/wiki/KTechLab
|
KTechLab
|
KTechLab is an IDE for electronic and PIC microcontroller circuit design and simulation; it is a circuit designer with auto-routing and a simulator of common electronic components and logic elements.
KTechLab is free and open-source software licensed under the terms of the GNU GPL.
History
KTechLab was first developed by David Saxton, who worked on it until 2007. The design ideas and a lot of the current code have been developed by him. He released various versions, up to version 0.3.6.
When David Saxton stated that he wouldn't be able to continue developing the software, KTechLab stalled for a while before Julian Bäume, Jason Lucas, Zoltan Padrah, Alan Grimes and several others continued his work, releasing version 0.3.7, with more components and bug fixes.
In January 2019, KTechLab was ported to Qt and KDELibs4. The priority then changed to porting KTechLab to Qt5 and KF5, which was accomplished with version 0.50.0.
See also
Comparison of EDA software
List of free electronics circuit simulators
References
External links
Note that, as of 2021-06-22, the KDE git repository (https://invent.kde.org/sdk/ktechlab.git, updated two weeks prior and containing GitHub's latest commit e0bb9ff) is more recent than the GitHub git repository (https://github.com/ktechlab/ktechlab.git, last updated six months prior).
KTechLab on KDE Community Wiki
KTechlab users guide
KDE software
Free electronic design automation software
Electronic design automation software for Linux
Electronic circuit simulators
Engineering software that uses Qt
Free simulation software
|
41595520
|
https://en.wikipedia.org/wiki/2014%20in%20Bellator%20MMA
|
2014 in Bellator MMA
|
2014 in Bellator MMA was the tenth season for Bellator MMA, a mixed martial arts promotion. It began on February 28, 2014 and aired on Spike TV.
The season included tournaments for the Heavyweight, Welterweight, Featherweight, Light Heavyweight, and Lightweight weight classes. At the end of the season, Bellator held its first pay-per-view event, Bellator 120, on May 17, 2014.
Bellator 110
Bellator 110 took place on February 28, 2014 at the Mohegan Sun in Uncasville, Connecticut. The event aired live in prime time on Spike TV.
Background
Bellator 110 featured the opening round of the Light Heavyweight and Featherweight tournament.
A bout between Josh Diekmann and Chris Birchler was initially planned for this card, but later cancelled.
Pat Schultz was scheduled to face Dave Roberts in a Light Heavyweight bout on this card. However, on the day of the weigh ins, Roberts came in overweight at 212 pounds and the bout was eventually removed from the card.
Results
Bellator 111
Bellator 111 took place on March 7, 2014 at the WinStar World Casino in Thackerville, Oklahoma. The event aired live in prime time on Spike TV.
Background
Bellator 111 was to feature a Bellator Bantamweight Championship bout between Eduardo Dantas and 2013 Summer Series Tournament winner Rafael Silva. However, Silva was forced to pull out of the bout due to injury, and replaced by Anthony Leone.
The card also featured the opening round of the Heavyweight tournament.
Results
Bellator 112
Bellator 112 took place on March 14, 2014 at The Horseshoe in Hammond, Indiana. The event aired live in prime time on Spike TV.
Background
Bellator 112 featured the first Bellator Featherweight Championship title defense for Daniel Straus. He faced former champion Pat Curran in a rematch. This move drew criticism for Bellator from MMA pundits and fans, as many felt that Curran, who had lost his previous match to Straus and had not won a tournament to earn a rematch, had not done enough to earn a title shot over waiting tournament winners Patrício Pitbull and Magomedrasul Khasbulaev.
The card also featured the opening round of the Welterweight tournament. On March 8, 2014, it was announced that War Machine, Mark Scanlon, and Joe Riggs pulled out of their tournament bouts and were replaced by Paul Bradley, Nathan Coy, and Cristiano Souza.
Results
Bellator 113
Bellator 113 took place on March 21, 2014 at the Kansas Star Arena in Mulvane, Kansas. The event aired live in prime time on Spike TV.
Background
Bellator 113 featured a Bellator Light Heavyweight Championship unification bout between champion Attila Vegh and interim champion Emanuel Newton.
The card also featured the opening round of the Lightweight tournament.
UK fighter Terry Etim was forced to withdraw from the Lightweight tournament due to an ACL injury. He was replaced by Tim Welch. Donnie Bell, Welch's previous opponent, instead faced Eric Wisely.
Brian Rogers was scheduled to face Gary Tapusoa in a Middleweight bout. However, Tapusoa was unable to make the weight requirements and the fight was cancelled.
Results
Bellator 114
Bellator 114 took place on March 28, 2014 at the Maverik Center in West Valley City, Utah. The event aired live in prime time on Spike TV.
Background
Bellator 114 featured the third Bellator Middleweight Championship title defense for Alexander Shlemenko as he faced Season 9 tournament winner Brennan Ward.
Ron Keslar and Jordan Smith were scheduled to face each other in a welterweight match; however, the bout did not materialize due to undisclosed reasons.
Aaron Wilkinson and Michael Arrant were also scheduled to face each other in a welterweight match, but it was cancelled.
Results
Bellator 115
Bellator 115 took place on April 4, 2014 at the Reno Events Center in Reno, Nevada. The event aired live in prime time on Spike TV.
Background
Bellator 115 featured the first Bellator Heavyweight Championship title defense for Vitaly Minakov as he took on Season 9 tournament winner Cheick Kongo.
Doug Marshall was originally announced as one of the participants in the Middleweight tournament. However, he was pulled from the bout due to a suspension and replaced by Jeremy Kimball, who was then scheduled to face Marshall's original opponent, Dan Cramer, in a Middleweight Tournament semifinal. Kimball, however, badly missed weight and was pulled from the bout.
Andrey Koreshkov and Sam Oropeza were scheduled to meet in the Welterweight Tournament Semifinals on this card. However, on the day of the weigh ins, the bout was cancelled due to Koreshkov having flu-like symptoms.
Additionally, a lightweight bout between Jimmy Jones and Rudy Morales that was scheduled to take place at World Series of Fighting 9 was rescheduled for this card.
Results
Bellator 116
Bellator 116 took place on April 11, 2014 at the Pechanga Resort & Casino in Temecula, California. The event aired live in prime time on Spike TV.
Background
Bellator 116 featured the semifinals of the Season 10 Heavyweight Tournament and one of the semifinals for the Middleweight tournament.
The event also featured the final fight for Vladimir Matyushenko, as he retired from MMA after his fight.
Results
Bellator 117
Bellator 117 took place on April 18, 2014 at the Mid-American Center in Council Bluffs, Iowa. The event aired live in prime time on Spike TV.
Background
Bellator 117 featured a bout between Douglas Lima and Rick Hawn for the vacant Bellator Welterweight title as well as the semifinals of the Season 10 Lightweight Tournament.
Results
Bellator 118
Bellator 118 took place on May 2, 2014 in Revel Atlantic City, New Jersey. The event aired live in prime time on Spike TV.
Background
Eduardo Dantas was originally scheduled to defend his Bantamweight title against Joe Warren in the main event. However, on April 26, 2014 it was revealed that Dantas had suffered a head injury and withdrew from the fight. Warren was instead to face Rafael Silva in an interim Bantamweight title fight. Silva, however, missed weight and the promotion made the interim title available only if Warren were to win.
The Welterweight semifinals bout between Andrey Koreshkov and Sam Oropeza originally set for Bellator 115 was rescheduled to this card. Oropeza was eventually replaced by Justin Baesman.
Results
Bellator 119
Bellator 119 took place on May 9, 2014 in Rama, Ontario, Canada. The event aired live in prime time on Spike TV.
Background
Bellator 119 was originally set to feature the Bellator season 10 Heavyweight tournament final. However the Bellator season 10 Featherweight tournament final headlined the card instead.
The Middleweight tournament final of Brett Cooper against Brandon Halsey was originally scheduled for this event, but was cancelled when Cooper injured himself in training.
Fabricio Guerreiro and Shahbulat Shamhalaev were also scheduled to face each other on this event, but that bout was moved to the following week's event.
John Alessio was originally scheduled to face Guillaume DeLorenzi at the event, however, DeLorenzi withdrew from the bout due to injury and was replaced by Eric Wisely.
Results
Bellator 120
Bellator 120 took place on May 17, 2014.
Background
The event served as Bellator MMA's inaugural pay-per-view event.
Bellator 120 was expected to be headlined by Eddie Alvarez defending his Bellator Lightweight Championship against the former champion Michael Chandler in a trilogy fight. However, a week before the fight, it was announced that Alvarez had suffered a concussion and was forced to pull out of the fight. Chandler instead faced Will Brooks for the Interim Lightweight title.
Tito Ortiz made his Bellator MMA debut at this event against Bellator Middleweight Champion Alexander Shlemenko in a Light Heavyweight bout.
The Season 10 Lightweight tournament final between Patricky Freire and Marcin Held was originally scheduled to take place on the Spike TV portion of this event. However, Freire was injured and the bout was pushed back to another card.
Results
Tournaments
Heavyweight tournament bracket
Light Heavyweight tournament bracket
Middleweight tournament bracket
(*) Replaced Jeremy Kimball vs. Dan Cramer
Welterweight tournament bracket
(*) Replaced Oropeza
Lightweight tournament bracket
Featherweight tournament bracket
Bellator 121
Bellator 121 took place on June 6, 2014 at the WinStar World Casino in Thackerville, Oklahoma. The event aired live in prime time on Spike TV.
Background
Bellator 121 was to feature the rematch between Pat Curran and Patricio Freire for Bellator Featherweight Championship. However, on May 21, it was announced that Curran had pulled out of the bout due to a calf injury.
Results
Bellator 122
Bellator 122 took place on July 25, 2014 at the Pechanga Resort & Casino in Temecula, California. The event aired live in prime time on Spike TV.
Background
Bellator 122 featured the Season 10 Middleweight and Welterweight Tournament Finals. A Heavyweight bout between Dmitrity Sosnovskiy and Manny Lara was cancelled due to Lara falling ill.
This was also the first show under the management of new President Scott Coker.
Results
Tournaments
Light Heavyweight tournament bracket
Bellator 123
Bellator 123 took place on September 5, 2014 at the Mohegan Sun Arena in Uncasville, Connecticut. The event aired live in prime time on Spike TV.
Background
Bellator 123 was headlined by a Featherweight Championship rematch between Pat Curran and Patricio "Pitbull" Freire. The two originally met in a closely contested fight at Bellator 85 on January 17, 2013, with Curran winning the bout via split decision. The rematch was initially scheduled to take place at Bellator 121, however, it was announced on May 21, 2014 that Curran had pulled out of the bout due to a calf injury.
This event marked the first time Bellator MMA and their rival the Ultimate Fighting Championship have had live shows go against each other. Additionally, both were held in the same state in venues located within miles of each other.
In the night's co-main event, former tournament champion and former Strikeforce champion Muhammed Lawal was originally scheduled to face Tom DeBlass. However, on August 11, it was revealed that DeBlass had suffered a knee injury and was replaced by Marcus Sursa. In turn, Sursa was also injured and Lawal instead faced Dustin Jacoby.
Results
Bellator 124
Bellator 124 took place on September 12, 2014 at the Compuware Arena in Plymouth Township, Michigan. The event aired live in prime time on Spike TV.
Background
Bellator 124 was headlined by a Light Heavyweight Championship match between champion Emanuel Newton and Joey Beltran.
The event also featured the Bellator 2014 Light Heavyweight Tournament Final between Liam McGeary and Kelly Anundson in the co-main event, to determine the next title challenger.
Results
Bellator 125
Bellator 125 took place on September 19, 2014 at the Save Mart Center in Fresno, California. The event aired live in prime time on Spike TV.
Background
Bellator 125 was headlined by a Middleweight match between former kickboxing champion and Bellator newcomer Melvin Manhoef facing former Bellator tournament winner, and former WEC champion, Doug Marshall.
Four time Bellator tournament veteran Brian Rogers was originally scheduled to face former WEC champion James Irvin in the co-main event of this card. However, on September 1, it was revealed that Irvin was injured and Rogers would instead face season eight tournament finalist Brett Cooper. Then, on September 9, it was announced that Cooper would have to pull out of the match due to a back injury; Rogers instead faced promotional newcomer Rafael Carvalho.
Results
Bellator 126
Bellator 126 took place on September 26, 2014 at the Grand Canyon University Arena in Phoenix, Arizona. The event aired live in prime time on Spike TV.
Background
Bellator 126 was headlined by a Middleweight Championship bout between champion Alexander Shlemenko and Season 10 Middleweight Tournament winner Brandon Halsey.
The card also featured the final bout of the Season 10 Lightweight Tournament between Patricky Freire and Marcin Held.
Results
Bellator 127
Bellator 127 took place on October 3, 2014 at the Pechanga Resort & Casino in Temecula, California. The event aired live in prime time on Spike TV.
Background
The event was headlined by featherweight match between former Bellator Featherweight Champion Daniel Mason-Straus and season nine tournament finalist Justin Wilcox.
The co-main event was supposed to feature a Welterweight bout between former Dream welterweight champion Marius Zaromskis and former WEC champion Karo Parisyan. However, on September 24 it was announced that Fernando Gonzalez replaced Marius Zaromskis due to an undisclosed injury. Fernando's original opponent Justin Baesman faced newcomer John Mercurio.
Results
Bellator 128
Bellator 128 took place on October 10, 2014 at the Winstar World Casino in Thackerville, Oklahoma. The event aired live in prime time on Spike TV.
Background
Bellator 128 was headlined by a Bellator Bantamweight Championship fight between champion Eduardo Dantas and interim champion Joe Warren.
A Lightweight contest between Alexander Sarnavskiy and John Gunderson was scheduled to take place on this card. However, due to Gunderson pulling out of the bout and retiring, Derek Campos stepped in as a replacement. Campos suffered an injury and was forced out of the fight, Sarnavskiy faced promotional newcomer Dakota Cochrane.
Results
Bellator 129
Bellator 129 took place on October 17, 2014 at the Mid-America Center in Council Bluffs, Iowa. The event aired live in prime time on Spike TV.
Background
Bellator 129 was headlined by a Welterweight fight between Iowa natives and UFC veterans Josh Neer and Paul Bradley.
In the co-main event, Houston Alexander was expected to face Pride FC veteran James Thompson in a Heavyweight bout. However, on October 10, 2014, it was announced that Thompson had been pulled from the fight due to injury. Alexander instead faced Virgil Zwicker.
Results
Bellator 2014 Monster Energy Cup
The Bellator 2014 Monster Energy Cup took place on October 18, 2014 at the Sam Boyd Stadium in Whitney, Nevada.
Background
On October 15, 2014, Bellator announced that three fights would take place during the "Party in the Pits" pre-race festivities of the Monster Energy Cup.
Results
Bellator 130
Bellator 130: Newton vs. Vassell took place on October 24, 2014 at the Kansas Star Arena in Mulvane, Kansas. The event aired live in prime time on Spike TV.
Background
Bellator 130 was headlined by a Light Heavyweight Championship fight between Emanuel Newton and Linton Vassell.
Results
Bellator 131
Bellator 131 took place on November 15, 2014 at the Valley View Casino Center in San Diego, California. The event aired live in prime time on Spike TV.
Background
The event was announced during the Bellator Season 11 debut on September 5, 2014. It served as the season finale.
Bellator President Scott Coker announced the main event would feature a grudge match between two former top UFC light heavyweights with Tito Ortiz taking on the newly signed Stephan Bonnar.
Additionally, it was announced that the co-main event would be a rematch between current interim lightweight champion Will Brooks and former undisputed champion Michael Chandler, for the vacant world title.
Muhammed Lawal was originally scheduled to face Tom DeBlass on this card. However, on November 1, it was announced that DeBlass had suffered a cut during training and had to withdraw from the bout. Lawal instead faced Joe Vedepo.
This event was the highest rated in Bellator's history, garnering an average viewership of 1.2 million television viewers in the U.S. with a peak of over 2 million viewers in the main event.
Results
References
External links
Bellator
2014 American television seasons
2014 in mixed martial arts
Bellator MMA events
|
55707814
|
https://en.wikipedia.org/wiki/Streckfus%20Steamers
|
Streckfus Steamers
|
Streckfus Steamers was a company started in 1910 by John Streckfus Sr. (1856–1925), born in Edgington, Illinois. He started a steam packet business in the 1880s, but transitioned his fleet to the river excursion business around the turn of the century. In 1907, he incorporated Streckfus Steamers to raise capital and expand his riverboat excursion business. A few years later, the firm acquired the Diamond Jo Line, a steamboat packet company.
The most active period started after the first World War. Bandleader Fate Marable recruited many musicians from New Orleans during this period, including Louis Armstrong. Streckfus Steamers expanded the number of excursion boats, acquired or converted larger boats, and hired more bands. After the death of the patriarch in 1925, the eldest son Joseph took over the company, and was assisted by his three brothers.
Family history
The principal of Streckfus Steamers was John Streckfus Sr., the son of Balthazar (1811–1881) and Anna Mary (Schaab) Streckfus, both immigrants to the United States from Bavaria. In 1850, the couple sailed for the United States with their two daughters, Barbara and Catherine. Before their ship arrived in New Orleans, Anna Mary gave birth to their first son, Michael. The Streckfus family eventually settled in Edgington, Illinois, but Balthazar later established his wagon shop in nearby Rock Island, Illinois in 1868. The family also had a grocery business.
John Streckfus married Theresa Bartemeier in 1880. Theresa bore nine children, and all of the surviving children worked on the riverboats. Balthazar had been commuting from Edgington to his shop in Rock Island. His sons built a house for him in the late-1860s to facilitate a shorter journey to work. The Streckfus House still stands at 908 4th Avenue (as of October 2017), and the brick Italianate house has been designated as a Rock Island Landmark.
John and Theresa Streckfus had four sons who were later licensed as captains: Joseph Leo (1887–1960), Roy Michael (1888–1968), John Nicholas (1891–1948), and Verne Walter (1895–1984). Joseph took over Streckfus Steamers in 1925 after the death of his father. Of this second Streckfus generation, he also was the most engaged with the music side of the business.
There are at least two descendants of the Streckfus family who are active as river boat captains, at least through 2005. At that time, Captain Lisa Streckfus piloted the Delta Queen on the Mississippi River. She is the daughter of riverboat captain, Bill Streckfus, and great-granddaughter of the family's first riverboat captain, John Streckfus Sr. Lisa's cousin, Sister Mary Manthey, is also a Mississippi River steamboat captain.
Packet service
John Streckfus bought his first steamboat in 1889 for $10,000. The Verne Swain, a small steamer measuring just 120 feet in length and 22 feet in width, was constructed in Stillwater, Minnesota at the Swain Shipyard. The Verne Swain ran every day with several stops between Davenport and Clinton, Iowa, making a three-hour one-way trip, then departed Clinton in the afternoon and returned to Davenport every evening. By 1891, Streckfus had acquired his own operator's license and the title of Captain, whereas before he had contracted for established operators to manage his steamers; later he earned an engineer's license. The same year, he bought his second steamboat, the Freddie, a triple-decked, 73-foot sternwheeler with a 16-foot beam. He transported freight and passengers on both the Mississippi and the Ohio Rivers, though eventually he ran his packets on the Mississippi, north of St. Louis. Though he gained a reputation for punctuality and efficiency, he complained about the meager profits his packets earned running freight on the rivers.
Early excursion service
By 1901, Streckfus changed his business model. Rather than using his slow paddle-wheelers to compete with the railroads for the freight business, he started transitioning to the excursion business. He tested this idea around 1900 when he installed a calliope on the City of Winona. The next year he increased his investment in the new venture with $25,000 in capital to convert a packet into a floating entertainment venue. According to his own design, Streckfus commissioned work on a 175-foot steamboat with a capacity to hold 2,000 passengers, sleeping berths for the crew and the entertainers, a 100 x 27 foot maple dance floor, a bar, a dining room, and electric lights. He named his first custom-built excursion boat the J.S. The Howard Shipyard of Jeffersonville, Indiana built the steamboat according to this new design.
The J.S. was the first steamboat in service on the Mississippi built especially for excursions. The 1901 excursions on the J.S. also correspond to the first regular dance bands hired by Streckfus. Though the J.S. spent much of its time in St. Louis and St. Paul, it tramped on the Mississippi and Ohio Rivers. While cruising the Mississippi near La Crosse, Wisconsin on the night of June 25, 1910, Streckfus lost his custom-built steamer, the J.S., to a fire allegedly ignited by a drunken and disorderly passenger. Throughout this period, Streckfus offered passenger service on his paddle-wheelers as part of the new business model, balancing his business between moving freight and moving people.
Reorganization
John Streckfus organized the Streckfus Steamers Line in order to raise capital for an expansion of his steamboat excursion business. This was a closely held company, accepting investments only from members of the Streckfus family. He had started as a freight hauler who had sold passenger tickets, but his new company's main business was the excursion trade, though he also accepted freight aboard his steamboats.
The Diamond Jo Line
Moving freight on steamboats had been a dying business for a few decades, but this presented an opportunity: since the packet business was unprofitable, many steamboat owners were motivated sellers. John Streckfus, who had seen his custom-built J.S. go up in flames, purchased a packet fleet from the Diamond Jo Line. He applied new capital raised by Streckfus Steamers to the purchase of four ships. These included the Dubuque and three damaged riverboats: Sidney, a 221-foot sternwheeler; St. Paul, a 300-foot side-wheeler; and another side-wheeler, the 264-foot Quincy. Included in this February 3, 1911 acquisition were docks, shipyards and warehouses. Streckfus Steamers paid $200,000 for all of these ships and land-based assets.
The purchase came at an inconvenient time: low water on the Mississippi River, caused by drought, often sidelined the erstwhile packets for the next five seasons, and Streckfus could not run regular excursions for about five years. He bided his time with more bond issues and stock sales while he and his sons converted the St. Paul to run excursions; by 1917 it tramped between St. Louis and St. Paul. By the 1920s, the Streckfus patriarch had four sons to captain his fleet: Joseph, Verne, Roy, and John Streckfus Jr.
SS St. Paul
John Streckfus chose for his first conversion the largest of the Diamond Jo steamers, the 300-foot St. Paul. The company ran the first excursion for the St. Paul in 1917, and the next year it tramped between St. Louis and its namesake city on the Mississippi. The cabin was fitted with electric lights and fans. In its third season, the steamer ran aground on a sandbar, though this may have been the only mishap of the season. During the 1920s, Streckfus tramped it a bit further south, between the Quad Cities and the Cape Girardeau, Missouri area. The next decade, the large steamer plied the Ohio River until its 1930 rebuild. Rechristened Senator, it ran excursions for just a few more years.
Dixie Belle
Though it is unclear whether Dixie Belle was a ship in the Streckfus Steamers inventory, the company operated it for two-and-a-half-hour cruises departing from and returning to the Canal Street dock in New Orleans, three nights per week, during the winter of 1919. Dixie Belle was a venue for "Fate Marable and His Jazz Maniacs" and the venue for Louis Armstrong's first engagement with Streckfus Steamers.
J.S. Deluxe
In 1919, Streckfus Steamers executed the second conversion from their Diamond Jo fleet, the packet Quincy. The company developed different ships for different markets, and the J.S. Deluxe catered to wealthy people from St. Louis. The company hired white musicians to perform on this steamer. J.S. Deluxe continued to serve the upscale market in the St. Louis area until the President took over in 1934.
Capitol
The Capitol was born from the old sternwheeler, the Dubuque. It did not require as much water depth as other ships in the fleet, so Streckfus Steamers put it into use on the Upper Mississippi, while it served local cruises at New Orleans in the winter.
Fate Marable performed on the Capitol starting in 1920, leading a band which included Louis Armstrong, Boyd Adkins, Norman Brashear, Warren "Baby" Dodds, David Jones, Henry Kimball, and Johnny St. Cyr.
Starting around 1924, the trumpeter Ed Allen led the Whispering Gold Band aboard the S.S. Capitol and stayed with Streckfus Steamers for about two years before moving to New York City. "Papa" Celestine brought his band to the Capitol around 1926. Sidney Desvigne, who had previously played cornet in Ed Allen's band aboard the Capitol, left Streckfus Steamers for two years to lead his own band on the Island Queen. He returned to Streckfus Steamers, this time as leader of Sidney Desvigne's S.S. Capitol Orchestra. Walter "Fats" Pinchon followed Sidney Desvigne to the Island Queen and back. Eventually, the conservatory-trained pianist headed his own group, the last New Orleans band to have regular employment with Streckfus Steamers.
Sidney
The Sidney is a steamboat first built in West Virginia between 1880 and 1881. On March 10, 1881, a breach in the steam line scalded fourteen people and killed four others. Diamond Jo Line acquired the steamer the next year for about $23,000, after which it ran the Mississippi River between St. Louis and St. Paul.
In 1911, Streckfus purchased the Sidney, a 221-foot sternwheeler, from the Diamond Jo Line after the steam packet had been damaged by rocks while cruising on the Mississippi River. After repairs and refitting, he assigned the Sidney to winter excursions in the New Orleans area for about a decade. After a 1921 rebuild, it was rechristened Washington. Louis Armstrong performed aboard Sidney; Erroll Garner performed aboard Washington.
Fate Marable started his first New Orleans band on the Sidney in 1918, starting his expanded responsibilities as bandleader and talent scout, duties he would continue until his retirement in 1940. He scouted and hired Louis Armstrong, as well as Warren “Baby” Dodds, George “Pops” Foster, and Johnny St. Cyr.
Later acquisitions
By the late 1930s, Streckfus Steamers had an inventory of aging steamboats with wooden hulls. The United States Coast Guard was enforcing stricter standards for riverboats, so the company built its last two excursion boats, the President and the SS Admiral, with steel hulls.
President
In 1933, Streckfus Steamers bought the steamboat Cincinnati, a steel-hulled packet built in 1924. Cincinnati, true to its name, ran freight between the Queen City and Louisville, Kentucky. The company installed twenty-four watertight compartments in the existing steel hull, rebuilt the superstructure in steel, and expanded it to five decks. The new excursion steamer was dubbed President, and Streckfus Steamers dispatched it up the Ohio River to serve Pittsburgh during the Depression. Eventually, President replaced the J.S. Deluxe for excursions catering to wealthy people in the St. Louis market. The ship commenced carrying excursion passengers in July 1934 out of St. Louis, with bands led by Fate Marable and Charlie Creath. The S.S. President could accommodate 3,100 passengers and continued service for many years after riverboat excursions diminished in popularity after World War II. It was a venue for the New Orleans Jazz Festival, and hosted performers such as Pete Fountain and Louis Cottrell's Dixieland Jazz Band. In 1944, Streckfus Steamers moved the President from St. Louis to New Orleans. The company overhauled the President's motive power, switching to diesel propulsion in 1978, before selling the ship to the New Orleans Steamboat Company in 1981.
SS Admiral
The SS Admiral was the first of the Streckfus fleet to be built with a metal superstructure on a steel hull. The ship was originally the Albatross, a railroad ferry built in 1907; Streckfus Steamers stripped it down to the steel hull and rebuilt it with a steel superstructure and an Art Deco finish. The company docked the ship on the Mississippi River in St. Louis, at the foot of Washington Avenue. SS Admiral commenced excursions in 1940, featuring an air-conditioned cabin and a large ballroom with maple flooring. The top deck, also known as the fifth deck, allowed guests close-up views of the riverbank sights through coin-operated telescopes. In 1973, the company removed the steam engine and converted SS Admiral to diesel power. Streckfus Steamers ran excursions on the SS Admiral through the 1978 season, and retired the ship in 1979 due to weakness in its hull. The company sold the ship to John E. Connelly in 1981.
Riverboat jazz
Early riverboat music
John Streckfus started hiring musicians in 1901, when he engaged a friend to scout talent, which resulted in the first live musical entertainment, an African-American trio from Des Moines playing banjo, guitar, and mandolin. By 1903, Streckfus employed a house band to play popular music, a quartet which included a drummer, trumpeter, violinist, and a pianist. Charles Mills was the piano player, an African-American performing with three white musicians. Mills remained with Streckfus until 1907, when he planned to seek musical opportunities in New York City. Mills told Fate Marable about his plans, and the seventeen-year-old piano player from Paducah, Kentucky solicited employment from an agent of the company when a Streckfus excursion boat docked in his hometown. Streckfus hired Marable to play a steam calliope and to play piano in the boat's dance bands. Marable first played piano for Streckfus on the J.S., playing in a duo with Emil Flindt, a white violinist. Marable continued as a performer on the company's flagship until its conflagration in 1910. The calliope was not just a musical instrument; it was an advertising medium for Streckfus Steamers. Its music carried for miles, announcing the presence of an excursion boat plying the river. People gathered at the docks, listening to the calliope, and some bought excursion tickets. Later, Streckfus allowed Marable to hire his own musicians. Around 1918, Marable assembled his own orchestra for the Sidney, including many from New Orleans: George "Pops" Foster (bass), Warren "Baby" Dodds (drums), Johnny St. Cyr (banjo), David Jones and Norman Mason (saxophone), Lorenzo Brashear (trombone), and Boyd Atkins (violin).
After World War I
John Streckfus Sr. and his two brothers were amateur musicians and formed specific ideas about what kind of music their guests would hear. Often, one of the brothers attended rehearsals, marking the tempo with a watch to ensure 60 beats per minute for the slow tunes, and 90 beats per minute for the fast ones. While Louis Armstrong played for Fate Marable's band, he observed Joseph Streckfus smiling, laughing, and tapping to the beat. However, another account indicates that Joseph Streckfus made music evaluations with considerations beyond his own sense of taste. According to trumpeter Henry "Red" Allen, Joseph Streckfus expected different tempi depending on where they played: St. Louis dancers liked a faster beat than the dancers in New Orleans.
John Streckfus demanded strict decorum on his steamships. Though he sold alcoholic drinks, he tolerated neither gambling nor drunkenness from his passengers or his musicians. Marable made a perfect fit as a bandleader since he enforced these rules, and he imposed the same exacting standards for studying, rehearsing, and playing music. Marable sometimes took extreme measures to make a point, as when he fired musicians. Sometimes he left a hatchet on a musician's chair, in order to let him know that "he gave them the axe." Another trick was telling the whole group (except for one musician) to come to rehearsal an hour early. Musicians on Streckfus Steamers did not achieve star status during their tenure. John Streckfus established a policy of standard wages. At one point, he offered band members $35 per week, plus room and board (or $65 per week without room and board). He lowered compensation in 1919 to $37.50 per week, not including room and board, albeit with much shorter work schedules. Seven years later, the company increased pay to $45 per week and added $5 weekly retention bonuses, but without room and board. A few exceptional musicians, such as Louis Armstrong, were allowed to improvise for a few bars; otherwise, the Streckfus family and Marable insisted that the performers play the arrangements as written. This rigidity prompted expressive and gifted musicians like Armstrong to advance their careers elsewhere.
In the period after World War I, John Streckfus followed the expansion of Jim Crow practices, segregating his musicians and his passengers.
In 1920, Streckfus Steamers began running Monday night cruises for African-American audiences out of St. Louis. On the other hand, according to Louis Armstrong, Fate Marable’s band was the first African American group to play music on the Mississippi riverboats.
References
Further reading
Meyer, Dolores (1967). "Excursion Steamboating on the Mississippi with Streckfus Steamers, Inc." (St. Louis: St. Louis University) unpublished dissertation. Available at the Herman T. Pott National Inland Waterways Library, a special collection sponsored by the St. Louis Mercantile Library Association.
External links
Riverboats and Jazz. Tulane University, Howard-Tilton Memorial Library.
The Streckfus Steamboat Line. Tulane University, Riverboats & Jazz.
The Streckfus Excursion Boats. Online Steamboat Museum
A Brief Look at American Riverboat Musical Styles. University of Arkansas-Little Rock
A Guide to the William F. and Betty Streckfus Carroll Collection. The St. Louis Mercantile Library Association, 2010.
J. S. Deluxe (steamboat) Indiana Memory Digital Collections
Entertainment companies established in 1910
Defunct transportation companies of the United States
American jazz
Steamboats of the Mississippi River
Steamboats of the Ohio River
Paddle steamers
River cruise companies
1910 establishments in Illinois
1978 disestablishments in Missouri
Entertainment companies disestablished in 1978
Transportation companies based in Illinois
|
55041925
|
https://en.wikipedia.org/wiki/Arc%20Symphony
|
Arc Symphony
|
Arc Symphony is an adventure video game developed by Matilde Park and Penelope Evans, and released on May 15, 2017, both as a browser game and in a downloadable version for Microsoft Windows, MacOS, and Linux. The player takes the role of a formerly active user of a Usenet newsgroup for a fictional Japanese role-playing game (JRPG), also titled Arc Symphony, and reads messages from the game's characters.
As part of the game's release, fake game boxes for the JRPG, in the style of those for PlayStation JRPGs, were created and given to the developers' friends, who shared photos of it on social media with comments pretending that the JRPG was a real game; additionally, a fake fan site for the JRPG was created to further the illusion that it was real. Critics liked the game and its marketing, calling them accurate to fan communities in the 1990s.
Overview
Arc Symphony is a text-based adventure game, and is presented as an old computer through which the player reads messages in a Usenet newsgroup dedicated to a fictional Japanese role-playing video game (JRPG) for the PlayStation game console, also titled Arc Symphony. The player takes the role of a formerly active user of the group, and begins the game by taking a personality quiz. Messages include discussions about the fictional Arc Symphony characters and writing, and about the newsgroup users' usernames. The characters the player interacts with include a couple who chat on IRC at the same time by using two phone lines, a new user who provokes people, and a university professor who wants to be called by his username rather than his real name when in the newsgroup.
Development and release
Arc Symphony was developed by Matilde Park and Penelope Evans using the game engine Twine. Both of them had prior experience with fan communities: Evans mentioned having been a member of message boards for the game The Sims 2 as a child and having nostalgic feelings for it, while Park said that although she did not miss old websites, bulletin board systems and mailing lists, they still were a part of her. Evans described the game's interactions as feeling like a real forum experience, saying that while people look at pixels on their screen, a real person is on the other side, and that both parties get to accept or reject the other, with the possibility of hurting them.
After the completion of the development, they thought about how to launch the game, and came up with the idea to put together fake game boxes for the fictional Arc Symphony, consisting of a PlayStation-style jewel case and JRPG-like cover art with inaccurate Japanese text. The unnatural Japanese text on the case reads "Fly Shooting Free (of charge), Tactical, Flight Actions" (フライ射撃無料 タクティカル フライトの行動). A few of these were given out as keepsakes to friends, who would play along with the illusion that the Arc Symphony JRPG was a real video game by posting about it on social media, sharing photographs of the jewel cases on the internet accompanied with comments about the nostalgic feelings they supposedly had for the game. Park said that she liked this idea, since it replicated the game's premise of learning about a game through its fan community in real life. Park and Evans brought the remaining cases to the Toronto Comic Arts Festival, where more people joined in; according to Park, some people insisted that they remembered playing the Arc Symphony JRPG, something she described as feeling surreal. In addition to the case, a fake fan site for the JRPG was created in the style of fan sites from the 1990s; it was coded by Park, and includes fake fan fiction. As she had never been interested in fan fiction herself, she described what she had written as "accurately bad".
Following a countdown on the fan site, the game was released on May 15, 2017, through Park's Itch.io page, and is available both as a browser game and in a downloadable version playable on Microsoft Windows, MacOS and Linux. The game is also accessible from within Park and Evans' game Subserial Network.
Reception
Julie Muncy of Wired called the game "engaging [and] incredibly polished" despite its short playtime, and described it and its marketing as similar to performance art. Polygon's Allegra Frank found the game "amusing and quirky", and also commented positively on the marketing, saying that she was amazed by how it manipulated people's memories. Gita Jackson at Kotaku said that the marketing fooled her due to how accurate the fan site was to real performance of fandom in the 1990s, and called it part of what makes the game work, as it sets up nostalgia for the JRPG, making it easier to pretend to be a fan of it within the game. She described the game as feeling "like a snapshot of [a] world long lost", with an accurate cast of characters. Brendan Caldwell at Rock, Paper, Shotgun included Arc Symphony on a list of recommended free games, where he called its characters and the interactions between them realistic, and described it as fun to see the "quirks and squabbles" of fandoms as an unseen observer.
References
External links
2010s interactive fiction
2017 video games
Browser games
Linux games
MacOS games
Single-player video games
Twine games
Video games about video games
Video games developed in Canada
Video games set in the 1990s
Windows games
|
14745373
|
https://en.wikipedia.org/wiki/IPv4%20header%20checksum
|
IPv4 header checksum
|
The IPv4 header checksum is a checksum used in version 4 of the Internet Protocol (IPv4) to detect corruption in the header of IPv4 packets. It is carried in the IP packet header and holds the 16-bit ones' complement of the ones' complement sum of the header's 16-bit words.
The IPv6 protocol does not use header checksums. Its designers considered that the whole-packet link layer checksumming provided in protocols such as PPP and Ethernet, combined with the use of checksums in upper layer protocols such as TCP and UDP, is sufficient. Thus, IPv6 routers are relieved of the task of recomputing the checksum whenever the packet changes, for instance when the hop limit counter is decremented on every hop.
Computation
The checksum calculation is defined in RFC 791:
The checksum field is the 16-bit ones' complement of the ones' complement sum of all 16-bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero.
If there is no corruption, the result of summing the entire IP header, including checksum, should be zero. At each hop, the checksum is verified. Packets with checksum mismatch are discarded. The router must adjust the checksum if it changes the IP header (such as when decrementing the TTL).
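Because only a few header fields normally change in transit, a router can update the stored checksum incrementally rather than recomputing it over all ten words. The C sketch below follows the ~(~HC + ~m + m') update form described in RFC 1624, where m is the old value of the changed 16-bit word (for a TTL decrement, the word holding the TTL and protocol fields) and m' is the new value; the function name is illustrative, and a router could equally well recompute the checksum from scratch.

#include <stdint.h>

/* Incrementally update an IPv4 header checksum after one 16-bit header
 * word changes, using the ones' complement arithmetic of RFC 1624:
 * HC' = ~(~HC + ~m + m'). */
uint16_t ipv4_checksum_update(uint16_t old_check, uint16_t old_word, uint16_t new_word)
{
    uint32_t sum = (uint16_t)~old_check;   /* ~HC */
    sum += (uint16_t)~old_word;            /* ~m  */
    sum += new_word;                       /* m'  */
    while (sum >> 16)                      /* fold carries (ones' complement add) */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}

With the example header shown in the next section, decrementing the TTL from 0x40 to 0x3f changes the word 0x4011 to 0x3f11, and ipv4_checksum_update(0xb861, 0x4011, 0x3f11) yields 0xb961, the same value a full recomputation gives.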
Examples
Calculating the IPv4 header checksum
Take the following truncated excerpt of an IPv4 packet. The header is shown in bold and the checksum is underlined.
4500 0073 0000 4000 4011 b861 c0a8 0001
c0a8 00c7 0035 e97c 005f 279f 1e4b 8180
For ones' complement addition, each time a carry occurs, we must add a 1 to the sum. A carry check and correction can be performed with each addition or as a post-process after all additions. If another carry is generated by the correction, another 1 is added to the sum.
To calculate the checksum, we can first calculate the sum of each 16-bit value within the header, skipping only the checksum field itself. Note that these values are in hexadecimal notation.
4500 + 0073 + 0000 + 4000 + 4011 + c0a8 + 0001 + c0a8 + 00c7 = 2479c
The first digit is the carry count and is added to the sum:
2 + 479c = 479e (if another carry is generated by this addition, another 1 must be added to the sum)
To obtain the checksum we take the ones' complement of this result: b861 (as shown underlined in the original IP packet header).
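The same calculation can be expressed compactly in code. The following C sketch sums the example header's words with the checksum field zeroed, folds the carries back into the low 16 bits, and takes the ones' complement; the function and array names are illustrative, not part of any particular networking API.

#include <stdint.h>
#include <stdio.h>

/* Ones' complement checksum over an array of 16-bit header words.
 * The caller passes the header with the checksum field set to zero. */
uint16_t ipv4_checksum(const uint16_t *words, int count)
{
    uint32_t sum = 0;
    for (int i = 0; i < count; i++)
        sum += words[i];
    while (sum >> 16)                       /* add back the carries */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;                  /* ones' complement of the sum */
}

int main(void)
{
    /* The example header, with the checksum field (the sixth word) zeroed. */
    uint16_t header[10] = { 0x4500, 0x0073, 0x0000, 0x4000, 0x4011,
                            0x0000, 0xc0a8, 0x0001, 0xc0a8, 0x00c7 };
    printf("%04x\n", ipv4_checksum(header, 10));   /* prints b861 */
    return 0;
}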
Verifying the IPv4 header checksum
When verifying a checksum, the same procedure is used as above, except that the original header checksum is not omitted.
4500 + 0073 + 0000 + 4000 + 4011 + b861 + c0a8 + 0001 + c0a8 + 00c7 = 2fffd
Add the carry bits:
fffd + 2 = ffff
Taking the ones' complement (flipping every bit) yields 0000, which indicates that no error is detected.
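Verification follows the same arithmetic without zeroing the checksum field: summing all ten words and folding the carries must give 0xffff, whose ones' complement is zero. A minimal C sketch, reusing the folding loop from the computation example above:

#include <stdint.h>
#include <stdbool.h>

/* Returns true if the ones' complement sum over all header words,
 * including the stored checksum, folds to 0xffff (complement 0). */
bool ipv4_header_ok(const uint16_t *words, int count)
{
    uint32_t sum = 0;
    for (int i = 0; i < count; i++)
        sum += words[i];
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return sum == 0xFFFF;
}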
The IPv4 header checksum cannot detect reordering of the 16-bit values within the header, since ones' complement addition is commutative and yields the same sum regardless of word order.
See also
Header check sequence
References
External links
Header Checksum
Error detection and correction
|
144949
|
https://en.wikipedia.org/wiki/Altair%208800
|
Altair 8800
|
The Altair 8800 is a microcomputer designed in 1974 by MITS and based on the Intel 8080 CPU. Interest grew quickly after it was featured on the cover of the January 1975 issue of Popular Electronics and was sold by mail order through advertisements there, in Radio-Electronics, and in other hobbyist magazines. The Altair is widely recognized as the spark that ignited the microcomputer revolution as the first commercially successful personal computer. The computer bus designed for the Altair was to become a de facto standard in the form of the S-100 bus, and the first programming language for the machine was Microsoft's founding product, Altair BASIC.
History
While serving at the Air Force Weapons Laboratory at Kirtland Air Force Base, Ed Roberts and Forrest M. Mims III decided to use their electronics background to produce small kits for model rocket hobbyists. In 1969, Roberts and Mims, along with Stan Cagle and Robert Zaller, founded Micro Instrumentation and Telemetry Systems (MITS) in Roberts' garage in Albuquerque, New Mexico, and started selling radio transmitters and instruments for model rockets.
Calculators
The model rocket kits were a modest success and MITS wanted to try a kit that would appeal to more hobbyists. The November 1970 issue of Popular Electronics featured the Opticom, a kit from MITS that would send voice over an LED light beam. As Mims and Cagle were losing interest in the kit business, Roberts bought his partners out, then began developing a calculator kit. Electronic Arrays had just announced a set of six large scale integrated (LSI) circuit chips that would make a four-function calculator. The MITS 816 calculator kit used the chipset and was featured on the November 1971 cover of Popular Electronics. This calculator kit sold for $175 ($275 assembled). Forrest Mims wrote the assembly manual for this kit and many others over the next several years. As payment for each manual he often accepted a copy of the kit.
The calculator was successful and was followed by several improved models. The MITS 1440 calculator was featured in the July 1973 issue of Radio-Electronics. It had a 14-digit display, memory, and square root function. The kit sold for $200 and the assembled version was $250. MITS later developed a programmer unit that would connect to the 816 or 1440 calculator and allow programs of up to 256 steps.
Test equipment
In addition to calculators, MITS made a line of test equipment kits. These included an IC tester, a waveform generator, a digital voltmeter, and several other instruments. To keep up with the demand, MITS moved into a larger building at 6328 Linn NE in Albuquerque in 1973. They installed a wave soldering machine and an assembly line at the new location. In 1972, Texas Instruments developed its own calculator chip and started selling complete calculators at less than half the price of other commercial models. MITS and many other companies were devastated by this, and Roberts struggled to reduce his quarter-million-dollar debt.
Popular Electronics
In January 1972, Popular Electronics merged with another Ziff-Davis magazine, Electronics World. The change in editorial staff upset many of their authors, and they started writing for a competing magazine, Radio-Electronics. In 1972 and 1973, some of the best construction projects appeared in Radio-Electronics.
In 1974, Art Salsberg became editor of Popular Electronics. It was Salsberg's goal to reclaim the lead in electronics projects. He was impressed with Don Lancaster's TV Typewriter (Radio Electronics, September 1973) article and wanted computer projects for Popular Electronics. Don Lancaster did an ASCII keyboard for Popular Electronics in April 1974. They were evaluating a computer trainer project by Jerry Ogden when the Mark-8 8008-based computer by Jonathan Titus appeared on the July 1974 cover of Radio-Electronics. The computer trainer was put on hold and the editors looked for a real computer system. (Popular Electronics gave Jerry Ogden a column, Computer Bits, starting in June 1975.)
One of the editors, Les Solomon, knew MITS was working on an Intel 8080 based computer project and thought Roberts could provide the project for the always popular January issue. The TV Typewriter and the Mark-8 computer projects were just a detailed set of plans and a set of bare printed circuit boards. The hobbyist faced the daunting task of acquiring all of the integrated circuits and other components. The editors of Popular Electronics wanted a complete kit in a professional-looking enclosure.
Ed Roberts and his head engineer, Bill Yates, finished the first prototype in October 1974 and shipped it to Popular Electronics in New York via the Railway Express Agency. However, it never arrived due to a strike by the shipping company. Solomon already had a number of pictures of the machine and the article was based on them. Roberts got to work on building a replacement. The computer on the magazine cover is an empty box with just switches and LEDs on the front panel. The finished Altair computer had a completely different circuit board layout than the prototype shown in the magazine. The January 1975 issue appeared on newsstands a week before Christmas of 1974, and the kit was officially (if not yet practically) available for sale.
The name
The typical MITS product had a generic name like the "Model 1440 Calculator" or the "Model 1600 Digital Voltmeter". Ed Roberts was busy finishing the design and left the naming of the computer to the editors of Popular Electronics.
One explanation of the Altair name, which editor Les Solomon later told the audience at the first Altair Computer Convention (March 1976), is that the name was inspired by Les's 12-year-old daughter, Lauren. "She said why don't you call it Altair – that's where the Enterprise is going tonight." The Star Trek episode is probably "Amok Time", as this is the only one from The Original Series which takes the Enterprise crew to Altair (Six).
Another explanation is that the Altair was originally going to be named the PE-8 (Popular Electronics 8-bit), but Les Solomon thought this name to be rather dull, so Les, Alexander Burawa (associate editor), and John McVeigh (technical editor) decided that: "It's a stellar event, so let's name it after a star." McVeigh suggested "Altair", the twelfth brightest star in the sky.
Intel 8080
Ed Roberts had designed and manufactured programmable calculators and was familiar with the microprocessors available in 1974. He thought the Intel 4004 and Intel 8008 were not powerful enough (in fact several microcomputers based on Intel chips were already on the market: the Canadian company Microsystems International's CPS-1, built in 1972, used a MIL MF7114 chip modeled on the 4004, the Micral was marketed in January 1973 by the French company R2E, and the MCM/70 was marketed in 1974 by the Canadian company Micro Computer Machines); the National Semiconductor IMP-8 and IMP-16 required external hardware; the Motorola 6800 was still in development. So he chose the 8-bit Intel 8080. At that time, Intel's main business was selling memory chips by the thousands to computer companies. They had no experience in selling small quantities of microprocessors. When the 8080 was introduced in April 1974, Intel set the single unit price at $360 (about $1,700 in 2014 dollars). "That figure had a nice ring to it," recalled Intel's Dave House in 1984. "Besides, it was a computer, and they usually cost thousands of dollars, so we felt it was a reasonable price." Ed Roberts had experience in buying OEM quantities of calculator chips and he was able to negotiate a $75 price (about $350 in 2014 dollars) for the 8080 microprocessor chips.
Intel made the Intellec-8 Microprocessor Development System that typically sold for a very profitable $10,000. It was functionally similar to the Altair 8800, but it was a commercial grade system with a wide selection of peripherals and development software. Customers would ask Intel why their Intellec-8 was so expensive when the Altair was only $400. Some salesmen said that MITS was getting cosmetic rejects or otherwise inferior chips. In July 1975, Intel sent a letter to its sales force stating that the MITS Altair 8800 computer used standard Intel 8080 parts. The sales force should sell the Intellec system based on its merits, and no one should make derogatory comments about valued customers like MITS. The letter was reprinted in the August 1975 issue of MITS Computer Notes. The "cosmetic defect" rumor has appeared in many accounts over the years although both MITS and Intel issued written denials in 1975.
The launch
For a decade, colleges had required science and engineering majors to take a course in computer programming, typically using the FORTRAN or BASIC languages. This meant there was a sizable customer base who knew about computers. In 1970, electronic calculators were not seen outside of a laboratory, but by 1974 they were a common household item. Calculators and video games like Pong introduced computer power to the general public. Electronics hobbyists were moving on to digital projects such as digital voltmeters and frequency counters. The Altair had enough power to be actually useful, and was designed as an expandable system that opened it up to all sorts of applications.
Ed Roberts optimistically told his banker that he could sell 800 computers, while in reality they needed to sell 200 over the next year just to break even. When readers got the January issue of Popular Electronics, MITS was flooded with inquiries and orders. They had to hire extra people just to answer the phones. In February MITS received 1,000 orders for the Altair 8800. The quoted delivery time was 60 days but it was months before they could meet that. Roberts focused on delivering the computer; all of the options would wait until they could keep pace with the orders. MITS claimed to have delivered 2,500 Altair 8800s by the end of May. The number was over 5,000 by August 1975. MITS had under 20 employees in January but had grown to 90 by October 1975.
The Altair 8800 computer was a break-even sale for MITS. They needed to sell additional memory boards, I/O boards and other options to make a profit. The system came with a "1024 word" (1024 byte) memory board populated with 256 bytes. The BASIC language was announced in July 1975 and it required one or two 4096 word memory boards and an interface board.
MITS Price List, Popular Electronics, August 1975.
4K BASIC language (when purchased with Altair, 4096 words of memory and interface board) $60
8K BASIC language (when purchased with Altair, two 4096-word memory boards and interface board) $75
MITS had no competition in the US for the first half of 1975. Their 4K memory board used dynamic RAM and it had several design problems. The delay in shipping optional boards and the problems with the 4K memory board created an opportunity for outside suppliers.
An enterprising Altair owner, Robert Marsh, designed a 4K static memory that was plug-in compatible with the Altair 8800 and sold for $255. His company was Processor Technology, one of the most successful Altair compatible board suppliers. Their advertisement in the July 1975 issue of Popular Electronics promised interface and PROM boards in addition to the 4K memory board. They would later develop a popular video display board that would plug directly into the Altair.
A consulting company in San Leandro, California, IMS Associates, Inc., wanted to purchase several Altair computers, but the long delivery time convinced them that they should build their own computers. In the October 1975 Popular Electronics, a small advertisement announced the IMSAI 8080 computer. The ad noted that all boards were "plug compatible" with the Altair 8800. The computer cost $439 for a kit. The first 50 IMSAI computers shipped in December 1975. The IMSAI 8080 computer improved on the original Altair design in several areas. It was easier to assemble: the Altair required 60 wire connections between the front panel and the motherboard (backplane), and the MITS motherboard consisted of four-slot segments that had to be connected together with 100 wires, whereas the IMSAI motherboard had 18 slots. The IMSAI also had a larger power supply to handle the increasing number of expansion boards used in typical systems. The IMSAI advantage was short-lived because MITS had recognized these shortcomings and developed the Altair 8800B, which was introduced in June 1976.
In 1977, Pertec Computer Corporation purchased MITS and began to market the computer, without changes (except for branding), as the PCC 8800 in 1978.
Description
In the first design of the Altair, the parts needed to make a complete machine would not fit on a single motherboard, and the machine consisted of four boards stacked on top of each other with stand-offs. Another problem facing Roberts was that the parts needed to make a truly useful computer weren't available, or wouldn't be designed in time for the January launch date. So during the construction of the second model, he decided to build most of the machine on removable cards, reducing the motherboard to nothing more than an interconnect between the cards, a backplane. The basic machine consisted of five cards, including the CPU on one and memory on another. He then looked for a cheap source of connectors, and came across a supply of 100-pin edge connectors. The S-100 bus was eventually acknowledged by the professional computer community and adopted as the IEEE-696 computer bus standard.
The Altair bus consists of the pins of the Intel 8080 run out onto the backplane. Little thought went into the design, which led to problems such as shorts between adjacently placed power lines of differing voltages. Another oddity was that the system included two unidirectional 8-bit data buses, when the normal practice was a single bidirectional bus (this oddity did, however, allow a later expansion of the S-100 standard to 16 bits bidirectional by using both 8-bit buses in parallel). A deal on power supplies led to the use of +8V and +18V, which had to be locally regulated on the cards to TTL (+5V) or RS-232 (+12V) standard voltage levels.
The Altair shipped in a two-piece case. The backplane and power supply were mounted on a base plate, along with the front and rear of the box. The "lid" was shaped like a C, forming the top, left, and right sides of the box. The front panel, which was inspired by the Data General Nova minicomputer, included a large number of toggle switches to feed binary data directly into the memory of the machine, and a number of red LEDs to read those values back out.
Programming the Altair via the front panel could be a tedious and time-consuming process. Programming required toggling the switches to positions corresponding to the desired 8080 microprocessor instruction or opcode in binary, then using the 'DEPOSIT NEXT' switch to load that instruction into the next address of the machine's memory. This step was repeated until all the opcodes of a presumably complete and correct program were in place. The only output from the programs was the patterns of lights on the panel. Nevertheless, many were sold in this form. Development was already underway on additional cards, including a paper tape reader for storage, additional RAM cards, and an RS-232 interface to connect to a proper Teletype terminal.
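The deposit cycle can be illustrated with a small software model. The C sketch below is a toy simulation of the front-panel workflow described above, not the actual MITS hardware logic: EXAMINE sets the address counter, DEPOSIT stores the switch value at that address, and DEPOSIT NEXT advances the address before storing, so a program is keyed in one byte at a time. The short 8080 fragment keyed in (MVI A,2 / ADD A / HLT) uses standard 8080 opcodes.

#include <stdint.h>
#include <stdio.h>

static uint8_t memory[65536];   /* the Altair's 16-bit address space */
static uint16_t address;        /* front-panel address counter */

static void examine(uint16_t a)        { address = a; }              /* EXAMINE      */
static void deposit(uint8_t data)      { memory[address] = data; }   /* DEPOSIT      */
static void deposit_next(uint8_t data) { memory[++address] = data; } /* DEPOSIT NEXT */

int main(void)
{
    examine(0x0000);      /* start keying in at address 0 */
    deposit(0x3E);        /* MVI A,2 (opcode)             */
    deposit_next(0x02);   /*         (immediate operand)  */
    deposit_next(0x87);   /* ADD A                        */
    deposit_next(0x76);   /* HLT                          */
    for (uint16_t a = 0; a <= address; a++)   /* read the values back, as the panel lights would show them */
        printf("%04x: %02x\n", a, memory[a]);
    return 0;
}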
Software
Altair BASIC
Ed Roberts received a letter from Traf-O-Data asking if he would be interested in buying what would eventually be the BASIC programming language for the machine. He called the company and reached a private home, where no one had heard of anything like BASIC. In fact the letter had been sent by Bill Gates and Paul Allen from the Boston area, and they had no BASIC yet to offer. When they called Roberts to follow up on the letter he expressed his interest, and the two started work on their BASIC interpreter using a self-made simulator for the 8080 running on a PDP-10. They figured they had 30 days before someone else beat them to the punch, and once they had a version working on the simulator, Allen flew to Albuquerque to deliver the program, Altair BASIC (aka MITS 4K BASIC), on a paper tape. The first time it was run, it displayed "READY", then Allen typed "PRINT 2+2" and it immediately printed the correct answer: "4". The game Lunar Lander was entered and it worked as well. Gates soon joined Allen and formed Microsoft, then spelled "Micro-Soft".
Altair DOS
Announced in late 1975, it started shipping in August 1977.
See also
SIMH, which emulates the Altair 8800 with both 8080 and Z80 CPUs
IMSAI 8080
References
Further reading
Books
Chapter 6 "Mechanics: Kits & Microcomputers"
Magazines
External links
MITS Altair 8800 exhibit at old-computers.com's virtual computer museum
Virtual Altair Museum
Altair 8800 images and information at vintage-computer.com
Marcus Bennett's Altair Documentation resource
Maker of a hardware emulation of the 8800 running on an Atmel AVR 8515
Altair 8800 Clone
True-to-life MITS Altair 8800 online simulator
Early microcomputers
Computer-related introductions in 1974
S-100 machines
8-bit computers
|
11453313
|
https://en.wikipedia.org/wiki/Shadow%20RAM%20%28Acorn%29
|
Shadow RAM (Acorn)
|
Shadow RAM, on the Acorn BBC Micro, Master-series and Acorn Electron microcomputers, is the name given to a special framebuffer implementation that frees up main memory for use by program code and data. Some implementations of shadow RAM also permit double-buffered graphics.
Background
The BBC Micro, Master-series and Electron machines use the 8-bit 6502 and 65C102 processors with a 16-bit address space. This address space is split into 32 KB of RAM (0x0000 to 0x7FFF), 16 KB of sideways "paged" address space (0x8000 to 0xBFFF) and 16 KB of operating system space (0xC000 to 0xFFFF). Video or screen memory is typically allocated from 0x7FFF downwards as necessary, occupying as little as 1 KB for Teletext mode 7 (and thus the region from 0x7C00 to 0x7FFF), or as much as 20 KB for modes 0-2 (and thus the region from 0x3000 to 0x7FFF). Screen memory can therefore occupy a considerable amount of the directly addressable 32 KB of RAM.
Overview
Shadow RAM is a block of RAM that can be considered to reside in parallel to the normal memory map and is accessed by the system only under certain conditions. When shadow RAM is enabled, the memory region normally used for screen memory becomes available for BASIC program use and for applications employing officially documented operating system interfaces. Given the maximum requirement of 20 KB for screen memory with the systems concerned, the amount of shadow RAM provided is typically 20 KB.
Shadow RAM was fitted as standard on the BBC Micro Model B+ and on the BBC Master series, but was an optional feature provided by third-party expansions on earlier BBC Micro systems and the Acorn Electron. The Aries-B20 product, initially sold by Cambridge Computer Consultants, offered 20 KB shadow RAM for the BBC Model B, transparently diverting non-framebuffer accesses to the shadow RAM for addresses in the 20 KB video memory region.
BBC Master Implementation
Unlike the expansion boards for earlier systems, the BBC Master implementation of shadow RAM permits the selection of shadow memory instead of main memory for use as screen memory, this being done via the Access Control Register. By switching between main and shadow memory on alternate frames, double-buffered video could be used. Acorn provided a demonstration program in BASIC showing scrolling cloud animation with and without double buffering. The video game Firetrack would also use double buffering if shadow RAM was present.
On the BBC Master (and also the BBC Model B+), shadow RAM is activated by setting the most significant bit of the memory mode number. For example, to use mode 1 with shadow RAM enabled, mode 129 (128 combined with 1) is selected.
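In other words, the shadow variant of a mode is simply the base mode number with bit 7 set. A trivial C illustration of that arithmetic follows; the helper name is ours for illustration, not an Acorn operating system call.

#include <stdio.h>

/* Map a base screen mode to its shadow equivalent by setting bit 7,
 * e.g. mode 1 -> 129, mode 7 -> 135. */
static unsigned shadow_mode(unsigned mode)
{
    return mode | 0x80;
}

int main(void)
{
    printf("%u\n", shadow_mode(1));   /* prints 129 */
    return 0;
}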
Another significant difference between the Master implementation of shadow RAM and previous implementations also offering 32 KB of shadow RAM is the allocation of the extra 12 KB beyond the 20 KB shadowing the screen memory. Instead of this memory occupying a single region from 0x8000 to 0xAFFF in sideways RAM space, as it does with the Aries-B32 product and the BBC Model B+, it instead occupies two regions in the Master as "private RAM": a 4 KB region from 0x8000 to 0x8FFF holding function key definitions, workspace for the operating system, and character and font definitions; an 8 KB region from 0xC000 to 0xDFFF holding paged (sideways) ROM and operating system workspace. Thus, the Master was able to support character set redefinition and to allocate memory to filing systems without the amount of available user RAM being reduced.
Further Refinements
Subsequent products augmented the shadow RAM with additional RAM that could be used for other purposes. For instance, the Aries-B32 product permitted shadow/sideways RAM combinations of 20 KB/12 KB and 16 KB/16 KB, or the use of the 32 KB of RAM as two sideways RAM banks. The Slogger Master RAM Board offered a 32 KB RAM solution for the Acorn Electron alongside a "turbo mode" enhancement.
Patent Dispute
A dispute arose between the designers of the Aries-B20 shadow RAM board (Aries Computers Limited) and two other companies offering similar products, Raven Micro Products and Watford Electronics, over the alleged infringement of patent GB2137382A, which describes techniques employed in the design of the Aries-B20 board. The products involved were Raven Micro Products' Raven-20 and Watford Electronics' 32K RAM Expansion Board. Ultimately, in 1986, Watford Electronics acquired Aries Computers in a "five figure deal" including the patents involved, with Watford subsequently selling Aries' products alongside the company's own.
References
Computer memory
Acorn Computers
Memory management
|
66230
|
https://en.wikipedia.org/wiki/Peshawar
|
Peshawar
|
Peshawar (; ; ; ; ) is the capital of the Pakistani province of Khyber Pakhtunkhwa and its largest city. It is the sixth-largest city in Pakistan, and the largest Pashtun-majority city in the country. Situated in the broad Valley of Peshawar east of the historic Khyber Pass, close to the border with Afghanistan, Peshawar's recorded history dates back to at least 539 BCE, making it the oldest city in Pakistan and one of the oldest cities in South Asia.
In the ancient era, the city was known as Purushapura and served as the capital of the Kushan Empire under the rule of Kanishka; it was home to the Kanishka stupa, which was among the tallest buildings in the ancient world. Peshawar was then ruled by the Hephthalites, followed by the Hindu Shahis, before the arrival of Muslim empires. The city was an important trading centre during the Mughal era, before becoming part of the Pashtun Durrani Empire in 1747, and serving as their winter capital from 1776 until the capture of the city by the Sikh Empire in March 1823; the Sikhs were in turn followed by the British Indian Empire in 1846.
Etymology
The modern name of the city, "Peshawar", is derived from the Sanskrit word "Purushapura" (Puruṣapura, meaning "City of Men" or "City of Purusha"). It was given this name by the Mughal Emperor Akbar, from its old name Parashawar, the meaning of which Akbar did not understand. The ruler of the city during its founding may have been a Hindu raja (king) named Purush; the word pur means "city" in Sanskrit. Sanskrit, written in the Kharosthi script, was the literary language employed by the Buddhist kingdoms which ruled over the area during its earliest recorded period. The city's name may also be derived from the Sanskrit name for "City of Flowers," Poshapura, a name found in an ancient Kharosthi inscription that may refer to Peshawar.
The 7th-century account of the Chinese Buddhist monk Xuanzang records the name of a city in Gandhara as Po-la-sha-pu-lo (Chinese: 布路沙布邏, bùlùshābùló), and an earlier 5th-century account by Fa-Hien records the city's name as Fou-lou-sha (Chinese: 弗樓沙, fùlóshā), the Chinese equivalent of the Sanskrit name of the city, Purushapura. An ancient inscription from the Shapur era identifies a city in the Gandhara valley by the name pskbvr, which may be a reference to Peshawar.
The Arab historian and geographer Al-Masudi noted that by the mid 10th century, the city was known as Parashāwar. The name was noted to be Purshawar and Purushavar by Al-Biruni.
The city began to be known as Peshāwar by the era of Emperor Akbar. The current name is said by some to have been based upon the Persian for "frontier town" or, more literally, "forward city," though transcription errors and linguistic shifts may account for the city's new name. One theory suggests that the city's name is derived from the Persian name "Pesh Awardan", meaning "place of first arrival" or "frontier city," as Peshawar was the first city in the Indian subcontinent after crossing the Khyber Pass. Akbar's biographer, Abu'l-Fazl ibn Mubarak, lists the city's name as both Parashāwar, transcribed in Persian as , and Peshāwar ().
History
Ancient
Founding
Peshawar was founded as the city of Puruṣapura, on the Gandhara Plains in the broad Valley of Peshawar in 100 CE. It may have been named after a Hindu raja who ruled the city who was known as Purush. The city likely first existed as a small village in the 5th century BCE, within the cultural sphere of ancient India. Puruṣapura was founded near the ancient Gandharan capital city of Pushkalavati, near present-day Charsadda.
Greek
In the winter of 327–26 BCE, Alexander the Great subdued the Valley of Peshawar during his invasion of the Indus Valley, as well as the nearby Swat and Buner valleys. Following Alexander's conquest, the Valley of Peshawar came under the suzerainty of Seleucus I Nicator, founder of the Seleucid Empire. A locally-made vase fragment that was found in Peshawar depicts a scene from Sophocles' play Antigone.
Mauryan
Following the Seleucid–Mauryan war, the region was ceded to the Mauryan Empire in 303 BCE. Around 300 BCE, the Greek diplomat and historian Megasthenes noted that Purushapura (ancient Peshawar) was the western terminus of a Mauryan road that connected the city to the empire's capital at Pataliputra, near the city of Patna in the modern-day Indian state of Bihar.
As Mauryan power declined, the Greco-Bactrian Kingdom based in modern Afghanistan declared its independence from the Seleucid Empire, and quickly seized ancient Peshawar around 190 BCE. The city was then captured by Gondophares, founder of the Indo-Parthian Kingdom. Gondophares established the nearby Takht-i-Bahi monastery in 46 CE.
Kushan
In the first century of the Common Era, Purushapura came under the control of Kujula Kadphises, founder of the Kushan Empire. The city was made the empire's winter capital. The Kushans' summer capital at Kapisi (modern Bagram, Afghanistan) was seen as the secondary capital of the empire, while Puruṣapura was considered to be its primary capital. Ancient Peshawar's population was estimated to be 120,000, which would make it the seventh-most populous city in the world at the time. As a devout Buddhist, the emperor Kanishka built the grand Kanishka Mahavihara monastery. After his death, the magnificent Kanishka stupa was built in Peshawar to house Buddhist relics. The golden age of the Kushan Empire in Peshawar ended in 232 CE with the death of the last great Kushan king, Vasudeva I.
Around 260 CE, the armies of the Sasanid Emperor Shapur I launched an attack against Peshawar, and severely damaged Buddhist monuments and monasteries throughout the Valley of Peshawar. Shapur's campaign also resulted in damage to the city's monumental stupa and monastery. The Kushans were made subordinate to the Sasanids and their power rapidly dwindled, as the Sasanids blocked lucrative trade routes westward out of the city.
Kushan Emperor Kanishka III was able to temporarily reestablish control over the entire Valley of Peshawar after Shapur's invasion, but the city was then captured by the Central Asian Kidarite kingdom in the early 400s CE.
White Huns
The White Huns devastated ancient Peshawar in the 460s CE, and ravaged the entire region of Gandhara, destroying its numerous monasteries. The Kanishka stupa was rebuilt during the White Hun era with the construction of a tall wooden superstructure, built atop a stone base, and crowned with a 13-layer copper-gilded chatra. In the 400s CE, the Chinese Buddhist pilgrim Faxian visited the structure and described it as "the highest of all the towers" in the "terrestrial world", which ancient travelers claimed was up to tall, though modern estimates suggest a height of .
In 520 CE the Chinese monk Song Yun visited Gandhara and ancient Peshawar during the White Hun era, and noted that the region was in conflict with nearby Kapisa. The Chinese monk and traveler Xuanzang visited ancient Peshawar around 630 CE, after Kapisa's victory, and lamented that the city and its great Buddhist monuments had decayed to ruin, although some monks studying Hinayana Buddhism continued to study at the monastery's ruins. Xuanzang estimated that only about 1,000 families continued to live in a small quarter among the ruins of the former grand capital.
Early Islamic
Until the mid 7th century, the residents of ancient Peshawar had a ruling elite of Central Asian Scythian descent, who were then displaced by the Hindu Shahis of Kabul.
Islam is believed to have been first introduced to the Buddhist, Hindu and other indigenous inhabitants of Puruṣapura in the later 7th century.
As the first Pashtun tribe to settle the region, the Dilazak Pashtuns began settling in the Valley of Peshawar, and are believed to have settled regions up to the Indus River by the 11th century. The Arab historian and geographer Al-Masudi noted that by the mid 10th century, the city had become known as Parashāwar.
In 986–87 CE, Peshawar's first encounter with Muslim armies occurred when Sabuktigin invaded the area and fought the Hindu Shahis under their king, Anandpal.
Medieval
On 28 November 1001, Sabuktigin's son Mahmud Ghazni decisively defeated the army of Raja Jayapala, son of Anandpal, at the Battle of Peshawar, and established rule of the Ghaznavid Empire in the Peshawar region.
During the Ghaznavid era, Peshawar served as an important stop between the Afghan plateau, and the Ghaznavid garrison city of Lahore. During the 10th–12th century, Peshawar served as a headquarters for Hindu Nath Panthi Yogis, who in turn are believed to have extensively interacted with Muslim Sufi mystics.
In 1179–80, Muhammad Ghori captured Peshawar, though the city was then destroyed in the early 1200s at the hands of the Mongols. Peshawar was an important regional centre under the Lodi Empire.
The Ghoryakhel Pashtun tribes, the Khalil, Muhmand, Daudzai, and Chamkani, along with some Khashi Khel Pashtuns, ancestors of modern-day Yusufzai and Gigyani Pashtuns, began settling rural regions around Peshawar in the late 15th and 16th centuries. The Ghoryakhel and Khashi Khel tribes pushed the Dilazak Pashtun tribes east of the Indus River following a battle in 1515 near the city of Mardan.
Mughal
Peshawar remained an important centre on trade routes between India and Central Asia. The Peshawar region was a cosmopolitan region in which goods, peoples, and ideas would pass along trade routes. Its importance as a trade centre is highlighted by the destruction of over one thousand camel-loads of merchandise following an accidental fire at Bala Hissar fort in 1586. Mughal rule in the area was tenuous, as Mughal suzerainty was only firmly exercised in the Peshawar valley, while the neighbouring valley of Swat was under Mughal rule only during the reign of Akbar.
In July 1526, Emperor Babur captured Peshawar from Daulat Khan Lodi. During Babur's rule, the city was known as Begram, and he rebuilt the city's fort. Babur used the city as a base for expeditions to other nearby towns in Pashtunistan.
Under the reign of Babur's son, Humayun, direct Mughal rule over the city was briefly challenged with the rise of the Pashtun king, Sher Shah Suri, who began construction of the famous Grand Trunk Road in the 16th century. Peshawar was an important trading centre on Sher Shah Suri's Grand Trunk Road. During Akbar's rule, the name of the city changed from Begram to Peshawar. In 1586, Pashtuns rose against Mughal rule during the Roshani Revolt under the leadership of Bayazid Pir Roshan, founder of the egalitarian Roshani movement, who assembled Pashtun armies in an attempted rebellion against the Mughals. The Roshani followers laid siege to the city until 1587.
Peshawar was bestowed with its own set of Shalimar Gardens during the reign of Shah Jahan, which no longer exist.
Emperor Aurangzeb's Governor of Kabul, Mohabbat Khan bin Ali Mardan Khan used Peshawar as his winter capital during the 17th century, and bestowed the city with its famous Mohabbat Khan Mosque in 1630.
Yusufzai tribes rose against Mughal rule during the Yusufzai Revolt of 1667, and engaged in pitched battles with Mughal battalions near Attock. Afridi tribes resisted Mughal rule during the Afridi Revolt of the 1670s. The Afridis massacred a Mughal battalion in the nearby Khyber Pass in 1672 and shut the pass to lucrative trade routes. Mughal armies led by Emperor Aurangzeb himself regained control of the entire area in 1674.
Following Aurangzeb's death in 1707, his son Bahadur Shah I, former Governor of Peshawar and Kabul, was selected to be the Mughal Emperor. As Mughal power declined following the death of Emperor Aurangzeb, the empire's defenses were weakened.
Persian
On 18 November 1738, Peshawar was captured from the Mughal governor Nawab Nasir Khan by the Afsharid armies during the Persian invasion of the Mughal Empire under Nader Shah.
Durranis
In 1747, Peshawar was taken by Ahmad Shah Durrani, founder of the Afghan Durrani Empire. Under the reign of his son Timur Shah, the Mughal practice of using Kabul as a summer capital and Peshawar as a winter capital was reintroduced, with the practice maintained until the Sikh invasion. Peshawar's Bala Hissar Fort served as the residence of Afghan kings during their winter stay in Peshawar, and it was noted to be the main centre of trade between Bukhara and India by British explorer William Moorcroft during the late 1700s. Peshawar was at the centre of a productive agricultural region that provided much of north India's dried fruit.
Timur Shah's grandson, Mahmud Shah Durrani, became king, and quickly seized Peshawar from his half-brother, Shah Shujah Durrani. Shah Shujah was then himself proclaimed king in 1803, and recaptured Peshawar while Mahmud Shah was imprisoned at Bala Hissar fort until his eventual escape. In 1809, the British sent an emissary to the court of Shah Shujah in Peshawar, marking the first diplomatic meeting between the British and Afghans. His half-brother Mahmud Shah then allied himself with the Barakzai Pashtuns, and captured Peshawar once again and reigned until the Battle of Nowshera in March 1823.
Sikh
Ranjit Singh invaded Peshawar in 1818 but soon lost it to the Afghans. Following the Sikh victory against Azim Khan at the Battle of Nowshera in March 1823, Ranjit Singh captured Peshawar. By 1830, Peshawar's economy was noted by Scottish explorer Alexander Burnes to have sharply declined, with Ranjit Singh's forces having destroyed the city's palace and agricultural fields.
Much of Peshawar's caravan trade from Kabul ceased on account of skirmishes between Afghan and Sikh forces, as well as a punitive tax levied on merchants by Ranjit Singh's forces. Singh's government also required Peshawar to forfeit much of its leftover agricultural output to the Sikhs as tribute, while agriculture was further decimated by a collapse of the dried fruit market in north India. Singh appointed Neapolitan mercenary Paolo Avitabile as administrator of Peshawar, who is remembered for having unleashed a reign of terror. His time in Peshawar is known as a time of "gallows and gibbets." The city's famous Mohabbat Khan Mosque, built in 1630 in the Jeweler's Bazaar, was badly damaged and desecrated by the Sikh conquerors.
The Sikh Empire formally annexed Peshawar in 1834 following advances from the armies of Hari Singh Nalwa—bringing the city under direct control of the Sikh Empire's Lahore Durbar. An 1835 attempt by Dost Muhammad Khan to re-occupy the city failed when his army refused to engage in combat with the Dal Khalsa. Sikh settlers from Punjab were settled in the city during Sikh rule. The city's only remaining Gurdwaras were built by Hari Singh Nalwa to accommodate the newly settled Sikhs. The Sikhs also rebuilt the Bala Hissar fort during their occupation of the city.
British Raj
Following the defeat of the Sikhs in the First Anglo-Sikh War in 1845–46 and the Second Anglo-Sikh War in 1849, some of their territories were captured by the British East India Company. The British re-established stability in the wake of ruinous Sikh rule. During the Sepoy Rebellion of 1857, the 4,000 members of the native garrison were disarmed without bloodshed; the absence of brutality meant that Peshawar was not affected by the widespread devastation experienced throughout the rest of British India, and local chieftains sided with the British after the incident.
The British laid out the vast Peshawar Cantonment to the west of the city in 1868, and made the city its frontier headquarters. Additionally, several projects were initiated in Peshawar, including linkage of the city by railway to the rest of British India and renovation of the Mohabbat Khan mosque that had been desecrated by the Sikhs. British suzerainty over regions west of Peshawar was cemented in 1893 by Sir Mortimer Durand, foreign secretary of the British Indian government, who collaboratively demarcated the border between British controlled territories in India and Afghanistan.
The British built Cunningham clock tower in celebration of the Golden Jubilee of Queen Victoria, and in 1906 built the Victoria Hall (now home of the Peshawar Museum) in memory of Queen Victoria. The British introduced Western-style education into Peshawar with the establishment of Edwardes College and Islamia College in 1901 and 1913, along with several schools run by the Anglican Church. For better administration of the region, Peshawar and the adjoining districts were separated from the Punjab Province in 1901, after which Peshawar became capital of the new province.
Peshawar emerged as a centre for both Hindko and Pashtun intellectuals during the British era. Hindko speakers, also referred to as Khaarian ("city dwellers" in Pashto), were responsible for the dominant culture for most of the time that Peshawar was under British rule. Peshawar was also home to a non-violent resistance movement led by Ghaffar Khan, a disciple of Mahatma Gandhi. In April 1930, Khan led a large group of his followers in a protest in Qissa Khawani Bazaar against discriminatory laws that had been enacted by the British rulers—hundreds were killed when British troops opened fire on the demonstrators.
Modern era
In 1947, Peshawar became part of the newly created state of Pakistan, and emerged as a cultural centre in the country's northwest. The partition of India saw the departure of many Hindko-speaking Hindus and Sikhs who had held key positions in the economy of Peshawar. The University of Peshawar was established in the city in 1950, and augmented by the amalgamation of nearby British-era institutions into the university. Until the mid-1950s, Peshawar was enclosed within a city wall and sixteen gates. In the 1960s, Peshawar was a base for a CIA operation to spy on the Soviet Union; in the 1960 U-2 incident, an aircraft that had taken off from Peshawar was shot down by the Soviets. From the 1960s until the late 1970s, Peshawar was a major stop on the famous Hippie trail.
During the Soviet–Afghan War in the 1980s, Peshawar served as a political centre for the CIA and the Inter-Services Intelligence-trained mujahideen groups based in the camps of Afghan refugees. It also served as the primary destination for large numbers of Afghan refugees. By 1980, 100,000 refugees a month were entering the province, with 25% of all refugees living in Peshawar district in 1981. The arrival of large numbers of Afghan refugees strained Peshawar's infrastructure, and drastically altered the city's demography.
Like much of northwest Pakistan, Peshawar has been severely affected by violence from attacks by the Islamist Taliban. Local poets' shrines have been targeted by the Pakistani Taliban, a suicide bomb attack struck the historic All Saints Church in 2013, and, most notably, Taliban militants killed 132 schoolchildren in the 2014 Peshawar school massacre.
Peshawar suffered 111 acts of terror in 2010, which had declined to 18 in 2014, before the launch of Operation Zarb-e-Azb, which further reduced acts of violence throughout Pakistan. More civilians died in acts of violence in 2014 compared to 2010 – largely a result of the Peshawar school massacre.
Geography
Topography
Peshawar is located in the broad Valley of Peshawar, which is surrounded by mountain ranges on three sides, with the fourth opening to the Punjab plains. The city is located in the generally level base of the valley, known as the Gandhara Plains.
Climate
With an influence from the local steppe climate, Peshawar features a hot semi-arid climate (Köppen BSh), with very hot, prolonged summers and brief, mild to cool winters. Winter in Peshawar starts in November and ends in late March, though it sometimes extends into mid-April, while the summer months are from mid-May to mid-September. The mean maximum summer temperature surpasses during the hottest month, and the mean minimum temperature is . The mean minimum temperature during the coolest month is , while the maximum is .
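The Köppen label cited above follows from a simple aridity test. The sketch below is a minimal illustration of that test, assuming representative values for mean annual temperature, annual precipitation, and the share of precipitation falling in the summer half-year; these numbers are illustrative assumptions, not figures taken from this article.

```python
# Minimal sketch of the Koeppen-Geiger dryland (B climate) test.
# The temperature, precipitation, and summer-share values used below are
# illustrative assumptions, not measurements from this article.

def koeppen_b_class(mean_temp_c, annual_precip_mm, summer_precip_share):
    """Return a Koeppen label if the station qualifies as a dry (B) climate."""
    # The aridity threshold depends on how much rain falls in the summer half-year.
    if summer_precip_share >= 0.70:
        threshold = 20 * mean_temp_c + 280
    elif summer_precip_share >= 0.30:
        threshold = 20 * mean_temp_c + 140
    else:
        threshold = 20 * mean_temp_c

    if annual_precip_mm >= threshold:
        return None  # not a dry (B) climate
    subtype = "BW" if annual_precip_mm < threshold / 2 else "BS"  # desert vs. steppe
    letter = "h" if mean_temp_c >= 18 else "k"                    # hot vs. cold variant
    return subtype + letter

# Illustrative values roughly in the range of a hot, winter-rain steppe station.
print(koeppen_b_class(mean_temp_c=23.0, annual_precip_mm=400.0,
                      summer_precip_share=0.4))  # -> "BSh"
```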
Peshawar is not a monsoon region, unlike other parts of Pakistan; however, rainfall occurs in both winter and summer. Due to western disturbances, winter rainfall is heaviest between the months of February and April. The highest amount of winter rainfall, measuring , was recorded in February 2007, while the highest summer rainfall of was recorded in July 2010; during this month, a record-breaking rainfall level of fell within a 24-hour period on 29 July 2010—the previous record was of rain, recorded in April 2009. The average winter rainfall levels are higher than those of summer. Based on a 30-year record, the average annual precipitation level was recorded as and the highest annual rainfall level of was recorded in 2003. Wind speeds vary during the year, from in December to in June. The relative humidity varies from 46% in June to 76% in August. The highest temperature of was recorded on 18 June 1995, while the lowest occurred on 7 January 1970.
Cityscape
Historically, the old city of Peshawar was a heavily guarded citadel that consisted of high walls. In the 21st century, only remnants of the walls remain, but the houses and havelis continue to be structures of significance. Most of the houses are constructed of unbaked bricks, with wooden structures incorporated for protection against earthquakes; many feature wooden doors and latticed wooden balconies. Numerous examples of the city's old architecture can still be seen in areas such as Sethi Mohallah. In the old city, located in inner-Peshawar, many historic monuments and bazaars exist in the 21st century, including the Mohabbat Khan Mosque, Kotla Mohsin Khan, Chowk Yadgar and the Qissa Khawani Bazaar. Due to the damage caused by rapid growth and development, the old walled city has been identified as an area that urgently requires restoration and protection.
The walled city was entered through several main gates that served as the principal entry points — in January 2012, the government announced plans to restore all of the gates, which had been left largely non-existent by damage over time.
Demographics
Population
The population of Peshawar district in 1998 was 2,026,851. The city's annual growth rate is estimated at 3.29% per year, and the 2016 population of Peshawar district is estimated to be 3,405,414. With a population of 1,970,042 according to the 2017 census, Peshawar is the sixth-largest city of Pakistan and the largest city in Khyber Pakhtunkhwa, with a population about five times that of the second-largest city in the province.
Language
The primary native languages spoken in Peshawar are Pashto and Hindko, though English is used in the city's educational institutions, while Urdu is understood throughout the city.
The district of Peshawar is overwhelmingly Pashto-speaking, though the Hindko-speaking minority is concentrated in Peshawar's old city. Hindko speakers in Peshawar increasingly assimilate elements of Pashto and Urdu into their speech.
Religion
Peshawar is overwhelmingly Muslim, with Muslims making up 98.5% of the city's population in the 1998 census. Christians make up the second largest religious group with around 20,000 adherents, while over 7,000 members of the Ahmadiyya Community live in Peshawar. Hindus and Sikhs are also found in the city − though most of the city's Hindu and Sikh community migrated en masse to India following the Partition of British India in 1947.
Though the city's Sikh population drastically declined after Partition, the Sikh community has been bolstered in Peshawar by the arrival of approximately 4,000 Sikh refugees from conflict in the Federally Administered Tribal Areas; in 2008, the largest Sikh population in Pakistan was located in Peshawar. Sikhs in Peshawar self-identify as Pashtuns and speak Pashto as their mother tongue. There was a small but thriving Jewish community until the late 1940s. After the partition and the emergence of the State of Israel, Jews left for Israel.
Afghan refugees
Peshawar has hosted Afghan refugees since the start of the Afghan civil war in 1978, though the rate of migration drastically increased following the Soviet invasion of Afghanistan in 1979. By 1980, 100,000 refugees a month were entering the province, and by 1981 a quarter of all refugees were living in Peshawar district; their arrival strained Peshawar's infrastructure and drastically altered the city's demography. During the 1988 national elections, an estimated 100,000 Afghan refugees were illegally registered to vote in Peshawar.
With the influx of Afghan refugees into Peshawar, the city became a hub for Afghan musicians and artists, as well as a major centre of Pashto literature. Some Afghan refugees have established successful businesses in Peshawar, and play an important role in the city's economy.
In recent years, Peshawar district has hosted up to 20% of all Afghan refugees in Pakistan. In 2005, Peshawar district was home to 611,501 Afghan refugees — who constituted 19.7% of the district's total population. Peshawar's immediate environs were home to large Afghan refugee camps, with Jalozai camp hosting up to 300,000 refugees in 2001 – making it the largest refugee camp in Asia at the time.
Afghan refugees came to be frequently accused of involvement in terrorist attacks that occurred during Pakistan's war against radical Islamists. By 2015, the Pakistani government had adopted a policy to repatriate Afghan refugees, including many who had spent their entire lives in Pakistan. The policy of repatriation was also encouraged by the government of Afghanistan, though many refugees had never registered themselves in Pakistan. Unregistered refugees returning to Afghanistan without their old Afghan identification documents now have no official status in Afghanistan either.
Economy
Peshawar's economic importance has historically been linked to its privileged position at the entrance to the Khyber Pass – the ancient travel route by which most trade between Central Asia and the Indian Subcontinent passed. Peshawar's economy also benefited from tourism in the mid-20th century, as the city formed a crucial part of the Hippie trail.
Peshawar's estimated monthly per capita income was ₨55,246 in 2015, compared to ₨117,924 in Islamabad, and ₨66,359 in Karachi. Peshawar's surrounding region is also relatively poor − Khyber Pakhtunkhwa's cities on average have an urban per capita income that is 20% less than Pakistan's national average for urban residents.
Peshawar was noted by the World Bank in 2014 to be at the helm of a nationwide movement to create an ecosystem for entrepreneurship, freelance jobs, and technology. The city has been host to the World Bank-assisted Digital Youth Summit — an annual event to connect the city and province's youths to opportunities in the digital economy. The 2017 event hosted 100 speakers, including several international speakers, and drew approximately 3,000 delegates.
Industry
Peshawar's Industrial Estate on Jamrud Road is an industrial zone established in the 1960s on 868 acres. The industrial estate hosts furniture, marble industries, and food processing industries, though many of its plots remain underutilized. The Hayatabad Industrial Estate hosts 646 industrial units in Peshawar's western suburbs, though several of the units are no longer in use. As part of the China Pakistan Economic Corridor, 4 special economic zones are to be established in the province, with roads, electricity, gas, water, and security to be provided by the government. The nearby Hattar SEZ is envisioned to provide employment to 30,000 people, and is being developed at a cost of approximately $200 million with completion expected in 2017.
Employment
As a result of large numbers of displaced persons in the city, only 12% of Peshawar's residents were employed in the formalized economy in 2012. Approximately 41% of residents in 2012 were employed in personal services, while 55% of Afghan refugees in the city in 2012 were daily wage earners. By 2016, Pakistan adopted a policy to repatriate Afghan refugees.
Wages for unskilled workers in Peshawar grew on average 9.1% per year between 2002 and 2008. Following the outbreak of widespread Islamist violence in 2007, wages rose only 1.5% between 2008 and 2014. Real wages dropped for some skilled craftsmen during the period between 2008 and 2014.
Constraints
Peshawar's economy has been negatively impacted by political instability since 1979 resulting from the War in Afghanistan and subsequent strain on Peshawar's infrastructure from the influx of refugees. The poor security environment resulting from Islamist violence also impacted the city's economy. With the launch of Operation Zarb-e-Azb in 2014, the country's security environment has drastically improved.
The metropolitan economy suffers from poor infrastructure. The city's economy has also been adversely impacted by shortages of electricity and natural gas. The $54 billion China Pakistan Economic Corridor will generate over 10,000 MW by 2018 – greater than the current electricity deficit of approximately 4,500 MW. Peshawar will also be linked to ports in Karachi by uninterrupted motorway access, while passenger and freight railway tracks will be upgraded between Peshawar and Karachi.
Poor transportation is estimated to cause a loss of 4–6% of the Pakistani GDP. Peshawar has for decades suffered from chaotic, mismanaged, and inadequate public transportation, which has been detrimental to the city's economy. The government has therefore since launched a new bus rapid transit service, BRT Peshawar, covering the whole of Peshawar.
Transportation
Road
Peshawar's east–west growth axis is centred on the historic Grand Trunk Road that connects Peshawar to Islamabad and Lahore. The road is roughly paralleled by the M-1 Motorway between Peshawar and Islamabad, while the M-2 Motorway provides an alternate route to Lahore from Islamabad. The Grand Trunk Road also provides access to the Afghan border via the Khyber Pass, with onwards connections to Kabul and Central Asia via the Salang Pass.
Peshawar is to be completely encircled by the Peshawar Ring Road in order to divert traffic away from the city's congested centre. The road is currently under construction, with some portions open to traffic.
The Karakoram Highway provides access between the Peshawar region and western China, and an alternate route to Central Asia via Kashgar in the Chinese region of Xinjiang.
The Indus Highway provides access to points south of Peshawar, with a terminus in the southern port city of Karachi via Dera Ismail Khan and northern Sindh. The Kohat Tunnel south of Peshawar provides access to the city of Kohat along the Indus Highway.
Motorways
Peshawar is connected to Islamabad and Rawalpindi by the 155 kilometre long M-1 Motorway. The motorway also links Peshawar to major cities in the province, such as Charsadda and Mardan. The M-1 motorway continues onwards to Lahore as part of the M-2 motorway.
Pakistan's motorway network links Peshawar to Faisalabad by the M-4 Motorway, while a new motorway network to Karachi is being built as part of the China Pakistan Economic Corridor.
The Hazara Motorway is being constructed as part of CPEC, providing controlled-access motorway travel all the way to Mansehra and Thakot via the M-1 and Hazara Motorways.
Rail
Peshawar Cantonment railway station serves as the terminus for Pakistan's -long Main Line-1 railway that connects the city to the port city of Karachi and passes through the Peshawar City railway station. The Peshawar to Karachi route is served by the Awam Express, Khushhal Khan Khattak Express, and the Khyber Mail services.
The entire Main Line-1 railway track between Karachi and Peshawar is to be overhauled at a cost of $3.65 billion for the first phase of the project, with completion by 2021. Upgrading the railway line will permit train travel at speeds of 160 kilometres per hour, versus the average speed currently possible on existing tracks.
Peshawar was also once the terminus of the Khyber Train Safari, a tourist-oriented train that provided rail access to Landi Kotal. The service was discontinued as the security situation west of Peshawar deteriorated with the beginning of the region's Taliban insurgency.
Air
Peshawar is served by the Bacha Khan International Airport, located in the Peshawar Cantonment. The airport served 1,255,303 passengers between 2014 and 2015, the vast majority of whom were international travelers. The airport offers direct flights throughout Pakistan, as well as to Bahrain, Malaysia, Qatar, Saudi Arabia, and the United Arab Emirates.
Public transit
BRT Peshawar is a modern, third-generation bus rapid transit service that began operations on 13 August 2020. It has 32 stations and 220 buses, covering the corridor from Chamkani to Karkhano Market, and has replaced Peshawar's old, chaotic, dilapidated, and inadequate transportation system. The system is mostly at grade, with four kilometres of elevated sections and 3.5 kilometres of underpasses. BRT Peshawar is also complemented by a feeder system, with an additional 100 stations along those feeder lines.
Intercity bus
Peshawar is well-served by private buses (locally referred to as "flying coaches") and vans that offer frequent connections throughout Khyber Pakhtunkhwa, as well as to all major cities of Pakistan. The city's Daewoo Express bus terminal is located along the G.T. Road adjacent to the departure points for several other transportation companies.
Administration
Civic government
Politics
Peshawar has historically served as the political centre of the region, and is currently the capital city of Khyber Pakhtunkhwa province. The city and province have been historically regarded to be strongholds of the Awami National Party – a secular left-wing and moderate-nationalist party. The Pakistan Peoples Party had also enjoyed considerable support in the province due to its socialist agenda.
Despite being a centre for leftist politics in Khyber Pakhtunkhwa, Peshawar is still generally known throughout Pakistan for its social conservatism. Sunni Muslims in the city are regarded to be socially conservative, while the city's Shia population is considered to be more socially liberal.
A plurality of voters in Khyber Pakhtunkhwa province, of which Peshawar is the capital, elected one of Pakistan's only religiously-based provincial governments during the period of military dictatorship of Pervez Musharraf. A ground-swell of anti-American sentiment after the 2001 United States invasion of Afghanistan contributed to the Islamist coalition's victory.
The Islamists introduced a range of social restrictions following the election of the Islamist Muttahida Majlis-e-Amal coalition in 2002, though Islamic Shariah law was never fully enacted. Restrictions on public musical performances were introduced, as well as a ban prohibiting music from being played in any public place, including on public transportation – which led to the creation of a thriving underground music scene in Peshawar. In 2005, the coalition successfully passed the "Prohibition of Use of Women in Photograph Bill, 2005," leading to the removal of all public advertisements in Peshawar that featured women.
The religious coalition was swept out of power by the secular and leftist Awami National Party in elections after the fall of Musharraf in 2008, leading to the removal of the MMA's socially conservative laws. 62% of eligible voters voted in the election. The Awami National Party was targeted by Taliban militants, with hundreds of its members having been assassinated by the Pakistani Taliban.
In 2013, the centrist Pakistan Tehreek-e-Insaf was elected to power in the province on an anti-corruption platform. Peshawar city recorded a voter turnout of 80% for the 2013 elections.
Municipal services
86% of Peshawar's households had access to municipal piped water as of 2015, though 39% of households purchased water from private companies in that year.
42% of Peshawar households were connected to municipal sewerage as of 2015.
Culture
Music
After the 2002 Islamist government implemented restrictions on public musical performances, a thriving underground music scene took root in Peshawar. After the start of Pakistan's Taliban insurgency in 2007–2008, militants began targeting members of Peshawar's cultural establishment. By 2007, Taliban militants began a widespread campaign of bombings against music and video shops across the Peshawar region, leading to the closure of many others. In 2009, Pashto musical artist Ayman Udas was assassinated by Taliban militants on the city's outskirts. In June 2012, a Pashto singer, Ghazala Javed, and her father were killed in Peshawar, after they had fled rural Khyber Pakhtunkhwa for the relative security of Peshawar.
Musicians began to return to the city by 2016, with a security environment greatly improved following the Operation Zarb-e-Azb in 2014 to eradicate militancy in the country. The provincial government in 2016 announced a monthly income of $300 to 500 musicians in order to help support their work, as well as a $5 million fund to "revive the rich cultural heritage of the province".
Museums
The Peshawar Museum was founded in 1907 in memory of Queen Victoria. The building features an amalgamation of British, South Asian, Hindu, Buddhist and Mughal Islamic architectural styles. The museum's collection has almost 14,000 items, and is well known for its collection of Greco-Buddhist art. The museum's ancient collection features pieces from the Gandharan, Kushan, Parthian, and Indo-Scythian periods.
Notable people
Education
Numerous educational institutes — schools, colleges and universities — are located in Peshawar. 21.6% of children between the ages of 5 and 9 were not enrolled in any school in 2013, while 16.6% of children in the 10 to 14 age range were out of school.
Currently, Peshawar has universities for all major disciplines, ranging from humanities, general sciences, engineering, medicine and agriculture to management sciences. The first public-sector university, the University of Peshawar (UOP), was established in October 1950 by the first Prime Minister of Pakistan. The University of Engineering and Technology, Peshawar was established in 1980, while Agriculture University Peshawar started working in 1981. The first private-sector university, CECOS University of IT and Emerging Sciences, was established in 1986. The Institute of Management Sciences started functioning in 1995 and became a degree-awarding institution in 2005.
There are currently nine medical colleges in Peshawar, two in the public sector and seven in the private sector. The first medical college, Khyber Medical College, was established in 1954 as part of the University of Peshawar. The first medical university, Khyber Medical University, and a women-only medical college, Khyber Girls Medical College, were established in 2007.
At the start of the 21st century, a host of new private sector universities started working in Peshawar. Qurtuba University, Sarhad University of Science and IT, Fast University, Peshawar Campus and City University of Science and IT were established in 2001 while Gandhara University was inaugurated in 2002 and Abasyn University in 2007.
Shaheed Benazir Bhutto Women University, the first women's university of Peshawar, started working in 2009, while the private-sector IQRA National University was established in 2012.
Apart from a good range of universities, Peshawar hosts a number of high-quality further-education (post-school) institutes. The most renowned are Edwardes College, founded in 1900 and named after Herbert Edwardes, which is the oldest college in the province, and Islamia College Peshawar, which was established in 1913. Islamia College became a university and was renamed Islamia College University in 2008.
The following is a list of some of the public and private universities in Peshawar:
Abasyn University (Abasyn University, Peshawar)
Agricultural University (Peshawar)
CECOS University of IT and Emerging Sciences
City University of Science and Information Technology, Peshawar
Frontier Women University
Gandhara University
IMSciences (Institute of Management Sciences)
Iqra National University, Peshawar (formerly Peshawar Campus of Iqra University Karachi)
Islamia College University
Khyber Medical University
National University of Computer and Emerging Sciences, Peshawar Campus (NU-FAST)
Preston University
Qurtuba University (Qurtuba University of Science & Information Technology)
Sarhad University of Science and Information Technology
University of Engineering and Technology, Peshawar
University of Peshawar
Landmarks
The following is a list of other significant landmarks in the city that still exist in the 21st century:
General
Governor's House
Peshawar Garrison Club – situated on Sir Syed Road near the Mall
Kotla Mohsin Khan – the residence of Mazullah Khan, 17th-century Pashto poet
Qissa Khwani Bazaar
Kapoor Haveli – former residence of the famous actor Prithviraj Kapoor
Forts
Bala Hisar Fort
Colonial monuments
Chowk Yadgar (formerly the "Hastings Memorial")
Cunningham clock tower – built in 1900 and called "Ghanta Ghar"
Buddhist
Gorkhatri – an ancient site of Buddha's alms or begging bowl, and later the headquarters of Syed Ahmad Shaheed and of Governor Avitabile
Pashto Academy – the site of an ancient Buddhist university
Shahji ki Dheri – the site of the famous Kanishka stupa
Hindu
Panch Tirath – an ancient Hindu site with five sacred ponds
Gorkhatri – sacred site for Hindu yogis
Guru Gorkhnath temple
Aasamai temple – near Lady Reading Hospital (LRH)
Sikh
Sikh Gurudwara at Jogan Shah
Parks
Army Stadium
Wazir Bagh – laid out in 1802 by Fatteh Khan, Prime Minister of Shah Mahmud Khan
Ali Mardan Khan Gardens (also known as Khalid bin Waleed Park) – formerly named "Company Bagh"
Shahi Bagh – a small portion constitutes the site of Arbab Niaz Stadium
Jinnah Park – A park on GT Road opposite Balahisar fort
Tatara Park – A Park located in Hayatabad
Bagh e Naran – A large park in Hayatabad. A portion of the park also has a Zoo.
Mosques
Mohabbat Khan Mosque
Qasim Ali Khan Mosque
Museums
Peshawar Museum
Zoo
Peshawar Zoo
Sports
There is a host of sporting facilities in Peshawar. The most renowned are Arbab Niaz Stadium, the international cricket ground of Peshawar, and Qayyum Stadium, a multi-sports facility located in Peshawar Cantonment.
Cricket is the most popular sport in Peshawar, with Arbab Niaz Stadium as the main ground, coupled with a cricket academy. There is also a small cricket ground, the Peshawar Gymkhana Ground, a popular club cricket venue located adjacent to Arbab Niaz Stadium. The oldest international cricket ground in Peshawar, however, is the Peshawar Club Ground, which hosted the first-ever Test match between Pakistan and India in 1955. Peshawar Zalmi represents the city in the Pakistan Super League.
In 1975, the city's first sports complex, Qayyum Stadium, was built in Peshawar, while the Hayatabad Sports Complex was built in the early 1990s. Both Qayyum Stadium and the Hayatabad Sports Complex are multi-sports complexes with facilities for all major indoor and outdoor sports, including football, field hockey, squash, swimming, a gymnasium, a board-games section, wrestling, boxing and badminton. In 1991, Qayyum Stadium hosted a Barcelona Olympics qualifier football match between Pakistan and Qatar, and it also hosted the National Games in 2010. Hockey and squash are also popular in Peshawar.
Professional sports teams from Peshawar
Twin towns and sister cities
Peshawar is twinned with:
Makassar, Indonesia
See also
Peshawari chappal
Peshawari turban
Karkhano
Kushan Empire
Kanishka
Bacha Khan
Khudai Khidmatgar
2014 Peshawar school attack
2020 Peshawar school bombing
Chapli Kabab
References
Bibliography
Ahmad, Aisha and Boase, Roger. 2003. "Pashtun Tales from the Pakistan-Afghan Frontier: From the Pakistan-Afghan Frontier." Saqi Books (1 March 2003). .
Beal, Samuel. 1884. "Si-Yu-Ki: Buddhist Records of the Western World, by Hiuen Tsiang." 2 vols. Trans. by Samuel Beal. London. Reprint: Delhi. Oriental Books Reprint Corporation. 1969.
Beal, Samuel. 1911. "The Life of Hiuen-Tsiang by the Shaman Hwui Li, with an Introduction containing an account of the Works of I-Tsing". Trans. by Samuel Beal. London. 1911. Reprint: Munshiram Manoharlal, New Delhi. 1973.
Dani, Ahmad Hasan. 1985. "Peshawar: Historic city of the Frontier" Sang-e-Meel Publications (1995). .
Dobbins, K. Walton. 1971. "The Stūpa and Vihāra of Kanishka I". The Asiatic Society of Bengal Monograph Series, Vol. XVIII. Calcutta.
Elphinstone, Mountstuart. 1815. "An account of the Kingdom of Caubul and its dependencies in Persia, Tartary, and India; comprising a view of the Afghaun nation." Akadem. Druck- u. Verlagsanst (1969).
Foucher, M. A. 1901. "Notes sur la geographie ancienne du Gandhâra (commentaire à un chaptaire de Hiuen-Tsang)." BEFEO No. 4, Oct. 1901, pp. 322–369.
Hargreaves, H. (1910–11): "Excavations at Shāh-jī-kī Dhērī"; Archaeological Survey of India, 1910–11, pp. 25–32.
Hill, John E. 2003. "Annotated Translation of the Chapter on the Western Regions according to the Hou Hanshu." 2nd Draft Edition.
Hill, John E. 2004. "The Peoples of the West from the Weilue" 魏略 by Yu Huan 魚豢: A Third Century Chinese Account Composed between 239 and 265 CE. Draft annotated English translation.
Hopkirk, Peter. 1984. "The Great Game: The Struggle for Empire in Central Asia" Kodansha Globe; Reprint edition. .
Moorcroft, William and Trebeck, George. 1841. "Travels in the Himalayan Provinces of Hindustan and the Panjab; in Ladakh and Kashmir, in Peshawar, Kabul, Kunduz, and Bokhara... from 1819 to 1825", Vol. II. Reprint: New Delhi, Sagar Publications, 1971.
Reeves, Richard. 1985. "Passage to Peshawar: Pakistan: Between the Hindu Kush and the Arabian Sea." Holiday House September 1985. .
Imran, Imran Rashid. 2006. "Baghaat-i-Peshawar." Sarhad Conservation Network. July 2006.
Imran, Imran Rashid. 2012. "Peshawar – Faseel-e-Shehr aur Darwazay." Sarhad Conservation Network. March 2012.
External links
Peshawar
Populated places in Peshawar District
Cities in Khyber Pakhtunkhwa
Capitals of Pakistan
Metropolitan areas of Pakistan
Populated places along the Silk Road
Populated places established in the 5th millennium BC
5th-millennium BC establishments
Cities in Pakistan
|
18713681
|
https://en.wikipedia.org/wiki/Universidad%20de%20Manila
|
Universidad de Manila
|
Universidad de Manila, also referred to by its acronym UdM, is a public coeducational city-government-funded higher education institution in Manila, Philippines. It was founded on 26 April 1995 with the approval by Mayor Alfredo Lim of Manila City Ordinance (MCO) No. 7885, “An Ordinance Authorizing the City Government of Manila to Establish and Operate the Dalubhasaan ng Maynila (City College of Manila)”. It offers both academic and technical-vocational courses and programs. Its main campus is located at the grounds of Mehan Gardens, Ermita, in front of the Bonifacio Shrine (Kartilya ng Katipunan) and beside the Central Terminal station of LRT Line 1. It has a satellite campus (UDM Annex) along Carlos Palanca Street in Santa Cruz.
History
On 26 April 1995, Manila City Ordinance (MCO) No. 7885 “An Ordinance Authorizing the City Government of Manila to Establish and Operate the Dalubhasaan ng Maynila (City College of Manila) and for such other purposes” was approved by Mayor Alfredo Lim. The principal sponsors of MCO No. 7885 were Manila Councilors Nestor Ponce Jr., Humberto Basco and Bernardito Ang. The then-City College of Manila (CCM) was originally located at the 15-storey Old PNB Building in Escolta Street, Binondo. Sometime in 2003, the University established its Downtown Campus (UDM Annex) along Carlos Palanca Street in Santa Cruz.
On 26 June 2006, Mayor Lito Atienza approved MCO No. 8120 which renamed the City College of Manila to Universidad de Manila. UdM was also transferred from Binondo to its current location at Cecilia Muñoz Street corner Antonio J. Villegas Street, Mehan Gardens, Ermita.
On its 25th founding anniversary, Universidad de Manila was granted fiscal autonomy by virtue of MCO No. 8635. This ordinance was approved by Mayor Francisco “Isko Moreno” Domagoso on 27 April 2020 and it states that “the University shall be treated as an independent and institutional department of the City of Manila wherein the management of fiscal, human resources, and all other assets shall be within its control.”
Colleges
College of Arts and Sciences (CAS)
Bachelor of Arts in Communication
Bachelor of Arts in Political Science
Bachelor in Public Administration
Bachelor of Science in Mathematics Major in Computer Science
Bachelor of Science in Psychology
Bachelor of Science in Social Work
College of Business, Accountancy, and Economics (CBAE)
Bachelor of Science in Accounting Information System
Bachelor in Accounting Technology
Bachelor of Science in Entrepreneurship
Bachelor of Science in Entrepreneurship with Specialization in Supply Chain Management
Bachelor of Science in Accountancy
Bachelor of Science in Business Administration Major in Economics
Bachelor of Science in Business Administration Major in Human Resource Development Management
Bachelor of Science in Business Administration Major in Marketing Management
College of Criminal Justice (CCJ)
Bachelor of Science in Criminology
College of Teacher Education (CTE)
Bachelor in Secondary Education Major in General Science
Bachelor in Secondary Education Major in Mathematics
Bachelor in Secondary Education Major in English
Bachelor in Physical Education Major in School of Physical Education
College of Engineering and Technology (CET)
Bachelor in Electronics Engineering
Bachelor of Science in Computer Engineering
Bachelor in Information Technology with Specialization in Cybersecurity
Bachelor in Information Technology with Specialization in Data Science
Bachelor of Science in Information Technology
College of Health Science (CHS)
Bachelor of Science in Nursing
Bachelor of Science in Physical Therapy
Graduate Programs
College of Law (COL)
Juris Doctor with Thesis
Juris Doctor without Thesis
Institute for Graduate and Professional Studies (IGPS)
Master in Business Administration
Master of Science in Criminal Justice
Master in Public Management and Governance
Master of Arts in Education
Doctor of Philosophy
Technical and Vocational Education and Training
Center for Micro-credentialing and Industry Training (CMIT)
The UDM Center for Micro-credentialing and Industry Training was established on 19 June 2020 to offer short-term programs focused on specialized learning, in order to develop skill sets aligned with the interests of the student.
UDM recognized the shift in workplace structure and culture that became most prominent during the onset of the COVID-19 pandemic. With an understanding of the needs of the city and different industries, the following micro-credential offerings are designed to produce skilled and capable graduates who will thrive in an Industry 4.0 workplace.
Android Development
Bookkeeping
Bread and Pastry
Catering, Food and Beverage Service
Graphic Design
Photography
Programming - Java
Programming - Python
Web Development
Wood Technology
Coffee Apprenticeship
2D and 3D Animation
Educational institutions established in 1995
Education in Ermita
Local colleges and universities in Manila
Universities and colleges in Manila
1995 establishments in the Philippines
|
12547034
|
https://en.wikipedia.org/wiki/Instant%20Music%20%28software%29
|
Instant Music (software)
|
Instant Music is interactive music software released by Electronic Arts (EA) in 1986. It was developed first for the Amiga, then ported to other platforms such as the Apple IIGS and Commodore 64.
Instant Music allows the user to make variations on songs played by the software. The program comes with several songs from a few genres. As the software plays a song, the player, by moving the mouse up and down (or the joystick in some versions), can make variations in the current tones. The software ensures that any variations do not result in disharmonious tunes.
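The harmony guard described above can be pictured as snapping continuous input to a fixed set of allowed pitches. The following sketch is a hypothetical illustration of that idea, not Instant Music's actual algorithm; the scale, note range, and mapping are assumptions made for demonstration.

```python
# Illustrative sketch of constraining user input to "safe" notes, in the spirit
# of the program's harmony guard. The scale choice and mapping are assumptions
# for demonstration, not Instant Music's actual algorithm.

C_MAJOR_PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within an octave

def snap_to_scale(mouse_pos, lowest_midi=48, span_octaves=2,
                  scale=C_MAJOR_PENTATONIC):
    """Map a mouse position in [0.0, 1.0] onto the nearest allowed MIDI note."""
    allowed = [lowest_midi + 12 * octave + step
               for octave in range(span_octaves)
               for step in scale]
    index = round(mouse_pos * (len(allowed) - 1))
    return allowed[index]

# Moving the mouse from bottom to top walks up the pentatonic scale,
# so arbitrary input never lands on an out-of-scale (dissonant) note.
print([snap_to_scale(p / 10) for p in range(11)])
```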
Instant Music was created and developed by Robert Campbell. The prototype was created on the Commodore 64 and EA producer Stewart Bonn championed its inclusion in EA's product offerings for the (then) upcoming Amiga platform.
Reception
In December 1986, Bruce Webster's column in Byte magazine selected Instant Music as product of the month, calling it "an outstanding program." Webster praised Instant Music for turning the Amiga into "an intelligent electronic instrument" that allows "even an untalented hack" to create real music without much effort. Webster's only criticism was the key disk copy protection.
AmigaWorld gave Instant Music a 1986 Editor's Choice Award, calling it "the most fun you can have with your Amiga and your ears." AmigaWorld praised Instant Music's ability to let non-musicians create impressive music. AmigaWorld also awarded Instant Music two tongue-in-cheek awards: "The Roll Over Beethoven Award [...] For turning the complete idiot into a composer" and "Bob Ryan's Best Program in the History of Creation Award."
Compute! stated that Instant Music "breaks new ground in computer entertainment software" by making it easy for nonmusicians to play music, adding that "it really must be seen to be believed." The reviewer reported that he had begun to play his electric guitar again with the Amiga as accompaniment.
Instant Music was mentioned in the Computer Music Journal as an example of an "intelligent instrument".
References
External links
Instant Music screenshot from the Apple IIGS
Computer music software
|
154457
|
https://en.wikipedia.org/wiki/Internet%20censorship%20in%20China
|
Internet censorship in China
|
Internet censorship in the People's Republic of China (PRC) affects both publishing and viewing online material. Many controversial events are censored from news coverage, preventing many Chinese citizens from knowing about the actions of their government, and severely restricting freedom of the press. Such measures, including the complete blockage of various websites, inspired the policy's nickname, the "Great Firewall of China", which blocks websites such as Wikipedia, YouTube, and Google. Methods used to block websites and pages include DNS spoofing, blocking access to IP addresses, analyzing and filtering URLs, packet inspection, and resetting connections.
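The blocking methods listed above produce different outward symptoms: DNS tampering returns an unexpected or unroutable address, IP-level blocking typically shows up as a silent timeout, and injected resets abort a connection mid-handshake. The sketch below is a minimal, illustrative probe that only classifies these symptoms; the hostname is a placeholder, and resets or timeouts can have many benign causes, so it is not a measurement of any particular filtering system.

```python
# A minimal, illustrative probe for two symptoms often associated with
# network-level filtering: unexpected DNS answers and aborted TCP connections.
# This is a sketch, not a measurement tool; the test hostname is a placeholder.

import socket

def resolve_with_system(hostname):
    """Ask the locally configured resolver for an address."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def tcp_connect_outcome(ip, port=443, timeout=5.0):
    """Classify what happens when we try to open a TCP connection."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((ip, port))
        return "connected"
    except ConnectionResetError:
        # A reset arriving mid-handshake is one signature of in-path injection,
        # though ordinary servers and middleboxes can also send resets.
        return "reset"
    except socket.timeout:
        # Silent packet dropping (e.g. IP blackholing) usually appears as a timeout.
        return "timeout"
    except OSError as exc:
        return f"error: {exc}"
    finally:
        s.close()

if __name__ == "__main__":
    host = "example.com"  # placeholder hostname
    ip = resolve_with_system(host)
    print("resolved address:", ip)
    if ip:
        print("connection outcome:", tcp_connect_outcome(ip))
```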
China's Internet censorship is more comprehensive and sophisticated than that of any other country in the world. The government blocks website content and monitors Internet access. As required by the government, major Internet platforms in China have established elaborate self-censorship mechanisms. As of 2019, more than sixty online restrictions had been created by the Government of China and implemented by provincial branches of state-owned ISPs, companies and organizations. Some companies have hired teams and invested in powerful artificial intelligence algorithms to police and remove illegal online content.
Amnesty International states that China has "the largest recorded number of imprisoned journalists and cyber-dissidents in the world" and Reporters Without Borders stated in 2010 and 2012 that "China is the world's biggest prison for netizens."
About 904 million people have access to the Internet in China. Commonly alleged user offenses include communicating with organized groups abroad, signing controversial online petitions, and forcefully calling for government reform. The government has escalated its efforts to reduce coverage and commentary that is critical of the regime after a series of large anti-pollution and anti-corruption protests, and in the regions of Xinjiang and Tibet, which have been subject to terrorism. Many of these protests, as well as ethnic riots, were organized or publicized using instant messaging services, chat rooms, and text messages. China's Internet police force was reported by official state media to be 2 million strong in 2013.
China's special administrative regions of Hong Kong and Macau are outside the Great Firewall. However, it was reported that the central government authorities have been closely monitoring Internet use in these regions (see Internet censorship in Hong Kong).
Background
The political and ideological background of Internet censorship is considered to be one of Deng Xiaoping's favorite sayings in the early 1980s: "If you open a window for fresh air, you have to expect some flies to blow in." The saying is related to a period of the Chinese economic reform that became known as the "socialist market economy". Superseding the political ideologies of the Cultural Revolution, the reform led China towards a market economy, opening it up to foreign investors. Nonetheless, the Chinese Communist Party (CCP) wished to protect its values and political ideas by "swatting flies" of other ideologies, with a particular emphasis on suppressing movements that could potentially threaten the stability of the country.
The Internet first arrived in the country in 1994. Since its arrival and the gradual rise of availability, the Internet has become a common communication platform and an important tool for sharing information. Just as the Chinese government had expected, the share of the population using the Internet soared from less than one percent in 1994, when the Internet was introduced, to 28.8 percent by 2009.
In 1998, the CCP feared that the China Democracy Party (CDP), organized in contravention of the “Four Cardinal Principles”, would breed a powerful new network that CCP party elites might not be able to control, and the CDP was immediately banned. That same year, the "Golden Shield project" was created. The first part of the project lasted eight years and was completed in 2006. The second part began in 2006 and ended in 2008. The Golden Shield project was a database project through which the government could access the records of each citizen and connect China's security organizations. The government had the power to delete any comments online that were considered harmful.
On 6 December 2002, 300 members in charge of the Golden Shield project came from 31 provinces and cities across China to participate in a four-day inaugural "Comprehensive Exhibition on Chinese Information System". At the exhibition, many Western technology products including Internet security, video monitoring, and facial recognition systems were purchased. According to Amnesty International, around 30,000–50,000 Internet police have been employed by the Chinese government to enforce Internet laws.
The Chinese government has described censorship as the method to prevent and eliminate "risks in the ideological field from the Internet".
Legislative basis
The government of China defends its right to censor the Internet by claiming that this right extends from the country's own rules inside its borders. A white paper released in June 2010 reaffirmed the government's determination to govern the Internet within its borders under the jurisdiction of Chinese sovereignty. The document states, "Laws and regulations prohibit the spread of information that contains content subverting state power, undermining national unity [or] infringing upon national honor and interests." It adds that foreign individuals and firms can use the Internet in China, but they must abide by the country's laws.
The Central Government of China started its Internet censorship with three regulations. The first regulation was called the Temporary Regulation for the Management of Computer Information Network International Connection. The regulation was passed in the 42nd Standing Convention of the State Council on 23 January 1996. It was formally announced on 1 February 1996, and updated again on 20 May 1997. The first regulation required that Internet service providers be licensed and that Internet traffic go through ChinaNet, GBNet, CERNET or CSTNET. The second regulation was the Ordinance for Security Protection of Computer Information Systems. It was issued on 18 February 1994 by the State Council to give the responsibility of Internet security protection to the Ministry of Public Security.
Article 5 of the Computer Information Network and Internet Security, Protection, and Management Regulations
The Ordinance regulation further led to the Security Management Procedures in Internet Accessing issued by the Ministry of Public Security in December 1997. The regulation defined "harmful information" and "harmful activities" regarding Internet usage. Section Five of the Computer Information Network and Internet Security, Protection, and Management Regulations approved by the State Council on 11 December 1997 stated the following:
(The "units" stated above refer to work units () or more broadly, workplaces). As of 2021, the regulations are still active and govern the activities of Internet users online.
Interim Regulations of the PRC on the Management of International Networking of Computer Information
In 1996, the Ministry of Commerce created a set of regulations which prohibit connection to "international networks" or use of channels outside of those provided by official government service providers without prior approval or a license from the authorities. The Ministry of Posts and Telecommunications has since been superseded by the Ministry of Industry and Information Technology (MIIT). To this day, this regulation is still used to prosecute and fine users who connect to international networks or use VPNs.
State Council Order No. 292
In September 2000, State Council Order No. 292 created the first set of content restrictions for Internet content providers. China-based websites cannot link to overseas news websites or distribute news from overseas media without separate approval. Only "licensed print publishers" have the authority to deliver news online. These sites must obtain approval from state information offices and the State Council Information Agency. Non-licensed websites that wish to broadcast news may only publish information already released publicly by other news media. Article 11 of this order mentions that "content providers are responsible for ensuring the legality of any information disseminated through their services." Article 14 gives Government officials full access to any kind of sensitive information they wish from providers of Internet services.
Cybersecurity Law of the People's Republic of China
On 6 November 2017, the Standing Committee of the National People's Congress promulgated a cybersecurity law which, among other things, requires "network operators" to store data locally, hand over information when requested by state security organs, and open the software and hardware used by "critical information infrastructure" operators to national security review, potentially compromising source code and the security of encryption used by communications service providers. The law is an amalgamation of all previous regulations related to Internet use and online censorship and unifies and institutionalises the legislative framework governing cyber control and content censorship within the country. Article 12 states that persons using networks shall not "overturn the socialist system, incite separatism" or "break national unity", further institutionalising the suppression of dissent online.
Enforcement
In December 1997, The Public Security Minister, Zhu Entao, released new regulations to be enforced by the ministry that inflicted fines for "defaming government agencies, splitting the nation, and leaking state secrets." Violators could face a fine of up to CNY 15,000 (roughly US$1,800). Banning appeared to be mostly uncoordinated and ad hoc, with some websites allowed in one city, yet similar sites blocked in another. The blocks were often lifted for special occasions. For example, The New York Times was unblocked when reporters in a private interview with CCP General Secretary Jiang Zemin specifically asked about the block and he replied that he would look into the matter. During the APEC summit in Shanghai during 2001, normally-blocked media sources such as CNN, NBC, and the Washington Post became accessible. Since 2001, blocks on Western media sites have been further relaxed, and all three of the sites previously mentioned were accessible from mainland China. However, access to the New York Times was denied again in December 2008.
In the middle of 2005, China purchased over 200 routers from an American company, Cisco Systems, which enabled the Chinese government to use more advanced censorship technology. In February 2006, Google, in exchange for permission to install equipment on Chinese soil, blocked websites which the Chinese government deemed illegal. Google reversed this policy in 2010, after it suspected that a Google employee had passed information to the Chinese government and that backdoors had been inserted into its software.
In May 2011, the State Council Information Office announced the transfer of its offices which regulated the Internet to a new subordinate agency, the State Internet Information Office which would be responsible for regulating the Internet in China. The relationship of the new agency to other Internet regulation agencies in China was unclear from the announcement.
On 26 August 2014, the State Internet Information Office (SIIO) was formally authorized by the state council to regulate and supervise all Internet content. It later launched a website called the Cyberspace Administration of China (CAC) and the Office of the Central Leading Group for Cyberspace Affairs. In February 2014, the Central Internet Security and Informatization Leading Group was created in order to oversee cybersecurity and receive information from the CAC. Chairing the 2018 China Cyberspace Governance Conference on 20 and 21 April 2018, Xi Jinping, General Secretary of the Chinese Communist Party, committed to "fiercely crack down on criminal offenses including hacking, telecom fraud, and violation of citizens' privacy." The Conference comes on the eve of the First Digital China Summit, which was held at the Fuzhou Strait International Conference and Exhibition Centre in Fuzhou, the capital of Fujian Province.
On 4 January 2019, the CAC started a project to take down pornography, violence, bloody content, horror, gambling, defrauding, Internet rumors, superstition, invectives, parody, threats, and proliferation of "bad lifestyles" and "bad popular culture". On 10 January 2019, China Network Audiovisual Program Service Association announced a new regulation to censor short videos with controversial political or social content such as a "pessimistic outlook of millennials", "one night stands", "non-mainstream views of love and marriage" as well as previously prohibited content deemed politically sensitive.
China is planning to make deepfakes illegal, a move described as a way to prevent "parody and pornography."
In July 2019, the CAC announced a regulation stating that Internet information providers and users in China who seriously violate related laws and regulations will be subject to the Social Credit System blocklist. It also announced that providers and users who commit milder violations that do not meet that standard will be recorded in the "List to Focus".
Self-regulation
Internet censorship in China has been called "a panopticon that encourages self-censorship through the perception that users are being watched." The enforcement (or threat of enforcement) of censorship creates a chilling effect where individuals and businesses willingly censor their own communications to avoid legal and economic repercussions. ISPs and other service providers are legally responsible for customers' conduct. The service providers have assumed an editorial role concerning customer content, thus becoming publishers and legally responsible for libel and other torts committed by customers. Some hotels in China advise Internet users to obey local Chinese Internet access rules by leaving a list of Internet rules and guidelines near the computers. These rules, among other things, forbid linking to politically unacceptable messages and inform Internet users that if they do, they will have to face legal consequences.
On 16 March 2002, the Internet Society of China, a self-governing Chinese Internet industry body, launched the Public Pledge on Self-Discipline for the Chinese Internet Industry, an agreement between the Chinese Internet industry regulator and companies that operate sites in China. In signing the agreement, web companies pledge to identify and prevent the transmission of information that Chinese authorities deem objectionable, including information that "breaks laws or spreads superstition or obscenity", or that "may jeopardize state security and disrupt social stability". As of 2006, the pledge had been signed by more than 3,000 entities operating websites in China.
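The pledge's obligation to "identify and prevent the transmission" of objectionable information is, in practice, often met with automated keyword screening applied before a post is published. The following sketch is a hypothetical illustration of such a pre-publication filter; the term lists and moderation actions are placeholders, not any platform's real configuration.

```python
# Minimal sketch of pre-publication keyword screening of the kind platforms
# use to satisfy content rules. The blocklist, review list, and actions
# below are hypothetical placeholders, not any platform's real configuration.

BLOCK_TERMS = {"blocked-term-1", "blocked-term-2"}       # reject outright
REVIEW_TERMS = {"sensitive-term-1", "sensitive-term-2"}  # hold for human review

def screen_post(text):
    """Return one of 'publish', 'hold_for_review', or 'reject'."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCK_TERMS):
        return "reject"
    if any(term in lowered for term in REVIEW_TERMS):
        return "hold_for_review"
    return "publish"

if __name__ == "__main__":
    for post in ["an ordinary comment",
                 "a comment containing sensitive-term-1",
                 "a comment containing blocked-term-2"]:
        print(screen_post(post), "-", post)
```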
Use of service providers
Although the government does not have the physical resources to monitor all Internet chat rooms and forums, the threat of being shut down has caused Internet content providers to employ internal staff, colloquially known as "big mamas", who stop and remove forum comments which may be politically sensitive. In Shenzhen, these duties are partly taken over by a pair of police-created cartoon characters, Jingjing and Chacha, who help extend the online "police presence" of the Shenzhen authorities. These cartoons spread across the nation in 2007 reminding Internet users that they are being watched and should avoid posting "sensitive" or "harmful" material on the Internet.
However, Internet content providers have adopted some counter-strategies. One is to post politically sensitive stories and remove them only when the government complains. In the hours or days in which the story is available online, people read it, and by the time the story is taken down, the information is already public. One notable case in which this occurred was in response to a school explosion in 2001, when local officials tried to suppress the fact that the explosion had resulted from children illegally producing fireworks.
On 11 July 2003, the Chinese government started granting licenses to businesses to open Internet cafe chains. Business analysts and foreign Internet operators regard the licenses as intended to clamp down on information deemed harmful to the Chinese government. In July 2007, the city of Xiamen announced it would ban anonymous online postings after text messages and online communications were used to rally protests against a proposed chemical plant in the city. Internet users would be required to provide proof of identity when posting messages on the more than 100,000 websites registered in Xiamen.
The Chinese government issued new rules on 28 December 2012, requiring Internet users to provide their real names to service providers, while assigning Internet companies greater responsibility for deleting forbidden postings and reporting them to the authorities. The new regulations, issued by the Standing Committee of the National People's Congress, allow Internet users to continue to adopt pseudonyms for their online postings, but only if they first provide their real names to service providers, a measure that could chill some of the vibrant discourse on the country's Twitter-like microblogs. The authorities periodically detain and even jail Internet users for politically sensitive comments, such as calls for a multiparty democracy or accusations of impropriety by local officials.
Arrests
Fines and short detentions are increasingly being used to punish those who spread undesirable information through the various Internet formats, as this is seen as a risk to social stability.
In 2001, Wang Xiaoning and other Chinese activists were arrested and sentenced to 10 years in prison for using a Yahoo! email account to post anonymous writing to an Internet mailing list. On 23 July 2008, the family of Liu Shaokun was notified that he had been sentenced to one year re-education through labor for "inciting a disturbance". As a teacher in Sichuan province, he had taken photographs of collapsed schools and posted these photos online. On 18 July 2008, Huang Qi was formally arrested on suspicion of illegally possessing state secrets. Huang had spoken with the foreign press and posted information on his website about the plight of parents who had lost children in collapsed schools. Shi Tao, a Chinese journalist, used his Yahoo! email account to send a message to a U.S.-based pro-democracy website. In his email, he summarized a government order directing media organizations in China to downplay the upcoming 15th anniversary of the 1989 crackdown on pro-democracy activists. Police arrested him in November 2004, charging him with "illegally providing state secrets to foreign entities". In April 2005, he was sentenced to 10 years' imprisonment and two years' subsequent deprivation of his political rights.
In mid-2013 police across China arrested hundreds of people accused of spreading false rumors online. The arrest targeted microbloggers who accused CCP officials of corruption, venality, and sexual escapades. The crackdown was intended to disrupt online networks of like-minded people whose ideas could challenge the authority of the CCP. Some of China's most popular microbloggers were arrested. In September 2013, China's highest court and prosecution office issued guidelines that define and outline penalties for publishing online rumors and slander. The rules give some protection to citizens who accuse officials of corruption, but a slanderous message forwarded more than 500 times or read more than 5,000 times could result in up to three years in prison.
According to the 2020 World Press Freedom Index, compiled by Reporters Without Borders, China is the world's biggest jailer of journalists, holding around 100 in detention. In February 2020, China arrested two of its citizens for taking it upon themselves to cover the COVID-19 pandemic.
Technical implementation
Current methods
The Great Firewall has used numerous methods to block content, including IP address blocking, DNS spoofing, deep packet inspection that looks for plaintext signatures within protocol handshakes in order to identify and throttle or block them, and, more recently, active probing.
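One way outside observers probe for the DNS-spoofing behaviour described above is to compare the answers returned by a resolver reachable inside the firewall with those of a resolver outside it. The following is a minimal illustrative sketch, not an official measurement tool: it assumes the third-party dnspython package, the hostname and resolver addresses are examples only, and differing answers can also be caused by ordinary CDN geo-routing, so a mismatch is a hint rather than proof of tampering.

```python
# Minimal sketch: compare DNS answers for one hostname from two resolvers
# to flag a possible spoofed response. Requires the "dnspython" package.
import dns.resolver

def answers(hostname: str, nameserver: str) -> set[str]:
    """Return the set of A records a specific nameserver gives for hostname."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        return {rr.address for rr in resolver.resolve(hostname, "A")}
    except Exception as exc:                  # NXDOMAIN, timeout, SERVFAIL, ...
        return {f"error: {exc.__class__.__name__}"}

if __name__ == "__main__":
    host = "www.wikipedia.org"                # example hostname
    local = answers(host, "114.114.114.114")  # example resolver inside China
    remote = answers(host, "8.8.8.8")         # example resolver outside
    print("local :", local)
    print("remote:", remote)
    if local != remote:
        print("Answers differ - possible DNS spoofing (or ordinary geo-routing).")
```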
Future projects
The Golden Shield Project is owned by the Ministry of Public Security of the People's Republic of China (MPS). It started in 1998, began processing in November 2003, and the first part of the project passed the national inspection on 16 November 2006 in Beijing. According to MPS, its purpose is to construct a communication network and computer information system for police to improve their capability and efficiency. By 2002 the preliminary work of the Golden Shield Project had cost US$800 million (equivalent to RMB 5,000 million or €620 million). Greg Walton, a freelance researcher, said that the aim of the Golden Shield is to establish a "gigantic online database" that would include "speech and face recognition, closed-circuit television... [and] credit records" as well as traditional Internet use records.
A notice issued by the Ministry of Industry and Information Technology on 19 May 2009 stated that, as of 1 July 2009, manufacturers must ship machines to be sold in mainland China with the Green Dam Youth Escort software pre-installed. On 14 August 2009, Li Yizhong, minister of industry and information technology, announced that computer manufacturers and retailers were no longer obliged to ship the software with new computers for home or business use, but that schools, Internet cafes and other public-use computers would still be required to run the software.
A senior official of the Internet Affairs Bureau of the State Council Information Office said the software's only purpose was "to filter pornography on the Internet". The general manager of Jinhui, which developed Green Dam, said: "Our software is simply not capable of spying on Internet users, it is only a filter." Human rights advocates in China have criticized the software for being "a thinly concealed attempt by the government to expand censorship". Online polls conducted on Sina, Netease, Tencent, Sohu, and Southern Metropolis Daily revealed over 70% rejection of the software by netizens. However, Xinhua commented that "support [for Green Dam] largely stems from end users, opposing opinions primarily come from a minority of media outlets and businesses."
Targets of censorship
Targeted content
According to a Harvard study, at least 18,000 websites were blocked from within mainland China in 2002, including 12 out of the Top 100 Global Websites. The Chinese-sponsored news agency, Xinhua, stated that censorship targets only "superstitious, pornographic, violence-related, gambling, and other harmful information." This appears questionable, as the e-mail provider Gmail is blocked, and it cannot be said to fall into any of these categories. On the other hand, websites centered on the following political topics are often censored: Falun Gong, police brutality, 1989 Tiananmen Square protests, freedom of speech, democracy, Taiwan independence, the Tibetan independence movement, and the Tuidang movement. Foreign media websites are occasionally blocked. As of 2014 the New York Times, the BBC, and Bloomberg News are blocked indefinitely.
Testing performed by Freedom House in 2011 confirmed that material written by or about activist bloggers is removed from the Chinese Internet in a practice that has been termed "cyber-disappearance".
A 2012 study of social media sites by other Harvard researchers found that 13% of Internet posts were blocked. The blocking focused mainly on any form of collective action (anything from false rumors driving riots to protest organizers to large parties for fun), pornography, and criticism of the censors. However, significant criticisms of the government were not blocked when made separately from calls for collective action. Another study has shown comments on social media that criticize the state, its leaders, and their policies are usually published, but posts with collective action potential will be more likely to be censored whether they are against the state or not.
Many large Japanese websites were blocked from the afternoon of 15 June 2012 (UTC+08:00) to the morning of 17 June 2012 (UTC+08:00), including Google Japan, Yahoo! Japan, Amazon Japan, Excite, Yomiuri News, Sponichi News and Nikkei BP Japan.
Chinese censors have been relatively reluctant to block websites where there might be significant economic consequences. For example, a block of GitHub was reversed after widespread complaints from the Chinese software developer community. In November 2013 after the Chinese services of Reuters and the Wall Street Journal were blocked, greatfire.org mirrored the Reuters website to an Amazon.com domain in such a way that it could not be shut down without shutting off domestic access to all of Amazon's cloud storage service.
For one month beginning 17 November 2014, ProPublica tested whether the homepages of 18 international news organizations were accessible to browsers inside China, and found the most consistently blocked were Bloomberg, The New York Times, the South China Morning Post, The Wall Street Journal, Facebook, and Twitter. Internet censorship and surveillance are tightly implemented in China, blocking social websites like Gmail, Google, YouTube, Facebook, Instagram, and others. The excessive censorship practices of the Great Firewall of China have now engulfed VPN service providers as well.
Search engines
One part of the block is to filter the search results for certain terms on Chinese search engines. These include both international search engines (for example, yahoo.com.cn, Bing, and Google China) and domestic ones (for example, Sogou, 360 Search and Baidu). Attempting to search for censored keywords in these Chinese search engines yields few or no results. Previously, google.cn displayed the following at the bottom of the page: "According to the local laws, regulations and policies, part of the searching result is not shown." When Google did business in the country, it set up computer systems inside China that tried to access websites outside the country. If a site was inaccessible, it was added to Google China's blocklist.
In addition, a connection in which censored terms appear intensively may be closed by the Great Firewall and cannot be re-established for several minutes. This affects all network connections, including HTTP and POP, but the reset is more likely to occur during searching. Before the search engines censored themselves, many search engines had been blocked, namely Google and AltaVista. Technorati, a search engine for blogs, has been blocked. Different search engines implement the mandated censorship in different ways. For example, the search engine Bing is reported to censor search results for searches conducted in simplified Chinese characters (used in China), but not in traditional Chinese characters (used in Hong Kong, Taiwan and Macau).
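The keyword-triggered connection resets described above can be observed empirically. The sketch below is an illustration of the behaviour, not a vetted measurement tool: it assumes it is run from inside the censored network, and the endpoint URL and search terms are placeholders.

```python
# Minimal sketch: request a URL with ?q=<term> and report whether the server
# answered normally or the connection was torn down with a TCP reset.
import socket
import urllib.error
import urllib.parse
import urllib.request

def probe(base_url: str, term: str, timeout: float = 10.0) -> str:
    """Describe how the connection ended for a request carrying one query term."""
    url = base_url + "?" + urllib.parse.urlencode({"q": term})
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            return f"HTTP {resp.getcode()}"
    except ConnectionResetError:
        return "connection reset (consistent with keyword-triggered RST injection)"
    except urllib.error.URLError as exc:
        if isinstance(exc.reason, ConnectionResetError):
            return "connection reset (consistent with keyword-triggered RST injection)"
        return f"other failure: {exc.reason}"
    except socket.timeout:
        return "timed out"

if __name__ == "__main__":
    # Placeholder endpoint and terms; replace with whatever is being tested.
    for term in ("weather", "some-sensitive-term"):
        print(term, "->", probe("http://example.com/search", term))
```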
Discussion forums
Several university Bulletin Board Systems have been closed down or had their public access restricted since 2004, including the SMTH BBS and the YTHT BBS.
In September 2007, some data centers were shut down indiscriminately for providing interactive features such as blogs and forums. CBS reports an estimate that half the interactive sites hosted in China were blocked.
Coinciding with the twentieth anniversary of the government suppression of the pro-democracy protests in Tiananmen Square, the government ordered Internet portals, forums and discussion groups to shut down their servers for maintenance between 3 and 6 June 2009. The day before the mass shut-down, Chinese users of Twitter, Hotmail and Flickr, among others, reported a widespread inability to access these services.
Social media websites
The censorship of individual social media posts in China usually occurs in two circumstances:
1. Corporations and the government hire censors to read individual social media posts and manually take down those that violate policy. (Although the government and media often use the microblogging service Sina Weibo to spread ideas and monitor corruption, it is also supervised and self-censored by 700 Sina censors.)
2. Posts are first auto-blocked by keyword filters, with censors later deciding which of the held posts to publish; a minimal sketch of this kind of filtering follows this list.
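The sketch below illustrates the keyword pre-filtering described in the second item; the blocked terms are purely illustrative, and real systems combine far larger term lists with machine learning and human review.

```python
# Minimal sketch: posts matching any blocked pattern are held for manual
# review instead of being published immediately. Word list is illustrative.
import re

BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in ["example-banned-term", "另一个示例词"]]

def triage(post: str) -> str:
    """Return 'publish' or 'hold for review' depending on keyword hits."""
    if any(p.search(post) for p in BLOCKED_PATTERNS):
        return "hold for review"
    return "publish"

if __name__ == "__main__":
    for post in ["nice weather today", "this mentions example-banned-term"]:
        print(f"{post!r} -> {triage(post)}")
```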
In the second half of 2009, the social networking sites Facebook and Twitter were blocked, presumably because they contained social or political commentary (similar to LiveJournal in the above list). An example is the commentary on the July 2009 Ürümqi riots. Another suggested reason for the block is that activists can use them to organize themselves.
In 2010, Chinese human rights activist Liu Xiaobo became a forbidden topic in Chinese media due to his winning the 2010 Nobel Peace Prize. Keywords and images relating to the activist and his life were again blocked in July 2017, shortly after his death.
After the 2011 Wenzhou train collision, the government started emphasizing the danger of spreading "false rumours" (yaoyan), making the permissive use of Weibo and other social networks a matter of public debate.
In 2012, First Monday published an article on "political content censorship in social media, i.e., the active deletion of messages published by individuals." This academic study, which received extensive media coverage ("China's social networks hit by censorship, says study", BBC News, 9 March 2012), accumulated a dataset of 56 million messages sent on Sina Weibo from June through September 2011, and statistically analyzed them three months later, finding 212,583 deletions out of 1.3 million sampled, more than 16 percent. The study revealed that censors quickly deleted words with politically controversial meanings (e.g., qingci 请辞 "asking someone to resign", referring to calls for Railway Minister Sheng Guangzu to resign after the Wenzhou train collision on 23 July 2011), and also that the rate of message deletion was regionally anomalous (compare censorship rates of 53% in Tibet and 52% in Qinghai with 12% in Beijing and 11.4% in Shanghai). In another study, conducted by a research team led by political scientist Gary King, objectionable posts created by King's team on a social networking site were almost universally removed within 24 hours of their posting.
The comment areas of popular posts mentioning Vladimir Putin on Sina Weibo were closed during the 2017 G20 Hamburg summit in Germany. It is a rare example of a foreign leader being shielded from popular judgment on the Chinese Internet, a protection usually granted only to Chinese leaders.
We-media
Social media and messaging app WeChat had attracted many users from blocked networks. Though subject to state rules which saw individual posts removed, Tech in Asia reported in 2013 that certain "restricted words" had been blocked on WeChat globally. A crackdown in March 2014 deleted dozens of WeChat accounts, some of which were independent news channels with hundreds of thousands of followers. CNN reported that the blocks were related to laws banning the spread of political "rumors".
The state-run Xinhua News Agency reported in July 2020 that the CAC would conduct an intensive three-month investigation and cleanup of 13 media platforms, including WeChat.
SSL Protocols
In 2020, China abruptly began blocking websites that use TLS (Transport Layer Security) 1.3 with ESNI (Encrypted Server Name Indication), since ESNI makes it difficult, if not impossible, to identify the name of a website from the server name sent during the TLS handshake. Since May 2015, the Chinese-language Wikipedia has been blocked in mainland China. This was done after Wikipedia started to use HTTPS encryption, which made selective censorship more difficult.
VPN Protocols
Beginning in 2018, the Ministry of Industry and Information Technology (MIIT), in conjunction with the Cyberspace Administration of China (CAC), began a sweeping crackdown on VPN providers, ordering all major state-owned telecommunications providers, including China Telecom, China Mobile and China Unicom, to block VPN protocols, with access reserved for authorised users who have obtained permits beforehand and only for VPNs operated by state-owned telecommunications companies. In 2017, Apple also began removing VPN apps from its App Store in China at the behest of the Chinese government.
Specific examples of Internet censorship
1989 Tiananmen Square protests
The Chinese government censors Internet materials related to the 1989 Tiananmen Square protests and massacre. According to the government's 2010 white paper on the Internet in China, the government protects "the safe flow of internet information and actively guides people to manage websites under the law and use the internet in a wholesome and correct way". The government therefore prevents people on the Internet from "divulging state secrets, subverting state power and jeopardizing national unification; damaging state honor" and "disrupting social order and stability." Law-abiding Chinese websites such as Sina Weibo, one of the largest Chinese microblogging services, censor words related to the protests in their search engines. As of October 2012, Weibo's censored words include "Tank Man." The government also censors words that have a similar pronunciation or meaning to "4 June", the date of the government's violent crackdown. "陆肆", for example, is an alternative to "六四" (4 June). The government forbids remembrances of the protests. Sina Weibo's search engine, for example, censors Hong Kong lyricist Thomas Chow's song 自由花 ("The Flower of Freedom"), because attendees of the Vindicate 4 June and Relay the Torch rally at Hong Kong's Victoria Park sing it every year to commemorate the victims of the events.
The government's Internet censorship of such topics was especially strict during the 20th anniversary of the Tiananmen Square protests, which occurred in 2009. According to a Reporters Without Borders' article, searching photos related to the protest such as "4 June" on Baidu, the most popular Chinese search engine, would return blank results and a message stating that the "search does not comply with laws, regulations, and policies". Moreover, a large number of netizens from China claimed that they were unable to access numerous Western web services such as Twitter, Hotmail, and Flickr in the days leading up to and during the anniversary. Netizens in China claimed that many Chinese web services were temporarily blocked days before and during the anniversary. Netizens also reported that microblogging services including Fanfou and Xiaonei (now known as Renren) were down with similar messages that claim that their services were "under maintenance" for a few days around the anniversary date. In 2019, censors once again doubled down during the 30th anniversary of the protests, and by this time had been "largely automated".
Reactions of netizens in China
In 2009, the Guardian wrote that Chinese netizens responded with subtle protests against the government's temporary blockages of large web services. For instance, Chinese websites made subtle grievances against the state's censorship by sarcastically calling 4 June the 中国互联网维护日 or "Chinese Internet Maintenance Day". The owner of the blog Wuqing.org stated, "I, too, am under maintenance". The dictionary website Wordku.com voluntarily took its site down, claiming that this was because of the "Chinese Internet Maintenance Day". In 2013, Chinese netizens used subtle and sarcastic Internet memes to criticize the government and to bypass censorship by creating and posting humorous pictures or drawings resembling the Tank Man photo on Weibo. One of these pictures, for example, showed Florentijn Hofman's rubber duck sculptures replacing tanks in the Tank Man photo. On Twitter, Hu Jia, a Beijing-based AIDS activist, asked netizens in mainland China to wear black T-shirts on 4 June to oppose censorship and to commemorate the date. Chinese web services such as Weibo eventually censored searches of both "black shirt" and "Big Yellow Duck" in 2013.
As a result, the government further promoted anti-western sentiment. In 2014, Chinese Communist Party general secretary Xi Jinping praised blogger Zhou Xiaoping for his "positive energy" after the latter argued in an essay titled "Nine Knockout Blows in America's Cold War Against China," that American culture was "eroding the moral foundation and self-confidence of the Chinese people."
Debates about the significance of Internet resistance to censorship
According to Chinese studies expert Johan Lagerkvist, scholars Pierre Bourdieu and Michel de Certeau argue that this culture of satire is a weapon of resistance against authority. This is because criticism against authority often results in satirical parodies that "presuppose and confirm emancipation" of the supposedly oppressed people. Academic writer Linda Hutcheon argues that some people, however, may view satirical language that is used to criticize the government as "complicity", which can "reinforce rather than subvert conservative attitudes". Chinese experts Perry Link and Xiao Qiang, however, oppose this argument. They claim that when sarcastic terms develop into common vocabulary of netizens, these terms lose their sarcastic characteristic. They then become normal terms that carry significant political meanings that oppose the government. Xiao believes that the netizens' freedom to spread information on the Internet has forced the government to listen to popular demands of netizens. For example, the Ministry of Industry and Information Technology's plan to pre-install the mandatory censoring software Green Dam Youth Escort on computers failed after popular online opposition against it in 2009, the year of the 20th anniversary of the protest.
Lagerkvist states that the Chinese government, however, does not see subtle criticisms on the Internet as real threats that carry significant political meanings and topple the government. He argues that real threats occur only when "laugh mobs" become "organised smart mobs" that directly challenge the government's power. At a TED conference, Michael Anti gives a similar reason for the government's lack of enforcement against these Internet memes. Anti suggests that the government sometimes allows limited windows of freedom of speech such as Internet memes. Anti explains that this is to guide and generate public opinions that favor the government and to criticize enemies of the party officials.
Internet censorship of the protest in 2013
The Chinese government has become more efficient in its Internet regulations since the 20th anniversary of the Tiananmen protest. On 3 June 2013, Sina Weibo quietly suspended usage of the candle icon from the comment input tool, which netizens used to mourn the dead on forums. Some searches related to the protest on Chinese website services no longer come up with blank results, but with results that the government had "carefully selected." These subtle methods of government censorship may cause netizens to believe that their searched materials were not censored. The government, however, is inconsistent in its enforcement of censorship laws. Netizens reported that searches of some censored terms on Chinese web services still resulted in blank pages with a message that says "relevant laws, regulations, and policies" prevent the display of results related to the searches.
Usage of Internet kill switch
China completely shut down Internet service in the autonomous region of Xinjiang from July 2009 to May 2010 for up to 312 days after the July 2009 Ürümqi riots.
COVID-19 pandemic
Reporters Without Borders has claimed that China's policies prevented an earlier warning about the COVID-19 pandemic. At least one doctor suspected as early as 25 December 2019 that an outbreak was occurring, but he may have been deterred from informing the media by the harsh punishment of whistleblowers.
During the pandemic, academic research concerning the origins of the virus was censored. An investigation by ProPublica and The New York Times found that the Cyberspace Administration of China placed censorship restrictions on Chinese media outlets and social media to avoid mentions of the COVID-19 outbreak, mentions of Li Wenliang, and "activated legions of fake online commenters to flood social sites with distracting chatter".
Winnie the Pooh
Since 2013, the Disney character Winnie the Pooh has been systematically removed from the Chinese Internet following the spread of an Internet meme in which photographs of Xi and other individuals were compared to the bear and other characters from the works of A. A. Milne as re-imagined by Disney. The first heavily censored viral meme can be traced back to Xi's official visit to the United States in 2013, during which he was photographed by a Reuters photographer walking with then-US President Barack Obama in Sunnylands, California. A blog post juxtaposing the photograph with the cartoon depiction went viral, but Chinese censors rapidly deleted it. A year later came a meme featuring Xi and Shinzo Abe. When Xi Jinping inspected troops through his limousine's sunroof, a popular meme was created with Winnie the Pooh in a toy car; the widely circulated image became the most censored picture of 2015. In addition to not wanting any kind of online euphemism for the Communist Party's general secretary, the Chinese government considers that the caricature undermines the authority of the presidential office as well as the president himself, and all works comparing Xi with Winnie the Pooh are purportedly banned in China.
Other examples
In February 2018 Xi Jinping appeared to set in motion a process to scrap term limits, allowing himself to become ruler for life. To suppress criticism, censors banned phrases such as "Disagree" (不同意), "Shameless" (不要脸), "Lifelong" (终身), "Animal Farm", and at one point briefly censored the letter 'N'. Li Datong, a former state newspaper editor, wrote a critical letter that was censored; some social media users evaded the censorship by posting an upside-down screenshot of the letter.
On 13 March 2018, China's CCTV incidentally showed Yicai's Liang Xiangyi apparently rolling her eyes in disgust at a long-winded and canned media question during the widely watched National People's Congress. In the aftermath, Liang's name became the most-censored search term on Weibo. The government also blocked the search query "journalist in blue" and attempted to censor popular memes inspired by the eye-roll.
On 21 June 2018, British-born comedian John Oliver criticized China's paramount leader Xi Jinping on his U.S. show Last Week Tonight over Xi Jinping's apparent descent into authoritarianism (including his sidelining of dissent, mistreatment of the Uyghur peoples and clampdowns on Chinese Internet censorship), as well as the Belt and Road Initiative. As a result, the English language name of John Oliver (although not the Chinese version) was censored on Sina Weibo and other sites on the Chinese Internet.
The American television show South Park was banned in China in 2019, and any mention of it was removed from almost all sites on the Chinese Internet, after the season 23 episode "Band in China" criticized China's government and censorship. Series creators Trey Parker and Matt Stone later issued a mock apology.
International influence
Foreign content providers such as Yahoo!, AOL, and Skype must abide by Chinese government wishes, including having internal content monitors, to be able to operate within mainland China. Also, per mainland Chinese laws, Microsoft began to censor the content of its blog service Windows Live Spaces, arguing that continuing to provide Internet services is more beneficial to the Chinese. Chinese journalist Michael Anti's blog on Windows Live Spaces was censored by Microsoft. In an April 2006 e-mail panel discussion Rebecca MacKinnon, who reported from China for nine years as a Beijing bureau chief for CNN, said: "... many bloggers said he [Anti] was a necessary sacrifice so that the majority of Chinese can continue to have an online space to express themselves as they choose. So the point is, compromises are being made at every level of society because nobody expects political freedom anyway."
The Chinese version of Myspace, launched in April 2007, has many censorship-related differences from other international versions of the service. Discussion forums on topics such as religion and politics are absent and a filtering system that prevents the posting of content about politically sensitive topics has been added. Users are also given the ability to report the "misconduct" of other users for offenses including "endangering national security, leaking state secrets, subverting the government, undermining national unity, spreading rumors or disturbing the social order."
Some media have suggested that China's Internet censorship of foreign websites may also be a means of forcing mainland Chinese users to rely on China's e-commerce industry, thus self-insulating their economy from the dominance of international corporations. On 7 November 2005 an alliance of investors and researchers representing 26 companies in the U.S., Europe and Australia with over US$21 billion in joint assets announced that they were urging businesses to protect freedom of expression and pledged to monitor technology companies that do business in countries violating human rights, such as China. On 21 December 2005 the UN, OSCE and OAS special mandates on freedom of expression called on Internet corporations to "work together ... to resist official attempts to control or restrict the use of the Internet." Google finally responded, when attacked by hackers rumored to have been hired by the Chinese government, by threatening to pull out of China.
In 2006, Reporters Without Borders wrote that it suspects that regimes such as Cuba, Zimbabwe, and Belarus have obtained surveillance technology from China.
Evasion
Using a VPN service
Internet censorship in China is circumvented by determined parties by using proxy servers outside the firewall. Users may circumvent all of the censorship and monitoring of the Great Firewall if they have a working VPN or SSH connection to a computer outside mainland China. However, disruptions of VPN services have been reported, and free or popular services in particular are increasingly being blocked. To avoid deep packet inspection and continue providing services in China, some VPN providers have implemented server obfuscation.
Changing IP addresses
Blogs hosted on services such as Blogger and Wordpress.com are frequently blocked. In response, some China-focused services explicitly offer to change a blog's IP address within 30 minutes if it is blocked by the authorities.
Using a mirror website
In 2002, Chinese citizens used the Google mirror elgooG after China blocked Google.
Modifying the network stack
In July 2006, researchers at Cambridge University claimed to have defeated the firewall by ignoring the TCP reset packets.
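The idea is commonly reproduced on Linux by dropping inbound TCP packets that carry the RST flag, so that injected resets never reach the TCP stack. The sketch below shells out to the standard iptables tool; it is not the researchers' own tooling, it requires root privileges, and dropping all RSTs also breaks legitimate connection teardown, so it is illustrative only.

```python
# Minimal sketch: append an iptables rule that silently drops incoming TCP
# packets with the RST flag set, so injected resets are ignored.
import subprocess

RULE = ["iptables", "-A", "INPUT", "-p", "tcp",
        "--tcp-flags", "RST", "RST", "-j", "DROP"]

def drop_inbound_resets() -> None:
    """Install the RST-dropping rule (requires root; remove with iptables -D)."""
    subprocess.run(RULE, check=True)

if __name__ == "__main__":
    drop_inbound_resets()
    print("Inbound TCP RST packets are now being dropped.")
```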
Using Tor and DPI-resistant tools
Although many users use VPNs to circumvent the Great Firewall of China, many Internet connections are now subject to deep packet inspection, in which data packets are looked at in detail. Many VPNs have been blocked using this method. Blogger Grey One suggests users trying to disguise VPN usage forward their VPN traffic through port 443 because this port is also heavily used by web browsers for HTTPS connections. However, Grey points out this method is futile against advanced inspection. Obfsproxy and other pluggable transports do allow users to evade deep-packet inspection.
The Tor anonymity network was and is subject to partial blocking by China's Great Firewall. The Tor website is blocked when accessed over HTTP but it is reachable over HTTPS so it is possible for users to download the Tor Browser Bundle. The Tor project also maintains a list of website mirrors in case the main Tor website is blocked.
The Tor network maintains a public list of approximately 3000 entry relays; almost all of them are blocked. In addition to the public relays, Tor maintains bridges which are non-public relays. Their purpose is to help censored users reach the Tor network. The Great Firewall scrapes nearly all the bridge IPs distributed through bridges.torproject.org and email. According to Winter's research paper published in April 2012, this blocking technique can be circumvented by using packet fragmentation or the Tor obfsproxy bundle in combination with private obfsproxy bridges. Tor Obfs4 bridges still work in China as long as the IPs are discovered through social networks or self-published bridges.
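A client that wants to reach Tor through a private obfs4 bridge typically supplies the bridge line and the path to the obfs4 proxy when starting Tor. The sketch below uses the third-party stem library to do this from Python; the bridge line shown is a placeholder (a real one must be obtained out of band), and the obfs4proxy path is an assumption about the local installation.

```python
# Minimal sketch: launch a Tor client that reaches the network via a private
# obfs4 bridge. Requires the "stem" package, a tor binary, and obfs4proxy.
import stem.process

BRIDGE_LINE = "obfs4 192.0.2.1:443 FINGERPRINT cert=... iat-mode=0"  # placeholder

tor_process = stem.process.launch_tor_with_config(
    config={
        "SocksPort": "9050",
        "UseBridges": "1",
        "ClientTransportPlugin": "obfs4 exec /usr/bin/obfs4proxy",  # assumed path
        "Bridge": BRIDGE_LINE,
    },
    # Print bootstrap progress so a blocked handshake is visible.
    init_msg_handler=lambda line: print(line) if "Bootstrapped" in line else None,
)

print("Tor is listening on socks5://127.0.0.1:9050")
tor_process.kill()  # shut the client down again in this throwaway example
```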
Tor now primarily functions in China using meek, a pluggable transport that routes traffic through front-end proxies hosted on content delivery networks (CDNs), such as Microsoft's Azure and Cloudflare, to obscure the information travelling to and from the source and destination.
Unintended methods
It was common in the past to use Google's cache feature to view blocked websites. However, this feature of Google appears to be under some level of blocking, as access is now erratic and does not work for blocked websites. Currently, the block is mostly circumvented by using proxy servers outside the firewall, which is not difficult for those determined to do so.
The mobile Opera Mini browser uses a proxy-based approach employing encryption and compression to speed up downloads. This has the side effect of allowing it to circumvent several approaches to Internet censorship. In 2009 this led the government of China to ban all but a special Chinese version of the browser.
Using an analogy to bypass keyword filters
As the Great Firewall of China gets more sophisticated, users are getting increasingly creative in the ways they elude the censorship, such as by using analogies to discuss topics. Users are also becoming increasingly open in their mockery of the censors, actively using homophones to avoid keyword filters, as the sketch below illustrates. Deleted sites are said to have "been harmonized", a reference to CCP general secretary Hu Jintao's framing of Internet censorship within the larger goal of creating a "Socialist Harmonious Society". Censors, for example, are referred to as "river crabs", because in Chinese that phrase is a near-homophone of "harmony".
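The sketch below shows why such homophone substitution defeats naive exact-match filtering: a filter that only knows the literal term 和谐 ("harmony") passes a post that writes 河蟹 ("river crab") instead. The word lists are illustrative, not taken from any real filter.

```python
# Minimal sketch: an exact-match filter misses a near-homophone substitution.
BLOCKED_WORDS = {"和谐"}           # the filter only knows the literal term
HOMOPHONE_MAP = {"和谐": "河蟹"}   # user-side substitution (illustrative)

def is_blocked(post: str) -> bool:
    return any(word in post for word in BLOCKED_WORDS)

def evade(post: str) -> str:
    for word, substitute in HOMOPHONE_MAP.items():
        post = post.replace(word, substitute)
    return post

if __name__ == "__main__":
    original = "我的帖子被和谐了"        # "my post was harmonized"
    disguised = evade(original)          # "my post was river-crabbed"
    print(original, "->", "blocked" if is_blocked(original) else "allowed")
    print(disguised, "->", "blocked" if is_blocked(disguised) else "allowed")
```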
Using steganography
According to The Guardian editor Charles Arthur, Internet users in China have found more technical ways to get around the Great Firewall of China, including using steganography, a practice of "embedding useful data in what looks like something irrelevant. The text of a document can be broken into its constituent bytes, which are added to the pixels of an innocent picture. The effect is barely visible on the picture, but the recipient can extract it with the right software".
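A minimal sketch of the least-significant-bit technique Arthur describes follows. It assumes the third-party Pillow imaging library and an existing RGB image named cover.png (both assumptions, not details from the source): the message bytes, preceded by a four-byte length, are written into the lowest bit of successive pixel channels, which alters the picture imperceptibly while remaining recoverable.

```python
# Minimal sketch of LSB image steganography using Pillow.
from PIL import Image

def embed(cover_path: str, out_path: str, message: str) -> None:
    data = message.encode("utf-8")
    bits = [(byte >> i) & 1
            for byte in len(data).to_bytes(4, "big") + data
            for i in range(7, -1, -1)]
    img = Image.open(cover_path).convert("RGB")
    flat = [channel for pixel in img.getdata() for channel in pixel]
    if len(bits) > len(flat):
        raise ValueError("cover image too small for this message")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit          # overwrite the lowest bit
    pixels = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    stego = Image.new("RGB", img.size)
    stego.putdata(pixels)
    stego.save(out_path)                        # PNG keeps pixels lossless

def extract(stego_path: str) -> str:
    bits = [c & 1 for p in Image.open(stego_path).convert("RGB").getdata() for c in p]
    length = int("".join(map(str, bits[:32])), 2)
    payload = bits[32:32 + 8 * length]
    data = bytes(int("".join(map(str, payload[i:i + 8])), 2)
                 for i in range(0, len(payload), 8))
    return data.decode("utf-8")

if __name__ == "__main__":
    embed("cover.png", "stego.png", "hello")    # file names are assumptions
    print(extract("stego.png"))                 # -> "hello"
```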
Voices
Rupert Murdoch famously proclaimed that advances in communications technology posed an "unambiguous threat to totalitarian regimes everywhere" and Ai Weiwei argued that the Chinese "leaders must understand it's not possible for them to control the Internet unless they shut it off".
However, Nathan Freitas, a fellow at the Berkman Center for Internet and Society at Harvard and technical adviser to the Tibet Action Institute, says, "There's a growing sense within China that widely used VPN services that were once considered untouchable are now being touched." In June 2015 Jaime Blasco, a security researcher at AlienVault in Silicon Valley, reported that hackers, possibly with the assistance of the Chinese government, had found ways to circumvent the most popular privacy tools on the Internet: virtual private networks (VPNs) and Tor. This was done with the aid of a particularly serious vulnerability involving JSONP that 15 web services in China had never patched. As long as users are logged into one of China's top web services, such as Baidu, Taobao, QQ, Sina, Sohu, or Ctrip, the hackers can identify them and access their personal information, even if they are using Tor or a VPN. The vulnerability is not new; it was published in a Chinese security and web forum around 2013.
Specific examples of evasion as Internet activism
The rapid increase of Internet access in China has also created new opportunities for Internet activism. For example, in terms of journalism, Marina Svensson's article "Media and Civil Society in China: Community building and networking among investigative journalists and beyond" illustrates that although Chinese journalists are not able to create their own private companies, they use informal connections online and offline to build communities that may allow them to work around state repression. Specifically, with the development of microblogging, the rise of such newly formed communities points to the possibility of "...more open expressions of solidarity and ironic resistance". However, one shortcoming of Internet activism is digital inequality. In 2016, the number of Internet users reached 731 million, an Internet penetration rate of about 53%. According to the Information and Communications Technologies Development Index (IDI), China exhibits high inequality in terms of regional and wealth differences.
Economic impact
According to the BBC, local Chinese businesses such as Baidu, Tencent and Alibaba, some of the world's largest Internet enterprises, benefited from the way China has blocked international rivals from the market, encouraging domestic competition.
According to the Financial Times, China's crackdown on VPN portals has brought business to state-approved telecom companies. Reuters reported that China's state newspaper has expanded its online censoring business; the company's net income rose 140 percent in 2018, and its Shanghai-listed stock price jumped 166 percent that year.
See also
List of websites blocked in mainland China
Censorship in China
Digital divide in China
Human rights in China
Media of China
Censorship of GitHub in China
References
External links
Keywords and URLs censored on the Chinese Internet
Cyberpolice.cn (网络违法犯罪举报网站) – Ministry of Public Security P.R. China Information & Network Security
A website that lists and detects websites blocked by the GFW.
A website to test whether a resource is blocked by the GFW.
Internet Enemies: China, Reporters Without Borders
Freedom on the Net 2011: China-Freedom House: Freedom on the Net Report