7043429 | https://en.wikipedia.org/wiki/Brian%20Cushing | Brian Cushing | Brian Patrick Cushing (born January 24, 1987) is a retired American football linebacker. He played college football at USC, and was drafted by the Houston Texans in the first round of the 2009 NFL Draft. He played for the Texans from 2009 to 2017 and became the team's all-time leading tackler.
Early years
Cushing trained throughout his childhood in Park Ridge, New Jersey. He attended Bergen Catholic High School in Oradell, New Jersey and went on to lead the Crusaders to the 2004 Group IV State Championship as a linebacker, tight end, and running back against Don Bosco Preparatory High School, providing huge plays in the 13–10 win. He was named "2004 North Jersey Player of the Year" by The Record, and played in the 2005 U.S. Army All-American Bowl alongside future USC teammates Patrick Turner, Rey Maualuga, and Mark Sanchez.
Cushing was recruited by top colleges, including Notre Dame, Georgia, and Penn State before eventually committing to the University of Southern California.
College career
Cushing attended the University of Southern California and played for the USC Trojans football team from 2005 to 2008 under head coach Pete Carroll. Cushing shared the No. 10 jersey with Trojans starting quarterback John David Booty from 2005 to 2007.
Cushing was on Phil Steele's 2007 pre-season All-America team and on the 2007 Bednarik Award and Lott Trophy watch lists.
Cushing returned a failed onside kick attempted by Notre Dame at the Los Angeles Memorial Coliseum for a late fourth-quarter touchdown on November 25, 2006.
On January 1, 2007, Cushing was named the Defensive MVP of the Rose Bowl Game, after recording 2½ sacks and forcing a third-quarter fumble that changed the momentum of the game.
Cushing was named an AP Second-Team All-American as a senior despite modest statistics (73 tackles, 10½ tackles for loss, and 3 sacks). NFL scouts admired Cushing not for his statistics but for his versatility: he was fast enough to play inside or outside linebacker, and big and strong enough to play defensive end.
Cushing was one of twelve senior USC players, including all four senior linebackers (Cushing, Kaluka Maiava, Clay Matthews III, and Rey Maualuga), invited to the 2009 NFL Scouting Combine. He also participated in the 2009 Senior Bowl with Maualuga, Matthews, Patrick Turner, and others. Alongside fellow USC linebackers Maualuga and Matthews, he was featured on the cover of Sports Illustrated's 2009 NFL Draft Preview magazine, as all three were regarded as potential first-round selections.
Cushing and Archie Griffin of Ohio State University are the only two players in college football history to have started in the Rose Bowl in all four seasons of their college careers.
College awards and honors
2005 Scout.com Freshman All-American
2006 All-Pac-10 Second-Team
2007 All-Pac-10 Honorable Mention
2007 Rose Bowl Defensive MVP
2008 All-Pac-10 First-Team selection by the conference's head coaches and Rivals.com
2008 Rivals.com First-Team All-American
2008 SI.com First-Team All-American
2008 Associated Press First-Team All-American
Professional career
2009 season
Cushing was drafted with the 15th overall pick in the 2009 NFL Draft by the Houston Texans and had one of the best defensive seasons by a rookie in NFL history.
He scored his first career points on October 4, 2009, when he recorded a safety against the Oakland Raiders. This was the first recorded safety by a Texan since the 2002 season.
Cushing was named AFC Defensive Player of the Week for Week 6 of the 2009 NFL Season, becoming the first rookie to earn the award in the 2009 season. In Week 8, Cushing was again named AFC Defensive Player of the Week, becoming the first Texans rookie to win the award twice in one season. Cushing was also named the NFL Defensive Rookie of the Month for November 2009.
Cushing finished tied with Ray Lewis for the AFC lead with 133 tackles and added 4 sacks, 10 pass deflections, 4 interceptions, and 2 forced fumbles. He was selected to the 2010 Pro Bowl as a starter, one of only three rookies chosen, along with his former USC teammate Clay Matthews.
On January 5, 2010, Cushing was named the NFL Defensive Rookie of the Year. Cushing is the second Texan to win the award, the first being DeMeco Ryans in 2006.
2010 season
On May 7, 2010, Adam Schefter of ESPN reported that Cushing would be suspended for the first four games of the 2010 season for violating the NFL's performance-enhancing substances policy, after he tested positive in September 2009 for having above-normal levels of the human chorionic gonadotropin (hCG) hormone in his system. Cushing had previously denied rumors of having used performance-enhancing substances prior to being drafted. He then stated that he would undergo further medical tests to determine how hCG entered his system. Cushing still denies that he ever ingested or injected any performance enhancers and claimed the test result could have been from a cancerous tumor.
On May 12, 2010, in a revote of the AP Defensive Rookie of the Year award after his suspension was made public, Cushing again received the most votes, thus retaining his award. His second-team All-Pro status from his rookie season, however, was rescinded by the AP.
In the final 12 games, he totaled 76 tackles, 1½ sacks, 4 pass deflections, and 1 forced fumble.
2011 season
During training camp before the 2011 season, Cushing was told he would be moving from outside to inside linebacker in the team's new 3–4 defense. The coaching staff believed he lacked the speed to play on the outside, even though he had played outside linebacker since high school.
In 2011, the Houston Texans finished the season with a 10–6 record and made the playoffs for the first time in franchise history. Cushing was named Team MVP, and although he was not voted to the Pro Bowl, he earned AP Second-Team All-Pro honors after finishing the season with 114 tackles, 4 sacks, 5 pass deflections, 2 interceptions, and 2 forced fumbles. Texans defensive coordinator Wade Phillips called Cushing a "holy phenom," adding, "he's one of the best I've ever coached, and I've been at it a long time." Cushing said he would use missing the Pro Bowl as "motivation and a driving force" to become better. The Texans defeated the Cincinnati Bengals 31–10 in the first round but were defeated by the Baltimore Ravens 20–13 in the Divisional Round.
2012 season
During the second quarter of the Houston Texans win over the New York Jets on Monday October 8, 2012, Cushing left the game with what appeared to be a knee injury, which was later confirmed to be a torn ACL. He was placed on injured reserve, ending his 2012 season.
2013 season
Cushing signed a six-year contract extension with the Houston Texans worth $55.6 million, including $21 million guaranteed. Averaging $9.2 million per year over the six years, it was at the time the largest contract for any inside linebacker, and it made him the third-highest-paid inside linebacker in the NFL.
During the October 20, 2013, game against Kansas City, Cushing broke his fibula and tore his fibular collateral ligament after a low block from Chiefs running back Jamaal Charles in the third quarter. The Texans later announced that he would miss the remainder of the season.
2014 season
Cushing started the 2014 season well despite being brought along slowly, collecting 37 tackles in the first four games as the Texans opened with an impressive 3–1 record. He had 17 tackles in the team's loss to the New York Giants, tying the franchise record for tackles in a game.
2015 season
After being plagued by injuries in the previous two seasons, Cushing completed the 2015 season with 110 tackles. The Texans qualified for the first round of the playoffs but fell to the Kansas City Chiefs 30–0.
2016 season
During Week 1 against the Chicago Bears on September 11, 2016, Cushing suffered a partially torn MCL but he did not require surgery.
2017 season
On September 13, 2017, Cushing was suspended for ten games for again violating the NFL's policy on performance-enhancing substances.
On February 20, 2018, Cushing was released by the Texans after nine seasons as the franchise's all-time leading tackler.
NFL career statistics
Regular season
Playoffs
NFL awards and honors
2× AFC Defensive Rookie of the Month (November 2009 & December 2009)
2× AFC Defensive Player of the Week (2009 weeks 6 & 8)
2009 AFC Defensive Player of the Month (November)
2009 Pro Bowl selection (starter)
2009 SN-2nd Team All-Pro
2009 All-Rookie Selection (Sporting News, Pro Football Writers Association)
2009 NFL Defensive Rookie of the Year
2011 Team Captain
2011 Team MVP
2011 AP-2nd Team All-Pro
2011 NFL.com ranked 54th best player overall
Coaching career
Houston Texans
On January 29, 2019, Cushing was hired by the Houston Texans of the National Football League (NFL) to work with the team's strength and conditioning coaches and to assist defensive players.
Personal life
Cushing's mother, Antoinette, is of Polish descent; she was born in 1944 at a German forced labor camp.
His father, Frank, was an intelligence officer in the Vietnam War. His great-great-uncle was American Civil War veteran Alonzo Cushing, who was killed at the Battle of Gettysburg and posthumously received the Medal of Honor.
Cushing is married to his college sweetheart Megan, an athlete who had success on USC's women's soccer squad; they began dating just prior to Cushing's NFL draft. His sister-in-law is soccer player Kealia Ohai, who is married to his former Texans teammate J. J. Watt. The Cushings have three sons together.
References
External links
USC profile
1987 births
Living people
American Conference Pro Bowl players
American football linebackers
American sportspeople in doping cases
American people of Polish descent
Bergen Catholic High School alumni
Doping cases in American football
Houston Texans coaches
Houston Texans players
People from Park Ridge, New Jersey
Players of American football from New Jersey
Sportspeople from Bergen County, New Jersey
USC Trojans football players |
41839863 | https://en.wikipedia.org/wiki/INtime | INtime | The INtime Real-Time Operating System (RTOS) family is based on a 32-bit RTOS designed to run time-critical operations with cycle times as low as 50 μs. INtime RTOS runs on single-core, hyper-threaded, and multi-core x86 PC platforms from Intel and AMD. It supports two binary-compatible usage configurations: INtime for Windows, where the INtime RTOS runs alongside Microsoft Windows, and INtime Distributed RTOS, where INtime runs alone.
Like its iRMX predecessors, INtime is a real-time operating system, and like DOSRMX and iRMX for Windows, it runs concurrently with a general-purpose operating system on a single hardware platform.
History
Initial Release
INtime 1.0 was originally introduced in 1997 in conjunction with the Windows NT operating system. Since then it has been upgraded to support all subsequent protected-mode Microsoft Windows platforms, from Windows XP to Windows 10.
INtime can also be used as a stand-alone RTOS: INtime binaries run unchanged on a stand-alone node of the INtime RTOS. Unlike Windows, INtime can run on an Intel 80386 or equivalent processor; current versions of the Windows operating system generally require at least a Pentium-class processor to boot and execute.
Version 2.2
After spinning off from Radisys in 2000, development work on INtime continued at TenAsys Corporation. In 2003 TenAsys released version 2.2 of INtime.
Notable features of version 2.2 include:
Real-time Shared Libraries, or RSLs, which are the functional equivalent of the Windows Dynamically Loaded Libraries, or DLLs.
Support for the development of USB clients, and USB host control drivers for OHCI, UHCI and EHCI (USB 2.0) devices.
A new timing acquisition and display application called "INscope".
Notes
Real-time operating systems
Embedded operating systems |
44550150 | https://en.wikipedia.org/wiki/VREAM | VREAM | VREAM, Inc. was a US technology company that operated from 1991 to 1996. It was one of the first companies to develop PC-based software for authoring and viewing virtual reality (VR) environments.
Company history
The company was founded in Chicago in 1991 by former McKinsey & Company consultant Edward R. LaHood, who derived the name VREAM from the phrase "virtual reality dream." LaHood was joined by co-founders Ken Gaebler and Dan Malven in 1993.
In 1991, LaHood created VREAMScript, a scripting language for virtual reality environments that allowed for the definition of complex 3D objects, environment attributes, object attributes, and triggers for cause-and-effect relationships. The company then created a PC-based authoring tool, the VREAM Virtual Reality Development System, to build virtual reality environments and an accompanying runtime player, the VREAM Runtime System, that allowed end users to experience the virtual environments, moving through them in real-time while interacting with the rendered, virtual objects.
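As a rough illustration of the concepts described above — and emphatically not actual VREAMScript syntax, which is not reproduced here — the following hypothetical Python sketch models objects with attributes and cause-and-effect triggers:

```python
# Hypothetical illustration only: this is NOT VREAMScript, just a small Python
# model of the ideas the text describes — 3D objects with attributes, and
# triggers that fire an effect whenever a cause-condition becomes true.

class VRObject:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)   # e.g. position, color, visible

class World:
    def __init__(self):
        self.objects = {}
        self.triggers = []                   # (condition, action) pairs

    def add(self, obj):
        self.objects[obj.name] = obj

    def on(self, condition, action):
        self.triggers.append((condition, action))

    def update(self):
        # Each simulation step, evaluate every cause and run matching effects.
        for condition, action in self.triggers:
            if condition(self):
                action(self)

world = World()
world.add(VRObject("door", open=False))
world.add(VRObject("button", pressed=True))

# Cause: the button is pressed.  Effect: the door opens.
world.on(lambda w: w.objects["button"].attributes["pressed"],
         lambda w: w.objects["door"].attributes.update(open=True))

world.update()
assert world.objects["door"].attributes["open"] is True
```

The point of such a trigger system is that non-programmers can describe behavior declaratively ("when X, do Y") rather than writing rendering and event-loop code themselves.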
The VREAM Virtual Reality System, which included the VREAM Virtual Reality Development System and the VREAM Runtime System, was released in December 1992 at a price of $1,495, with support for a wide range of immersive devices, including the Power Glove.
Initial Impact
Prior to 1992, rendering real-time 3D graphics had only been possible on high-end workstations, and creating a real-time 3D graphics simulation required strong programming skills. The growing power of PCs, driven by innovations such as the Pentium chip (introduced by Intel in March 1993), made bringing virtual reality simulations to the PC a possibility, and LaHood's programming ability allowed him to be first to market with a PC-based virtual reality authoring solution that could be used by non-programmers. (LaHood was not, however, the first to bring virtual reality to the PC; that credit goes to Sense8 founders Patrice Gelband and Eric Gullichsen who, earlier in 1992, introduced WorldToolKit, a VR programmer's library that allowed developers to build "virtual world" applications that ran on desktop computers.)
Due to its accessibility to non-programmers and those who could not afford high-end workstations, VREAM's software quickly became popular in the hobbyist virtual reality community. It was used for architectural walkthroughs, manufacturing training, game development, engineering prototyping, data visualization and other simulations. The software was even used in the treatment of fear of public speaking, acrophobia and male erectile disorder.
In addition to its software offerings, the company also provided virtual reality development services. A notable services client was Burger King, which hired VREAM to build a "restaurant of the future" for its franchisee conference. VREAM also created virtual reality product demonstrations for Bombardier Recreational Products for their Sea-Doo and Ski-Doo lines.
Web-Based VR
In 1994, based on the emergence of the World Wide Web and at the suggestion of co-founder Malven, the company recast its runtime player as a plug-in for Netscape Navigator, dubbing it WIRL. VREAM would later release WIRL as an ActiveX control for Microsoft Internet Explorer.
At a time when web-based software was in its earliest stages, WIRL was quite popular, at one point ranking fourth among the most-downloaded software on the web, surpassed only by Netscape Navigator, the MPEG player NET TOOB, and the HTML editor HotDog Pro. Within its first month of availability, WIRL was downloaded over 30,000 times, giving VREAM a very efficient way to promote its software.
WIRL was also used by PC Magazine to test the capabilities of PC graphics cards. Bill Gates, CEO of Microsoft, and Craig Barrett of Intel used WIRL demonstrations in their keynote speeches to showcase the potential of more powerful PCs.
In November 1994, VRML, a standard file format for representing Web-based, real-time interactive 3D environments, was introduced. The introduction of VRML, backed by SGI and other companies, lessened the future potential of VREAM's proprietary VREAMScript language. To mitigate its risks, VREAM adopted the approach of supporting both VRML and VREAMScript in its software products.
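For context on what the competing standard looked like: a minimal scene in the later standardized VRML97 form of the format takes only a few lines — here, a single red box:

```
#VRML V2.0 utf8
# Minimal VRML97 scene: one red box at the origin.
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }
  }
  geometry Box { size 2 2 2 }
}
```

Like VREAMScript, VRML described objects and their attributes declaratively, which is precisely why the two overlapped and VREAM chose to support both.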
In May 1995, VREAM rebranded the next generation of its VR authoring software as VRCreator, and touted its support for "multi-participant VR...across the World Wide Web."
Venture Capital Funding
In November 1995, based on the company's track record and based on growing interest in the venture capital community for Web-focused software companies, VREAM received $750,000 in venture funding from PLATINUM Venture Partners. At this time, co-founder Malven left to pursue other opportunities.
Competition
VREAM's primary competitor in its earliest days was Sense8 Corp. of Sausalito, Calif., which offered a virtual reality programming toolkit for the PC.
By March 1995, however, the VR space was getting crowded, with many authoring tools on the market, including VREAM's VRCreator, Virtus VR, Virtus Walkthrough Pro, and Superscape VRT.
As virtual reality grabbed market attention, a number of new competitors emerged, including Paper Software (founded by Mike McCue), Intervista (founded by Tony Parisi) and Silicon Graphics.
Company Sale
After Netscape acquired Paper Software in February 1996 for $20 million, VREAM hired an investment banker to sell the company.
While working to sell the company, VREAM continued to work on the next generation of its VRML authoring and browsing software. After releasing a new version, the company's VRML browser, WIRL, received the PC Magazine Editor's Choice award in November 1996.
In that same month, November 1996, VREAM was acquired by PLATINUM Technology, Inc. for $10.3 million and became a business unit within PLATINUM.
As the World Wide Web's popularity grew, creating many opportunities for tech companies, interest in VR and VRML quickly faded and many VR companies struggled. The VREAM division of PLATINUM Technology acquired Intervista and the assets of SGI's VR division. PLATINUM Technology deployed VREAM's virtual reality software in some of its enterprise software products, but closed down the VREAM business unit just prior to the acquisition of PLATINUM Technology by Computer Associates in 1999. VREAM co-founders Gaebler and LaHood left PLATINUM at that time and started a new venture, BeautyJungle.com, selling cosmetics online.
The closing down of PLATINUM's VREAM Division marked a turning point for the VR industry, which had experienced a flurry of interest, activity and investment during the nineties. Interest in VR would be mostly dormant for the next fifteen years, at which point virtual reality technology company Oculus VR would be acquired by Facebook for US$2 billion in cash and Facebook stock, shocking many VR veterans of the nineties and stimulating significant interest in virtual reality from a new generation of entrepreneurs.
References
Virtual reality companies
Technology companies |
2462324 | https://en.wikipedia.org/wiki/Corrado%20B%C3%B6hm | Corrado Böhm | Corrado Böhm (17 January 1923 – 23 October 2017) was a Professor Emeritus at the University of Rome "La Sapienza" and a computer scientist known especially for his contributions to the theory of structured programming, constructive mathematics, combinatory logic, lambda calculus, and the semantics and implementation of functional programming languages.
Work
In his PhD dissertation (in mathematics, at ETH Zurich, 1951; published in 1954), Böhm described for the first time a full meta-circular compiler: a translator for a programming language written in that same language. His most influential contribution is the so-called structured program theorem, published in 1966 together with Giuseppe Jacopini. In lambda calculus, he established an important separation theorem between distinct normal forms. Together with Alessandro Berarducci, he demonstrated an isomorphism between the strictly-positive algebraic data types and the polymorphic lambda-terms, otherwise known as Böhm–Berarducci encoding.
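The flavor of Böhm–Berarducci encoding can be conveyed loosely with Church-style numerals, where a data value is represented as its own iterator. Note the real encoding lives in the *typed* polymorphic lambda calculus; the untyped Python sketch below only illustrates the idea of representing data as functions:

```python
# Church-style encoding of natural numbers as functions.  Böhm–Berarducci
# encoding generalizes this to arbitrary strictly-positive data types in
# System F; Python is untyped, so this is just the untyped shadow of the idea.

zero = lambda s: lambda z: z            # the constructor 0
def succ(n):                            # the constructor n + 1
    return lambda s: lambda z: s(n(s)(z))

def to_int(n):
    """Interpret an encoded numeral by supplying a concrete successor/zero."""
    return n(lambda x: x + 1)(0)

def add(m, n):
    """Addition: iterate succ m times, starting from n."""
    return m(succ)(n)

three = succ(succ(succ(zero)))
assert to_int(three) == 3
assert to_int(add(three, succ(zero))) == 4
```

The key property, which the typed encoding makes precise, is that the data type and its fold (structural recursion principle) become one and the same term.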
A special issue of Theoretical Computer Science was dedicated to him in 1993, on his 70th birthday. He is the recipient of the 2001 EATCS Award for a distinguished career in theoretical computer science.
Selected publications
C. Böhm, "Calculatrices digitales. Du déchiffrage des formules mathématiques par la machine même dans la conception du programme", Annali di Mat. pura e applicata, serie IV, tomo XXXVII, 1–51, 1954. (PDF at ETH Zürich; English translation by Peter Sestoft, 2016.)
C. Böhm, "On a family of Turing machines and the related programming language", ICC Bull., 3, 185–194, July 1964.
Introduced P′′, the first imperative language without GOTO to be proved Turing-complete.
C. Böhm, G. Jacopini, "Flow diagrams, Turing Machines and Languages with only Two Formation Rules", Comm. of the ACM, 9(5): 366–371, 1966.
C. Böhm, "Alcune proprietà delle forme β-η-normali nel λ-K-calcolo", Pubbl. INAC, n. 696, Roma, 1968.
C. Böhm, A. Berarducci, "Automatic Synthesis of typed Lambda-programs on Term Algebras", Theoretical Computer Science, 39: 135–154, 1985.
C. Böhm, "Functional Programming and Combinatory algebras", MFCS, Carlsbad, Czechoslovakia, eds M.P. Chytil, L. Janiga and V. Koubek, LNCS 324, 14–26, 1988.
See also
P′′, a minimal computer programming language
Structured program theorem
List of pioneers in computer science
Böhm tree
References
Vitae (University of Rome)
External links
"A Collection of Contributions in Honour of Corrado Böhm on the Occasion of his 70th Birthday", Theoretical Computer Science, Volume 121, Numbers 1&2, 1993.
Corrado Böhm's personal page.
1923 births
2017 deaths
Italian computer scientists
Sapienza University of Rome faculty
Italian people of German descent
École Polytechnique Fédérale de Lausanne alumni
20th-century Italian scientists
21st-century Italian scientists |
27850970 | https://en.wikipedia.org/wiki/Touch%20Book | Touch Book | The Touch Book is a portable computing device that functions as both a netbook and a tablet computer. Designed by Always Innovating, a company based in Menlo Park, California, it was launched at the DEMO conference in March 2009. Its designers stated at launch that it was the first netbook featuring a detachable keyboard and a long battery life (more than 10 hours). It is based on the ARM TI OMAP3530 processor (taking advantage of the BeagleBoard and existing open-source software) and features a touchscreen.
The first units were shipped to customers in August 2009. There were some expected software issues for early adopters, which were progressively addressed, as well as some hardware issues, which caused discontent in the community.
After much speculation on the community forum, a revised v.2 Touch Book and new Smart Book product were announced. The Smart Book is based on the BeagleBoard-xM design.
Overview
The Touch Book is a netbook and a touch tablet device. It features a detachable keyboard, a removable back cover to access the electronics of the device, and several Linux distributions shipped by default and offered via a multiboot system.
The default operating system launched is a custom Linux OS based on Ångström, being custom themed to fit the small form factor. Since 2010, the Touch Book comes with a multi-boot graphical interface, allowing users to run also Ubuntu and Android. Users can install other OSes like Gentoo and RISC OS.
Touch Book's major intended uses are media viewing and web browsing, although more power-hungry applications such as OpenOffice.org are available on the device. The Touch Book ships standard graphics libraries such as OpenGL ES and SDL.
The Always Innovating team claims the device follows an open hardware philosophy, so that anyone can access the hardware design from the company's website, modify it, and redistribute it. Although the business model of open hardware is still maturing, it appears to be profitable for the company.
In addition to this open hardware approach, the Touch Book fully relies on open source software. A Git repository of the entire OS is currently available, allowing for download of the latest kernel source as well as the different root file systems. A community of contributors has emerged and is interacting with the Always Innovating developers.
Technical specifications
ARM 600 MHz (overclocked to 720 MHz) Cortex-A8 superscalar microprocessor core: Texas Instruments OMAP3530 System-on-Chip
512 MB DDR-333 SDRAM
256 MB NAND FLASH memory
DSP Core video processor at 430 MHz
OpenGL ES 2.0 compliant
Freescale-based 3D accelerometer
Integrated Ralink-based Wi-Fi 802.11b/g/n
Integrated Bluetooth 2.0
1024×600 resolution touchscreen LCD, 8.9" widescreen, 16.7 million colors
SDHC card slot (currently supporting up to 32 GB of storage)
Headphone output, microphone input
Standard QWERTY keyboard and touchpad
USB 2.0 OTG port (480 Mbit/s)
6× USB 2.0 host ports (480 Mbit/s)
JTAG debugging interface
Runs the Linux kernel (2.6.x)
Dual 12,000 mAh + 6,000 mAh rechargeable lithium polymer battery
Estimated 10+ hour battery life for video and general applications
Detachable magnet system for the tablet
Dimensions: 248 mm × 158 mm × 19 mm for the tablet, 248 mm × 180 mm × 33.5 mm for the full netbook
Mass: 675 g for the Tablet; 1,418 g for the full netbook
Similar products
Other single-board computers using OMAP3500-series processors include OSWALD, BeagleBoard, IGEPv2, Pandora, and the Gumstix Overo series. Since the device's launch in March 2009, several similar devices have appeared, such as the Freescale tablet reference design and the Lenovo IdeaPad U1.
References
External links
Acorn Computers
Embedded Linux
Texas Instruments hardware
Linux-based devices
2-in-1 PCs |
57183 | https://en.wikipedia.org/wiki/Middleware%20%28distributed%20applications%29 | Middleware (distributed applications) | Middleware in the context of distributed applications is software that provides services beyond those provided by the operating system to enable the various components of a distributed system to communicate and manage data. Middleware supports and simplifies complex distributed applications. It includes web servers, application servers, messaging and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.
Middleware often enables interoperability between applications that run on different operating systems, by supplying services so the application can exchange data in a standards-based way. Middleware sits "in the middle" between application software that may be working on different operating systems. It is similar to the middle layer of a three-tier single system architecture, except that it is stretched across multiple systems or applications. Examples include EAI software, telecommunications software, transaction monitors, and messaging-and-queueing software.
The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included virtually in every operating system.
Definitions
Software that provides a link between separate software applications. Middleware is sometimes called plumbing because it connects two applications and passes data between them. Middleware allows data contained in one database to be accessed through another. This definition would fit enterprise application integration and data integration software.
ObjectWeb defines middleware as: "The software layer that lies between the operating system and applications on each side of a distributed computing system in a network."
Origins
Middleware is a relatively new addition to the computing landscape. It gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968. It also facilitated distributed processing: the connection of multiple applications to create a larger application, usually over a network.
Use
Middleware services provide a more functional set of application programming interfaces to allow an application to:
Locate transparently across the network, thus providing interaction with another service or application
Filter data, for example making them usable or publishable through anonymization for privacy protection
Be independent from network services
Be reliable and always available
Add complementary attributes like semantics
when compared to the operating system and network services.
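The first item above, location transparency, can be sketched as follows. This is a hypothetical in-process stub (the `Middleware`, `register`, and `call` names are illustrative, not any real product's API): the application asks for a service by name and never sees hosts, ports, or transports.

```python
# Minimal sketch of "locating transparently across the network": callers name
# a service; the middleware resolves where and how it actually runs.

class Middleware:
    def __init__(self):
        self._registry = {}          # service name -> callable endpoint

    def register(self, name, endpoint):
        self._registry[name] = endpoint

    def call(self, name, *args):
        # A real system would resolve a network address here, marshal the
        # arguments, and handle retries/failover; this stub stays in-process.
        endpoint = self._registry[name]
        return endpoint(*args)

mw = Middleware()
mw.register("payroll.lookup", lambda emp_id: {"id": emp_id, "salary": 50000})

# The caller is independent of where "payroll.lookup" actually runs.
record = mw.call("payroll.lookup", 42)
assert record["id"] == 42
```

Swapping the in-process endpoint for a remote one changes nothing on the caller's side, which is exactly the decoupling the list describes.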
Middleware offers some unique technological advantages for business and industry. For example, traditional database systems are usually deployed in closed environments where users access the system only via a restricted network or intranet (e.g., an enterprise’s internal network). With the phenomenal growth of the World Wide Web, users can access virtually any database for which they have proper access rights from anywhere in the world. Middleware addresses the problem of varying levels of interoperability among different database structures. Middleware facilitates transparent access to legacy database management systems (DBMSs) or applications via a web server without regard to database-specific characteristics.
Businesses frequently use middleware applications to link information from departmental databases, such as payroll, sales, and accounting, or databases housed in multiple geographic locations. In the highly competitive healthcare community, laboratories make extensive use of middleware applications for data mining, laboratory information system (LIS) backup, and to combine systems during hospital mergers. Middleware helps bridge the gap between separate LISs in a newly formed healthcare network following a hospital buyout.
Middleware can help software developers avoid having to write application programming interfaces (API) for every control program, by serving as an independent programming interface for their applications.
For Future Internet network operation through traffic monitoring in multi-domain scenarios, mediator tools (middleware) are a powerful aid, since they allow operators, researchers, and service providers to supervise quality of service and analyse failures in telecommunication services.
Finally, e-commerce uses middleware to assist in handling rapid and secure transactions over many different types of computer environments. In short, middleware has become a critical element across a broad range of industries, thanks to its ability to bring together resources across dissimilar networks or computing platforms.
In 2004 members of the European Broadcasting Union (EBU) carried out a study of Middleware with respect to system integration in broadcast environments. This involved system design engineering experts from 10 major European broadcasters working over a 12-month period to understand the effect of predominantly software-based products to media production and broadcasting system design techniques. The resulting reports Tech 3300 and Tech 3300s were published and are freely available from the EBU web site.
Types
Message-oriented middleware
Message-oriented middleware (MOM) is middleware where transactions or event notifications are delivered between disparate systems or components by way of messages, often via an enterprise messaging system. With MOM, messages sent to the client are collected and stored until they are acted upon, while the client continues with other processing.
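A minimal sketch of this store-and-forward behaviour, using a hypothetical in-process broker built on Python's standard `queue` module (the names are illustrative, not any particular product's API):

```python
import queue

# A hypothetical message broker: producers enqueue messages and continue
# with other processing; consumers drain the stored messages later.
class MessageBroker:
    def __init__(self):
        self._queues = {}

    def publish(self, topic, message):
        # Store the message until a consumer acts on it (store-and-forward).
        self._queues.setdefault(topic, queue.Queue()).put(message)

    def consume(self, topic):
        # Drain all messages currently pending for a topic.
        q = self._queues.get(topic)
        messages = []
        while q is not None and not q.empty():
            messages.append(q.get())
        return messages

broker = MessageBroker()
broker.publish("orders", {"id": 1, "event": "created"})
broker.publish("orders", {"id": 1, "event": "paid"})
# The producer has already moved on; the consumer picks up both events later.
pending = broker.consume("orders")
print(len(pending))  # 2
```

Real MOM products add persistence, delivery guarantees and network transport on top of this basic decoupling of sender and receiver.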
Enterprise messaging
An enterprise messaging system is a type of middleware that facilitates message passing between disparate systems or components in standard formats, often using XML, SOAP or web services. As part of an enterprise messaging system, message broker software may queue, duplicate, translate and deliver messages to disparate systems or components in a messaging system.
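As an illustration only, a broker-mediated message between two departmental systems might look like the following (a hypothetical schema; real enterprise messaging systems typically wrap such payloads in SOAP envelopes or other standard formats):

```xml
<message>
  <header>
    <source>payroll</source>
    <destination>accounting</destination>
  </header>
  <body>
    <employee id="42">
      <salaryChange amount="1000" currency="EUR"/>
    </employee>
  </body>
</message>
```

The broker can inspect the header to route, translate or duplicate the message without either endpoint knowing about the other.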
Enterprise service bus
Enterprise service bus (ESB) is defined by the Burton Group as "some type of integration middleware product that supports both message-oriented middleware and Web services".
Intelligent middleware
Intelligent Middleware (IMW) provides real-time intelligence and event management through intelligent agents. The IMW manages the real-time processing of high volume sensor signals and turns these signals into intelligent and actionable business information. The actionable information is then delivered in end-user power dashboards to individual users or is pushed to systems within or outside the enterprise. It is able to support various heterogeneous types of hardware and software and provides an API for interfacing with external systems. It should have a highly scalable, distributed architecture which embeds intelligence throughout the network to transform raw data systematically into actionable and relevant knowledge. It can also be packaged with tools to view and manage operations and build advanced network applications most effectively.
Content-centric middleware
Content-centric middleware offers a simple provider-consumer abstraction through which applications can issue requests for uniquely identified content, without worrying about where or how it is obtained. Juno is one example, which allows applications to generate content requests associated with high-level delivery requirements. The middleware then adapts the underlying delivery to access the content from sources that are best suited to matching the requirements. This is therefore similar to publish/subscribe middleware, as well as the content-centric networking paradigm.
Remote procedure call
Remote procedure call middleware enables a client to use services running on remote systems. The process can be synchronous or asynchronous.
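The two calling styles can be sketched as follows; the "remote" function here is a local stub standing in for a marshalled network call, and the asynchronous variant returns a future so the client can keep working while the result is pending:

```python
from concurrent.futures import ThreadPoolExecutor

# A hypothetical "remote" service; real RPC middleware would marshal the
# call over the network, but the calling pattern is the same.
def remote_add(a, b):
    return a + b

# Synchronous RPC: the client blocks until the result arrives.
result = remote_add(2, 3)
print(result)

# Asynchronous RPC: the client submits the call and collects the
# result later via a future.
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(remote_add, 2, 3)
    # ... the client can do other processing here ...
    print(future.result())
```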
Object request broker
With object request broker middleware, it is possible for applications to send objects and request services in an object-oriented system.
SQL-oriented data access
SQL-oriented Data Access is middleware between applications and database servers.
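As a sketch of the idea, the standard-library `sqlite3` module is used below as a stand-in for such a data-access layer (in the spirit of ODBC/JDBC): the application issues portable SQL, and the middleware/driver handles the database-specific wire protocol:

```python
import sqlite3

# sqlite3 here represents a generic SQL data-access layer; the
# application never speaks a vendor-specific protocol directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (name TEXT, salary INTEGER)")
conn.execute("INSERT INTO payroll VALUES (?, ?)", ("Alice", 50000))
conn.execute("INSERT INTO payroll VALUES (?, ?)", ("Bob", 60000))

# Standard SQL works unchanged regardless of the backing database.
total, = conn.execute("SELECT SUM(salary) FROM payroll").fetchone()
print(total)  # 110000
conn.close()
```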
Embedded middleware
Embedded middleware provides communication services and software/firmware integration interface that operates between embedded applications, the embedded operating system, and external applications.
Other
Other sources include these additional classifications:
Transaction processing monitors provide tools and an environment to develop and deploy distributed applications.
Application servers: software installed on a computer to facilitate the serving (running) of other applications.
Integration Levels
Data Integration
Integration of data resources like files and databases
Cloud Integration
Integration between various cloud services
B2B Integration
Integration of data resources and partner interfaces
Application Integration
Integration of applications managed by a company
Vendors
IBM, Red Hat, Oracle Corporation and Microsoft are some of the vendors that provide middleware software. Vendors such as Axway, SAP, TIBCO, Informatica, Objective Interface Systems, Pervasive, ScaleOut Software and webMethods were specifically founded to provide more niche middleware solutions. Groups such as the Apache Software Foundation, OpenSAF, the ObjectWeb Consortium (now OW2) and OASIS' AMQP encourage the development of open source middleware. Microsoft's .NET "Framework" architecture is essentially "middleware" with typical middleware functions distributed between the various products, with most inter-computer interaction based on industry standards, open APIs or RAND software licences. Solace provides middleware in purpose-built hardware for implementations that need to operate at large scale. StormMQ provides message-oriented middleware as a service.
See also
Comparison of business integration software
Middleware Analysts
Service-oriented architecture
Enterprise Service Bus
Event-driven SOA
ObjectWeb
References
External links
Internet2 Middleware Initiative
SWAMI - Swedish Alliance for Middleware Infrastructure
Open Middleware Infrastructure Institute (OMII-UK)
Middleware Integration Levels
European Broadcasting Union (EBU) Middleware report.
More detailed supplement to the European Broadcasting Union Middleware report.
ObjectWeb - international community developing open-source middleware
Systems engineering

Military College of Signals

The Military College of Signals, also known as MCS, is a military school located in Rawalpindi, Punjab, Pakistan. It is a constituent college of the National University of Sciences and Technology (NUST), Islamabad. MCS consists of two engineering departments, Electrical Engineering (EE) and Computer Software Engineering (CSE). The college puts a strong emphasis on scientific and technological education and research.
It is located in the heart of Rawalpindi on Humayun Road (Lalkurti) and Adiala Road, close to the Pakistan Army's General Headquarters (GHQ) and the Combined Military Hospital (CMH). The college lies near Benazir Bhutto International Airport and the Rawalpindi Railway Station, and the prime shopping centre of Saddar is also close by.
History
Military College of Signals was raised immediately after the partition of the Indo-Pak Subcontinent in 1947 as the "School of Signals", with the task of training officers and selected Non-Commissioned Officers of the Corps of Signals of the Pakistan Army. The school had to be raised from scratch because the signal training facilities of the undivided Indian Army were located at Poona and Jabalpur, which became part of India at the time of independence. Lt Col C.W.M. Young, a British officer of the Royal Corps of Signals, was the first commandant of the school. During the early years, due to the shortage of telecommunication training facilities in the country, a number of officers were trained at the School of Signals, UK and subsequently at the US Army Signal School at Fort Monmouth, New Jersey. Since its raising, the college has undergone various phases of expansion to meet the requirements of the Corps of Signals.
The status of the school was raised to that of a college in 1977 when it was affiliated with University of Engineering and Technology, Lahore for Telecommunication Engineering degree program.
The college became a constituent campus of the National University of Sciences and Technology in 1991 and has since progressed phenomenally as a centre of quality education. Today the curriculum is not confined to the undergraduate level but also includes MS and PhD programmes.
Campus life
The college has kept military traditions alive. Military rules are applied and National University of Sciences and Technology, Pakistan (NUST) policy is followed too. It provides opportunities to participate in a variety of co-curricular activities. There are students' societies and clubs which organize different activities. Presently, the college has two societies. Both were established in June 2001.
Telecom Society comprises the President, General Secretary, Treasurer, Project Coordinator and Media Coordinator. Elections are held every year.
Software Society comprises the President, Vice President, General Secretary, General Manager, Girls Representative, Extra-Curricular Activities Manager, Sports Manager, Finance Secretary, Event Secretary and EIT. The society has generated various extra-curricular activities and, by organizing seminars, workshops, lectures and short-duration courses, has contributed effectively to academic development.
Companies (Dorms)
There are two companies for Gentleman Cadets (GCs), Jinnah Company and Iqbal Company, while Fatima Company is for civilian female students and GK Hostel houses civilian male students.
Organization and administration
The College of Signals is functionally and administratively controlled by the General Headquarters through the Signals Directorate. It is headed by a Major General or a Brigadier and is sub-divided into four functional groups.
Administrative Wing (Adm Wing) Comprises Administrative Branch (Administration and Quartering Sections), Training Branch, NUST Branch, Examination Branch and MCS Training Battalion.
Combat Wing (Cbt Wing) Comprises Tactical Branch, Telecommunication Branch, Security and Research Branch and Information Technology Branch.
Engineering Wing (Engg Wing) Comprises Electrical Engineering, Computer Software Engineering, Information Security, Humanities and Basic Sciences Departments and Library.
Research and Development Wing (R&D) Mostly focuses on research and development activities of the college.
Academics
Undergraduate program
MCS offers undergraduate degrees in three streams: Electrical Engineering, Computer Software Engineering and Information Security. Before the 2019 batch, only Electrical Engineering in Telecommunication was offered; the programme has since been converted to full Electrical Engineering. In 2020, the Bachelor's degree in Information Security was offered for the first time. Each year a standard NET (NUST Entrance Test) is conducted for undergraduate enrollment in these courses. Students are also admitted via SAT subject tests.
Graduate program
The college also offers master's degrees in Electrical (Telecommunication) Engineering, Software Engineering, Information Security and Systems Engineering. The master's degrees also lead to PhD. Students are enrolled after passing the GAT (Graduate Assessment Test).
Departments
The college has four departments.
Department of Computer Software Engineering (al-Khawarizmi Block)
Department of Electrical Engineering (Shuja ul Qamar Block)
Department of Information Security
Department of Humanities and Basic Sciences (Sharif Block), named after veteran Signals officer Major Sharif, father of Major Shabbir Sharif, recipient of the Nishan-e-Haider (the highest military award for gallantry), and of former Pakistan Army Chief General Raheel Sharif.
Faculty
MCS has about seventy faculty members who are responsible for teaching both graduate and undergraduate courses. As of January 2022, Dr. Asif Masood is the acting dean of MCS. Dr. Adil Masood Siddiqui is the current Head of Electrical Department while Engr. Asim Bakhshi heads the Department of Computer Science and Dr. Abdul Razzaque is responsible for heading the Department of Humanities and Basic Sciences. Dr. Haider Abbas heads Department of Information Security while Dr. Adnan Ahmed Khan heads the Research department.
See also
National University of Science and Technology, Pakistan
Pakistan Military Academy
Army Medical College
College of Aeronautical Engineering
College of Electrical and Mechanical Engineering
Military College of Engineering (Pakistan)
References
External links
MCS official website
NUST official website
Signalianz alumni website
National University of Sciences & Technology
Universities and colleges in Rawalpindi District
Military education and training in Pakistan
Pakistan Army
Engineering universities and colleges in Pakistan
Military academies of Pakistan
Educational institutions established in 1947
1947 establishments in Pakistan
Pakistan Military Academy

@icon sushi

@icon sushi ("aicon sushi" is used where "@" is unsupported in a filename) is a freeware tool for converting and creating computer icons on Microsoft Windows, with support for Windows Vista. It can import icons from ICO, BMP, PNG, PSD, EXE, DLL and ICL formats and export them as ICO, BMP, PNG and ICL. The software is available in multiple languages, including German.
Reception
@icon sushi received mixed reviews. Softonic praised the application for its simplicity and support of XP and Vista icons, but noted that its feature set is too minimalistic. Likewise, ZDNet noted that the software lacks features but is nevertheless user-friendly. Clubic rated the application 3 out of 5, noting that it is portable and simple to use.
The Official Windows Magazine stated that @icon sushi is an alternative to leading commercial software such as Microangelo Toolset for creating detailed, high-resolution icons.
Citrix suggests use of @icon sushi to troubleshoot problems with icons in their software.
See also
List of icon software
References
External links
@icon sushi website (English)
Icon software
Raster graphics editors
Windows graphics-related software
Freeware
Portable software

Hollywood (programming language)

Hollywood is a commercially distributed programming language developed by Andreas Falkenhahn (Airsoft Softwair) which mainly focuses on the creation of multimedia-oriented applications. Hollywood is available for AmigaOS, MorphOS, WarpOS, AROS, Windows, macOS, Linux, Android, and iOS. Hollywood has an inbuilt cross compiler that can automatically save executables for all platforms supported by the software. The generated executables are completely stand-alone and do not have any external dependencies, so they can also be started from a USB flash drive. An optional add-on also allows users to compile projects into APK files.
The Hollywood Designer is an add-on for Hollywood with which it is possible to use Hollywood also as a presentation software and an authoring system.
History
Hollywood has its roots on the Amiga computer. Inspired by Amiga programming languages like AMOS, Blitz BASIC, and Amiga E, Hollywood author Andreas Falkenhahn began development of Hollywood in spring 2002 after finishing his A-levels. Version 1.0 of the software was released in November 2002, but only for 68000-based Amiga systems. A month later, a native version for the PowerPC-based MorphOS followed. Support for WarpOS was introduced with Hollywood 1.9, which appeared in spring 2004 together with the first release of the Hollywood Designer, a tool which can be used to create presentations with Hollywood. AmigaOS 4 has been supported since March 2005. Starting with version 2.0 (released in January 2006), Hollywood has used the Lua programming language as its virtual machine, but with significant modifications in syntax and functionality. Starting with version 3.0 (January 2008), Hollywood for the first time also runs on two operating systems not inspired by the Amiga: Microsoft Windows and macOS. Since version 4.5 (January 2010) Hollywood is also available with an integrated development environment on Windows. Since version 4.8 (April 2011) Hollywood can also compile executables for Linux. Hollywood 5.0 was released in February 2012 and introduced support for video playback and vector image formats like SVG. Starting with version 5.2, Hollywood also supports Android. Hollywood 6.0 was released in February 2015 and introduced support for OpenGL programming via a dedicated plugin as well as support for the Raspberry Pi. Hollywood 7.0 was released in March 2017 and introduced Unicode support and support for 64-bit architectures.
General information
Hollywood's focus is on ease of use and platform independence. It was mainly designed for the creation of games and multimedia applications. The language set comprises roughly 900 different commands from the following fields of application: 2D graphics, sound, file system operations, text output, animations, sprites, layers, transition effects, image manipulation, saving of images and video files, time and date functions, input functions (keyboard, joystick, mouse) as well as mathematical operations and string functions. Programming in Hollywood is done via so called Hollywood scripts (using the file extension *.hws). These scripts are compiled dynamically and can be converted into stand-alone executables. All Hollywood programs run inside a sandbox, which makes it impossible for them to crash.
Platform independence
Hollywood was designed to be a completely platform independent programming language. Thus, scripts cannot call any API functions of the host operating system directly and are limited to the inbuilt command set. Text rendering is also implemented via a platform independent font backend that ensures that TrueType text looks exactly the same on every platform. Furthermore, all versions of Hollywood support Amiga specific file formats like IFF ILBM images, IFF 8SVX sounds, or IFF ANIM files, to be fully compatible with scripts written on an Amiga system.
GUI development
There are several GUI toolkits for Hollywood. RapaGUI is a cross-platform GUI plugin for Hollywood which supports Windows, macOS, Linux, and AmigaOS. RapaGUI uses native GUI controls provided by the respective host operating system giving all RapaGUI applications a native look and feel. MUI Royale is a GUI toolkit for Hollywood which can be used to create GUIs using the Magic User Interface. Another GUI toolkit for Hollywood is HGui. In contrast to RapaGUI and MUI Royale, HGui draws its GUI controls itself which makes its graphical user interfaces look exactly the same on every platform.
Compiler
A special feature of the cross-platform compiler that comes with Hollywood is the ability to link all external files (including fonts) into the executable to be built automatically. This makes it possible to create programs which consist only of a single file and can thus be easily transported and distributed. Additionally, the Hollywood compiler can compile scripts into Hollywood applets (using the file extension *.hwa). These applets are smaller than regular Hollywood programs, but they can only be started on systems that have Hollywood installed. Finally, it is also possible to export Hollywood scripts as AVI videos.
Development environment
There is no integrated development environment for the Amiga-compatible version of Hollywood. On these systems, Cubic IDE and Codebench can be used to develop with Hollywood as these have support for the Hollywood language through plugins. On Windows, Hollywood comes with an integrated development environment that can be used to create Hollywood scripts. The macOS and Linux versions of Hollywood do not come with an IDE either and can be controlled from the console or else integrated into other IDEs.
Hello World program
A Hello World program in Hollywood could look like this:
Print("Hello World!")
WaitLeftMouse
End
The code above opens a new window on the desktop, prints the text "Hello World!" in white letters and waits for the left mouse button before quitting. The opening of the window is automatically done by Hollywood. If not otherwise requested, Hollywood will automatically open a new window in the resolution of 640x480 for every script.
Hollywood Designer
The Hollywood Designer is an add-on for Hollywood that allows the creation of presentations and kiosk systems with Hollywood. The software uses a WYSIWYG-compliant interface based on slides. Users can create as many slides as desired and fill them with texts, graphics, and sound. Hollywood Designer will then run the slides one after another or in a predefined order. Various transition effects are available. Additionally, it is possible to create applications which require user interaction, like kiosk systems.
All projects created in Hollywood Designer are displayed using Hollywood and can thus also be compiled into stand-alone executables or video files. Advanced users can also embed custom code inside their projects. Through custom code it is possible to access the complete command set of Hollywood.
Technically speaking, Hollywood Designer does nothing else but automatically generate scripts for Hollywood according to the layout defined by the user in the GUI. The process of generating scripts and running them using Hollywood is entirely hidden from the user so that programming skills are not necessary for using Hollywood Designer. However, because Hollywood Designer merely generates scripts for Hollywood, the latter is a mandatory requirement for Hollywood Designer.
The first version of Hollywood Designer was released in April 2004. Currently, the software is only available for Amiga compatible operating systems. However, thanks to the Hollywood cross-compiler, it can also save stand-alone executables for Windows, macOS and Linux from the Amiga platform.
References
External links
Homepage of the developer
Cubic IDE, an IDE for Hollywood
CodeBench, an AmigaOS 4 IDE for Hollywood and other languages
An Infochannel created using Hollywood Designer (Norwegian)
VAMP, the Virtual Amiga Multimedia Player (English & Spanish)
Homepage of KeHoSoftware, Hollywood LCARS SmartHome SmartSensor Project (English & German)
Amiga development software
AmigaOS 4 software
AROS software
Cross-platform software
Integrated development environments
MorphOS software
Presentation software
Procedural programming languages
Video game development software
Programming tools for Windows
MacOS programming tools
Programming languages created in 2002

DOS API

The DOS API is an API which originated with 86-DOS and is used in MS-DOS/PC DOS and other DOS-compatible operating systems. Most calls to the DOS API are invoked using software interrupt 21h (INT 21h). By calling INT 21h with a subfunction number in the AH processor register and other parameters in other registers, various DOS services can be invoked. These include handling keyboard input, video output, disk file access, program execution, memory allocation, and various other activities. In the late 1980s, DOS extenders along with the DOS Protected Mode Interface (DPMI) allowed programs to run in either 16-bit or 32-bit protected mode while still having access to the DOS API.
History of the DOS API
The original DOS API in 86-DOS and MS-DOS 1.0 was designed to be functionally compatible with CP/M. Files were accessed using file control blocks (FCBs). The DOS API was greatly extended in MS-DOS 2.0 with several Unix concepts, including file access using file handles, hierarchical directories and device I/O control. In DOS 3.1, network redirector support was added. In MS-DOS 3.31, the INT 25h/26h functions were enhanced to support hard disks greater than 32 MB. MS-DOS 5 added support for using upper memory blocks (UMBs). After MS-DOS 5, the DOS API was unchanged for the successive standalone releases of DOS.
The DOS API and Windows
In Windows 9x, DOS loaded the protected-mode system and graphical shell. DOS was usually accessed from a virtual DOS machine (VDM) but it was also possible to boot directly to real mode MS-DOS 7.0 without loading Windows. The DOS API was extended with enhanced internationalization support and long filename support, though the long filename support was only available in a VDM. With Windows 95 OSR2, DOS was updated to 7.1, which added FAT32 support, and functions were added to the DOS API to support this. Windows 98 and Windows ME also implement the MS-DOS 7.1 API, though Windows ME reports itself as MS-DOS 8.0.
Windows NT and the systems based on it (e.g. Windows XP and Windows Vista) are not based on MS-DOS, but use a virtual machine, NTVDM, to handle the DOS API. NTVDM works by running a DOS program in virtual 8086 mode (an emulation of real mode within protected mode available on 80386 and higher processors). NTVDM supports the DOS 5.0 API. DOSEMU for Linux uses a similar approach.
Interrupt vectors used by DOS
Programs invoke DOS services through a number of dedicated interrupt vectors, of which the function dispatcher INT 21h is the most important.
DOS INT 21h services
Functions are selected by placing a subfunction number in the AH register before issuing INT 21h; they cover keyboard input, console and file I/O, memory allocation, and program execution, among other services.
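For example, a classic real-mode "hello world" for a .COM program selects subfunction 09h (write a string terminated by '$' to standard output) in AH and then terminates via subfunction 4Ch (a sketch in NASM syntax):

```nasm
org 100h            ; .COM programs load at offset 100h

    mov ah, 09h     ; INT 21h subfunction 09h: print $-terminated string
    mov dx, msg     ; DS:DX points to the string
    int 21h         ; invoke the DOS API

    mov ax, 4C00h   ; subfunction 4Ch: terminate, return code 0 in AL
    int 21h

msg db 'Hello, DOS!', 0Dh, 0Ah, '$'
```

The same AH-selects-the-function convention applies to the rest of the INT 21h services.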
Operating systems with native support
MS-DOS – most widespread implementation
PC DOS – IBM OEM version of MS-DOS
OS/2 1.x – Microsoft/IBM successor to MS-DOS and PC DOS
SISNE plus – Clone created by Itautec and Scopus Tecnologia in Brazil
DR-DOS – Digital Research DOS family, including Novell DOS, PalmDOS, OpenDOS, etc.
PTS-DOS – PhysTechSoft & Paragon DOS clone, including S/DOS
ROM-DOS – Datalight ROM DOS version
Embedded DOS – General Software version
FreeDOS – Free, open source DOS clone
ReactOS (IA-32 and x86-64 versions)
Windows 95 – contains MS-DOS 7.0
Windows 98 – contains MS-DOS 7.1
Windows 98 SE – contains MS-DOS 7.1
Windows ME – contains MS-DOS 8.0
Operating systems with DOS emulation layer
Concurrent CP/M-86 (3.1 only) with PCMODE – Digital Research CP/M-86-based OS with optional PC DOS emulator
Concurrent DOS – Digital Research CDOS family with built-in PC DOS emulator
DOS Plus – a stripped-down single-user variant of Concurrent PC DOS 4.1–5.0
Multiuser DOS – Digital Research/Novell MDOS family including Datapac System Manager, IMS REAL/32, etc.
OS/2 (2.x and later) – IBM operating system using a fully-licensed MS-DOS 5.0 in a virtual machine
Windows NT (all versions except 64-bit editions)
Other emulators
NTVDM for Windows NT
DOSEMU for Linux
DOSBox
See also
BIOS interrupt call
Ralf Brown's Interrupt List (RBIL)
Comparison of DOS operating systems
DOS Protected Mode Interface (DPMI)
DOS extender
DOS MZ executable
COMMAND.COM
References
Further reading
(xvii+1053 pages; 29 cm) (NB. This original edition contains flowcharts of the internal workings of the system. It was withdrawn by Microsoft before mass-distribution in 1986 because it contained many factual errors as well as some classified information which should not have been published. Few printed copies survived. It was replaced by a completely reworked edition in 1988. )
(xix+1570 pages; 26 cm) (NB. This edition was published in 1988 after extensive rework of the withdrawn 1986 first edition by a different team of authors. )
The New Peter Norton Programmer's Guide to the IBM PC & PS/2 by Peter Norton and Richard Wilton, Microsoft Press, 1987 .
The Programmer's PC Sourcebook by Thom Hogan, Microsoft Press, 1991
Microsoft MS-DOS Programmer's Reference - The Official Technical Reference to MS-DOS, Microsoft Press, 1993
IBM PC DOS 7 Technical Update
(Printed in the UK.)
External links
The x86 Interrupt List (a.k.a. RBIL, Ralf Brown's Interrupt List)
ctyme.com - INT Calls by function
wustl.edu - Description of MS-DOS services
DOS technology
Operating system APIs
X86 architecture
Interrupts

E-government in Europe

All European countries show eGovernment initiatives, mainly related to the improvement of governance at the national level. Significant eGovernment activities also take place at the European Commission level. There is an extensive list of eGovernment Fact Sheets maintained by the European Commission.
eGovernment at the European Commission level
The European Commission is actively supporting eGovernment both at the national level and at its own supranational level. The vice-president for Administrative Affairs is responsible for the advancement of eGovernment at the Commission level through large-scale activities that implement the e-Commission strategy. The Information Society and Media Directorate-General and the Directorate-General for Informatics implement this strategy, through several programmes and related activities. Two of the most prominent such initiatives are the IDABC programme, and its successor, ISA. IDABC is guided and monitored by a team of national experts. The eGovernment policy of the European Commission until 2010 is described by the i2010 Action Plan that defines the principles and directions of eGovernment policy of the European Commission.
Other projects funded by the European Commission include Access-eGov, EGovMoNet and SemanticGov.
In view of the next five-year period, the ministers responsible for eGovernment of the EU convened in Malmö, Sweden and unanimously presented the Ministerial Declaration on eGovernment on 18 November 2009. This document presents the vision, political priorities and objectives of the EU for the period 2010–2015.
The next eGovernment Action Plan that describes the directions for the period starting at 2011 has been announced by the European Commission under the name Europe 2020 and is expected to be formally adopted in June 2010.
Besides the member states, most (if not all) eGovernment activities of the European Commission also target the candidate countries and the EFTA countries. The complete list comprises 34 countries: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia, Malta, Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, and the United Kingdom.
IDABC
IDABC stands for Interoperable Delivery of European eGovernment Services to public Administrations, Businesses and Citizens.
IDABC was a European Union Program launched in 2004 that promoted the correct use of Information and Communication Technologies (ICT) for cross-border services in Europe. It aimed to stimulate the development of online platforms delivering public e-Services in Europe. It used the opportunities offered by ICT to encourage and support the delivery of cross-border public sector services to citizens and enterprises in Europe, to improve efficiency and collaboration between European public administrations and to contribute to making Europe an attractive place to live, work and invest.
To achieve objectives like 'Interoperability', IDABC issued recommendations, developed solutions and provided services that enable national and European administrations to communicate electronically while offering modern public services to businesses and citizens in Europe. In the context of IDABC the European Interoperability Framework version 1.0 was issued.
The programme also provided financing to projects addressing European policy requirements, thus improving cooperation between administrations across Europe. National public sector policy-makers were represented in the IDABC programme's management committee and in many expert groups. This made the programme a unique forum for the coordination of national eGovernment policies.
By using state-of-the-art information and communication technologies, developing common solutions and services and by finally, providing a platform for the exchange of good practice between public administrations, IDABC contributed to the i2010 initiative of modernising the European public sector. IDABC was a Community programme managed by the European Commission's Directorate-General for Informatics.
In 2008, IDABC launched the Semantic Interoperability Centre Europe (SEMIC.EU). eGovernment and other pan-European collaborations can exchange their knowledge and their visions on SEMIC.EU. In the same year, IDABC launched the OSOR.eu website with the aim to facilitate the collaboration between public administrations in their use of open-source software.
IDABC followed the Interchange of Data across Administrations (IDA) program.
The new Interoperability Solutions for European Public Administrations (ISA) programme was adopted by the Council and the European Parliament in September 2009 and has replaced the IDABC programme, which came to an end on 31 December 2009.
Interoperability Solutions for European Public Administrations (ISA)
Interoperability Solutions for European Public Administrations or ISA is the name of a European Commission eGovernment programme for the period 2010–2015. On 29 September 2008, the Commission approved a proposal for a decision of the European Parliament and the Council of Ministers on a new programme for the period 2010–15. This programme is the follow-on of IDABC which came to an end on 31 December 2009. The ISA programme is focusing on back-office solutions supporting the interaction between European public administrations and the implementation of Community policies and activities.
ISA supports cross-border large-scale projects launched under the auspices of the ICT Policy Support Programme.
EU member states
Austria
Brief history
eGovernment got an early start in Austria. Since its beginning, public authorities and eGovernment project teams have continually been working to expand and improve public services and underlying processes.
In 1995 the Federal Government set up an Information Society Working Group tasked with identifying the opportunities and threats posed by the development of the information society. In 1998 an IT-Cooperation Agreement was signed between the federal state and the various regions. In May 2003, the Austrian federal government launched an eGovernment initiative, the eGovernment Offensive, to co-ordinate all eGovernment activities in the country. The following year the short-term goal of the eGovernment Offensive – achieving a place in the EU's top 5 eGovernment leaders – was fulfilled, as Austria was ranked No. 4 in the annual eGovernment benchmarking survey. In 2007, according to the study The User Challenge – Benchmarking the Supply of Online Public Services, Austria became the first EU member state to achieve a 100% fully-online availability score for all citizen services.
Legislation
The eGovernment Act, the General Law on Administration Processes, and the Electronic Signature Act set the main eGovernment framework in Austria. Austria was the first EU Member State to implement the EU Electronic Signatures Directive.
The Austrian legal eGovernment framework (substantially revised at the end of 2007) defines the following principles for the Austrian eGovernment strategy:
Proximity to citizens
Convenience through efficiency
Trust and security
Transparency
Accessibility
Usability
Data security
Cooperation
Sustainability
Interoperability
Technological neutrality
Key to the eGovernment evolution process in Austria is the introduction of electronic data processing systems based upon citizen cards. Service providers from the public and the private sector can provide electronic services using the citizen cards for authentication.
Key actors
The central element of eGovernment in Austria is the Digital Austria platform, which is supervised by the Federal Chief Information Officer, who also advises the federal government on policy, strategy and implementation issues. The State Secretary in the Federal Chancellery is responsible for eGovernment strategy at the federal level.
Belgium
In the 2009 Smarter, Faster, Better eGovernment – eighth Benchmark Measurement report prepared for the European Commission, Belgium ranks 16th among the EU27+ countries with respect to the full online availability of eGovernment services and 12th with respect to their online sophistication. As far as other major information society benchmarks are concerned, the country is placed 24th (out of 189 countries) in the UN eGovernment Readiness Index 2008 and 18th (out of 133 countries) in the WEF Global Competitiveness Index 2009–2010.
Strategy
The Belgian eGovernment strategy aims to create a single virtual public administration characterised by fast and convenient service delivery that respects the privacy of users. Services are to be developed around the needs of citizens, putting in place complete electronic administrative procedures independently of the authorities actually involved. In addition, simplified procedures shall reduce bureaucracy. To this end, the strategy suggests four main streams around which all efforts should be structured:
Re-engineering and integrating service delivery around users' needs and life events;
Cooperation among all levels of Government;
Simplification of administrative procedures for citizens and businesses;
Back office integration and protection of personal data.
Taking into account the federal structure of Belgium, the second strategic stream addresses the implementation of eGovernment efforts throughout all levels of Federal, Regional, and Community authorities. The framework for this co-operation was set by the eGovernment Cooperation Agreement, adopted in 2001, expressing in particular the commitment of all Government layers to use the same standards, identification infrastructure, and eSignature. This agreement was later renewed and enhanced by a cooperation agreement on the principles of a seamless eGovernment in 2006. Key aspects addressed by the latter document include:
provision of public services following an intention-based and user-friendly approach, placing emphasis on security and confidentiality;
ensuring interoperability of eGovernment solutions;
maximising the re-usability of eGovernment developments and services;
ensuring that data is collected only once and re-used to the maximum extent.
Legislation
To achieve the objectives of the second cooperation agreement, a resolution on seamless government was adopted in 2006, focusing on close cooperation regarding the identification and implementation of principles for a seamless eGovernment, and on the development and usage of the corresponding services. At regional and community level, further eGovernment strategies have been put in place within the framework of competencies of the respective administrations.
At Federal level, the Minister for Enterprise and Simplification holds the responsibility for the computerisation of public services. The minister is responsible for FedICT, the federal agency in charge of eGovernment and the information society. This agency aims in particular at:
developing a common eGovernment strategy,
coordinating related actions, and
ensuring its consistent implementation within the Federal Administration.
Key actors
Coordination and implementation of eGovernment services in the social sector lies with the Crossroads Bank for Social Security (CBSS). Additional eGovernment projects are being implemented by further federal departments, ministries and agencies on a joint or individual basis. At regional level, dedicated entities have been created for the implementation of the respective strategies, namely the Coordination Cell for Flemish e-Government (CORVE) in Flanders, the eAdministration and Simplification Unit (EASI-WAL) in Wallonia, and the Brussels Regional Informatics Centre (BRIC) in the Brussels-Capital Region.
The Federal portal belgium.be serves as a single access point to all eGovernment services for both citizens and businesses. The content is offered in French, Dutch, German and English. In addition, dedicated portals have been set up for the different regions of Belgium offering a broad spectrum of relevant information. These are the Flemish regional portal vlaanderen.be, the Walloon regional portal wallonie.be, and the Brussels regional portal.
At community level, the portals of the French-speaking Community and of the German-speaking Community mainly focus on information on communities' administrative procedures and services.
With respect to the exchange of information in the public sector, the Federal Metropolitan Area Network (FedMAN) constitutes a high-speed network connecting the administrations of 15 federal ministries and the Government service buildings in Brussels.
National infrastructure
In the area of eIdentification, Belgium launched a large-scale distribution of electronic identity cards in 2004. Beyond their function as traditional identification and travel documents, Belgian eID cards can be used for identification in restricted online services. They are implemented as smart cards containing two certificates: one used for authentication, and another for generating digital signatures. The eID cards can be used within almost all governmental electronic signature applications. Moreover, an electronic ID card for under-12s (Kids-ID) was introduced in March 2009, enabling children to access children-only Internet chat rooms as well as a range of emergency phone numbers.
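The card's two-certificate design keeps the authentication and signing roles strictly separate. The toy sketch below (Python, standard library only) illustrates that separation of roles; HMAC keys stand in for the card's real X.509 certificates and asymmetric key pairs, so this is an illustration of the role split, not of the actual cryptography:

```python
# Toy model (NOT real cryptography) of an eID card holding two separate
# credentials: one used only to authenticate, one used only to sign.
# Independent HMAC keys stand in for the card's two certificates.
import hashlib
import hmac
import secrets

class ToyEIDCard:
    def __init__(self):
        self._auth_key = secrets.token_bytes(32)  # stands in for the authentication certificate
        self._sign_key = secrets.token_bytes(32)  # stands in for the signing certificate

    def authenticate(self, challenge: bytes) -> bytes:
        # Challenge-response: prove possession of the authentication credential.
        return hmac.new(self._auth_key, challenge, hashlib.sha256).digest()

    def sign(self, document: bytes) -> bytes:
        # Signing uses the *other* credential.
        return hmac.new(self._sign_key, document, hashlib.sha256).digest()

card = ToyEIDCard()
challenge = secrets.token_bytes(16)
# Because the two roles use independent keys, an authentication response
# can never be replayed as a document signature, and vice versa.
assert card.authenticate(challenge) != card.sign(challenge)
```

The design choice the sketch highlights is that compromising or reusing one credential does not affect the other, which is why the real card carries two distinct certificates rather than one multi-purpose key.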
In the area of eProcurement, an eNotification platform was launched in 2002. This platform is currently used by all federal authorities for notifying invitations to tenders. Businesses can browse through the published notices to identify tender opportunities. This system communicates with the eTendering platform enabling published notices to be accessed and processed by economic operators and contracting authorities within the framework of the tendering phase.
Finally, the Belgian eGovernment relies on the concept of authentic sources. According to this approach, public entities store the data collected from citizens only once in their databases and, whenever needed, they exchange missing data among themselves. Such databases include:
the National Register (basic data of Belgian citizens);
the Crossroads Bank for Enterprises (business register containing all authentic sources for all Belgian enterprises); and
the Crossroads Bank for Social Security Register (data relating to persons registered with the Belgian Social security).
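The authentic-sources approach can be sketched as follows. This is a minimal illustration in which the registry names mirror those above, but the record fields and identifiers are invented for the example and do not reflect the real Belgian schemas:

```python
# Toy model of the "authentic sources" principle: each piece of data is
# stored by exactly one authoritative registry, and other public entities
# query that registry instead of asking the citizen again.

class AuthenticSource:
    """A registry that is the single authoritative holder of some data."""
    def __init__(self, name):
        self.name = name
        self._records = {}

    def store(self, person_id, data):
        self._records[person_id] = data

    def lookup(self, person_id):
        return self._records.get(person_id)


class PublicService:
    """A service that never re-collects data already held by a source."""
    def __init__(self, sources):
        self.sources = sources  # mapping of source name -> AuthenticSource

    def build_citizen_file(self, person_id):
        # Fetch each attribute from its authoritative source only.
        citizen_file = {}
        for name, source in self.sources.items():
            record = source.lookup(person_id)
            if record is not None:
                citizen_file[name] = record
        return citizen_file


national_register = AuthenticSource("National Register")
national_register.store("85.07.30-033.61", {"name": "Jan Peeters", "city": "Gent"})

social_security = AuthenticSource("Crossroads Bank for Social Security")
social_security.store("85.07.30-033.61", {"status": "employed"})

service = PublicService({"national_register": national_register,
                         "social_security": social_security})
print(service.build_citizen_file("85.07.30-033.61"))
```

The point of the pattern is that the citizen supplies each fact once, to one registry; any other entity that needs the fact queries the registry rather than the citizen.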
Bulgaria
Bulgaria, through the continuous development of its information technology infrastructure and the expansion of related services, has made significant progress in the eGovernment sector.
Indicative achievements that demonstrate this progress include the creation of:
egov.bg: the official eGovernment portal.
a National Health portal.
eID cards.
eHealth cards.
the eSender service.
the ePayment Gateway.
Strategy
The Bulgarian government has developed an eGovernment strategy with the aim of rendering the Bulgarian economy more competitive while satisfying the needs of citizens and businesses through efficient and effective administrative services. The principal eGovernment activities focus on:
The development of central eGovernment systems, comprising the creation of an eGovernment Web portal, and the launch of a communication strategy to inform the public on eGovernment and safety when using those systems and the services provided.
Support to regional and local administrations, namely, one stop shop services for regional administrations and technical assistance to municipal administrations (hardware, software, and eGovernment best practices).
Training of the administrations' staff in information technologies and the use of eGovernment services; the training can be divided into training for IT specialists, and mass training for the administration employees.
Key actors
The Ministry of Transport, Information Technology and Communications (MTITC) is responsible for laying down the policies (at national and regional levels) that govern eGovernment strategy in Bulgaria, and also for coordination and the provision of the necessary support. However, the implementation of eGovernment projects falls under the responsibility of the competent ministries and administrative bodies.
Croatia
Significant progress has been achieved in Croatia, as the Government has made a considerable effort to develop the Information Society infrastructure and improve the relevant eServices. The eGovernment strategy of the Government of Croatia is defined in the eCroatia Programme. The main objective of that programme is the continuous development and provision of accessible eServices to citizens and businesses in various fields, namely public administration, health, education, and the judicial system. The outcome of these actions will thus contribute to diminishing bureaucracy and minimising illegal activities, while reducing the cost of government operations and facilitating government interaction with citizens and businesses.
An ICT legal framework has been created and regulated by a set of laws which is also supplemented by the Convention on Cybercrime (OG 173/2003) and the Electronic Document Act (OG 150/2005). That legal framework covers a wide variety of domains:
Freedom of Information Legislation (Act on the Right of Access to Information),
Data Protection/Privacy Legislation (Law on Personal Data Protection (OG 103/2003)),
eCommerce Legislation (Electronic Commerce Act (OG 173/2003)),
eCommunications Legislation (Electronic Communications Act (OG 73/2008)),
eSignatures Legislation (Electronic Signature Act (OG 10/2002)), and
eProcurement Legislation (Law for Public Procurement (OG 117/01)).
In 2010, the body responsible for laying down the eGovernment policies and strategies and for the coordination and implementation of the eCroatia Programme was the Central State Administrative Office for eCroatia.
Cyprus
Brief history
In March 1989, a National Government Computerisation Master Plan was adopted for the period 1989–1997, aiming to examine governmental information needs and identify potential ICT applications. To speed up implementation of the plan, the subsequently adopted Data Management Strategy (DMS) provided structural information to fulfil the requirements of the public sector. At a later stage, the Information Systems Strategy (ISS) acted as a complementary plan aiming to provide a good quality of service to the public. Eventually, in 2002, the eGovernment Strategy was adopted, thus updating the ISS. Since January 2006 all government ministries, departments, and services have their own website. In the same year the first government web portal was launched, making accessible several governmental and non-governmental websites and many informative and interactive services. Since 2008 the main aim of the strategy has been to take steps towards productivity and growth until 2015, closely following EU policies and directives. Many of the basic objectives of the eEurope Action Plan were fulfilled and the government is now promoting the Lisbon strategy of the European Commission.
Legislation
Although there is no specific eGovernment legislation in Cyprus, section 19 of the Cyprus Constitution protects the "right to freedom of speech and expression". The data protection and privacy is being ensured by two main laws: the Processing of Personal Data (Protection of Individuals) Law, which came into force in 2001, and the Retention of Telecommunication Data for Purposes of Investigation of Serious Criminal Offences Law of 2007.
The Law for Electronic Signatures (N. 188(I)/2004) establishes the legal framework around additional requirements for the use of electronic signatures in the public sector; however, it does not change any rules created by other legislation regarding the use of the documents.
The eProcurement legislation in Cyprus came into force at the beginning of 2006. At a later stage, the eProcurement system was implemented based on the provisions of the specific law (N.12(I)2006). The system aims to support the electronic publication and evaluation of tenders, and is available free of charge to all contractors in the Republic of Cyprus and all economic operators in Cyprus and abroad.
Actors
In February 2009, the Minister of Communications and Works became the minister in charge of the information society. A national information society strategy was established by the Department of Electronic Communications, whereas an advisory committee, chaired by the Permanent Secretary of the Ministry of Communications and Works, comprises representatives of the relevant ministries, industry, and academia.
The Directorate for the Coordination of the Computerisation of the Public Sector is in charge of the computerisation project of the civil service. The departments responsible for the implementation of the information technology are:
Ministry of Finance – Department of Information Technology Services (DITS).
Ministry of Finance – Department of Public Administration and Personnel (PAPD).
Ministry of Communications and Works.
Office of the Commissioner of Electronic Communications and Postal Regulation.
Government Offices, e.g. Police, and Army.
The infrastructure components of Cyprus include the Cyprus Governmental portal, the Government Data Network (GDN), the Government Internet Node (GIN), the eProcurement System, and the Office Automation System (OAS).
Czech Republic
Strategy
The core principles guiding the development of eGovernment are listed in a policy document applicable for the period 2008–2012, the Strategy for the development of Information Society services ("Strategie rozvoje služeb pro informační společnost" in Czech). The eGovernment concept it contains can be summed up as follows: eGovernment is the means to satisfy citizens' expectations of public services while modernising the public administration, in a way that cuts both red tape and costs. Citizen satisfaction stands as the ultimate indicator of success. To achieve it, the relevant legal basis must be established and the supporting infrastructure must be made interoperable.
Major achievements
The Czech Republic is one of the few EU Member States to have an eGovernment Act. The Czech eGovernment Act – the Act on Electronic Actions and Authorised Document Conversion ("ZÁKON o elektronických úkonech a autorizované konverzi dokumentů" in Czech) – has been in force since July 2009, and it provides for the following set of principles:
Electronic documents are as legally valid as paper documents;
The digitalisation of paper documents is enabled;
Electronic document exchange with and within the public administration must be as simple as possible and fully secured, by means of certified electronic signatures;
Government to Business (G2B) and Government to Government (G2G) communications shall be stored in the dedicated Data Box set up by each legal person, whether private or public. More than a simple email box, a Data Box is an authenticated communication channel. Citizens may set up a Data Box if they so wish.
It is worth highlighting that the Data Boxes Information System was successfully activated on 1 November 2009, as required by the eGovernment Act.
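A Data Box differs from ordinary e-mail mainly in that the box holder's identity is verified and every delivery is logged. The sketch below is a minimal model of that behaviour; the field names and identifiers are invented for illustration and are not the real system's interface:

```python
# Toy sketch of a Data Box: an authenticated message channel between a
# public authority and a legal person. Unlike plain e-mail, every message
# is tied to a verified box owner and its delivery is recorded in a log.
import datetime

class DataBox:
    def __init__(self, owner_id, owner_name):
        self.owner_id = owner_id      # verified identity of the box holder
        self.owner_name = owner_name
        self.messages = []
        self.delivery_log = []        # audit trail of every delivery

    def deliver(self, sender_id, document):
        timestamp = datetime.datetime.now(datetime.timezone.utc)
        self.messages.append({"from": sender_id,
                              "document": document,
                              "delivered_at": timestamp})
        # The log entry records who delivered what to whom, and when.
        self.delivery_log.append((sender_id, self.owner_id, timestamp))
        return timestamp  # acts as a delivery receipt for the sender

box = DataBox("CZ-12345678", "Example s.r.o.")
receipt = box.deliver("ministry-of-interior", "tax-assessment-2009.pdf")
print(len(box.messages), len(box.delivery_log))
```

The logged, time-stamped receipt is what lets electronic delivery carry the same legal weight as registered paper mail.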
Other noteworthy achievements include:
The Public Information Portal, selected online information and electronic services from central and local government alike.
All of the public services aimed at businesses have been made available online.
The Czech POINTs network – a network of over 3700 offices (data available in August 2009) disseminated throughout the entire territory. They are the citizens' local contact points with the Public Administration. In these offices, citizens can request and obtain printouts of public register extracts.
The Tax Portal for the Public.
The eJustice Portal (which does not yet contain Czech laws).
Actors
Responsibility for steering and coordinating the eGovernment policy lies with the Czech Ministry of Interior. The latter is assisted in this task by the Deputy Minister for Information Technology. The Government Council for the Information Society provides expert and technical support. At local level, the regions and municipalities perform their own eGovernment initiatives under the supervision of the Ministry of the Interior.
Denmark
Overview
According to the Towards Better Digital Service: Increased Efficiency and Stronger Collaboration strategy paper, eGovernment in Denmark has taken considerable steps towards developing an effective network of public electronic services. As stated on page 6 of this document:
To this end, the Danish government, the Local Government Denmark (LGDK), and the Danish Regions have joined their forces.
Strategy
Denmark's eGovernment Policy is based on the following three priority areas:
better digital service;
increased efficiency through digitalisation; and
stronger, binding collaboration on digitalisation.
The Realising the Potential (2004–2006) strategy paper added impetus to the development of the public sector's internal digitalisation, while the Towards eGovernment: Vision and Strategy for the Public Sector in Denmark (2001–2004), marked the beginning of a joint cooperation among the municipal, regional, and State levels of administration towards digitalisation.
Key actors
The main actors that implement, co-ordinate, support, and maintain eGovernment policies in Denmark are the Ministry for Science, Technology, and Innovation, the Steering Committee for joint-government cooperation (STS), the Digital Task Force, the National IT and Telecom Agency, and Local Government Denmark.
In the 2009 European eGovernment Awards, the Danish Genvej portal won the eGovernment Empowering Citizens prize.
Three more portals were among the finalists of the 2009 eGovernment Awards: the Oresunddirekt Service was among the finalists in the category "eGovernment supporting the Single Market", the EasyLog-in in the category "eGovernment Enabling Administrative Efficiency and Effectiveness", and the NemHandel – Open shared e-business infrastructure in the category "eGovernment Empowering Businesses".
Estonia
Estonia is widely recognized as e-Estonia, as a reference to its tech-savvy government and society. e-Government in Estonia is an intertwined ecosystem of institutional, legal and technological frameworks that jointly facilitate an independent and decentralized application development by public and private institutions to replace conventional public services with digital ones. The most crucial components of e-Government in Estonia are digital identification of citizens (e-ID), a digital data exchange layer (X-Road) and ultimately, a layer of applications developed by different public and private institutions.
Estonia has been developing its e-government programme, with the support of the European Union, since 1996, beginning with the introduction of e-banking. In 2017, Estonia described its digital inclusiveness under the name of e-government, with a wide array of e-services in government affairs, political participation, education, justice, health, accommodation, police, taxes, and business.
Estonia conducts legally binding i-Voting at national and local elections and offers e-Residency to foreigners.
Estonia shares its knowledge of developing its e-society with other countries through its E-Governance Academy. The academy has trained over 4,500 officials from more than 60 countries and has led or participated in more than 60 international ICT projects at the national, local and organisational levels.
Estonian ID-card project
Estonia's success in moving its public services online is based on the widespread use of electronic identification cards. Since 2002 about 1.2 million of these credit-card-sized personal identification documents have been issued, allowing citizens to digitally identify themselves and sign documents, vote electronically (since 2005), create a business, verify banking transactions, use the card as a virtual ticket, and view their medical history (since 2010).
ID-cards are compulsory for all citizens and they are equally valid for digital and physical identification. Physically, they are valid for identification in Estonia, but more importantly, they are also valid for travel in most European countries. Thus, in addition to their primary functionality – digital identification – ID-cards are effectively used as replacements for traditional identification documents.
The digital ID project started as early as 1998, when Estonia began seeking solutions for digitally identifying its citizens. By 1999 a viable project in the form of the current ID-card had been proposed, and the legal framework enabling digital identification was set up in the following years. In 2000 the Identity Documents Act and the Digital Signatures Act, the two most important bills regulating the use of digital IDs, were passed in parliament. The first states the conditions to which an ID-card must adhere and, most importantly, makes the ID-card compulsory for all Estonian citizens. The latter states the conditions for a state-governed certification registry, which is fundamental to the functioning of the digital ID-card. Following these events, the first ID-cards were issued in January 2002. Since then about 1.24 million digital ID-cards have been issued. By the end of 2014 the digital ID-card had been used about 315 million times for personal identification and 157 million times for digital signatures. Over the 12 years from 2003 to 2014, usage grew on average by about 7.4 million authentications and about 3.5 million signatures per year.
X-Road system
As technology is the primary enabler of e-governance, the critical question is how to ensure secure communication between scattered government databases and institutions that use different procedures and technologies to deliver their services. Estonia's solution to this problem was to develop X-Road, a secure internet-based data exchange layer that enables the state's different information systems to communicate and exchange data with each other.
X-Road serves as a platform for application development through which any state institution can relatively easily extend its physical services into the electronic environment. For example, if an institution, or a private company for that matter, wishes to develop an online application, it can apply to join the X-Road and thereby automatically gain access to any of the following services: client authentication (by ID-card, mobile-ID or the internet banks' authentication systems); authorisation; registry services; query design services for various state-managed data depositories and registries; data entry; secure data exchange; logging; query tracking; a visualisation environment; central and local monitoring; etc. These services are automatically provided to those who join the X-Road, and they provide vital components for subsequent application design. X-Road therefore offers a seamless point of interaction between those extending their services online and the different state-managed data sets and services.
Another important feature of X-Road is its decentralised nature. X-Road is a platform, an environment, for efficient data exchange, but it has no monopoly over the individual data repositories belonging to the institutions that join it. Moreover, by its very design X-Road requires every joining institution to share its data with others where required and necessary. As such, every joining institution and every developed application can use the data stored in other repositories, and is even legally encouraged to do so in order to avoid repetitive data collection from the client side. Because data sharing enables the development of more convenient services than any single institution could pull off single-handedly, the system implicitly incentivises the re-use of data. The incentive works because such a collective process allows for a seamless and more efficient user experience, increasing the interest both of state institutions in developing digital services and of individuals in reaching out to the state.
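The division of labour described above, a routing and logging layer that holds no data of its own plus member institutions that keep their repositories but answer each other's queries, can be sketched as follows. This is a deliberately simplified model; the real X-Road adds PKI, encryption and signed, time-stamped logs, and the registry names and identifiers here are invented:

```python
# Minimal sketch of an X-Road-style data exchange layer. The layer routes
# queries between member institutions and logs every exchange, but it
# stores no citizen data itself: each member keeps its own repository.

class XRoadLayer:
    def __init__(self):
        self.members = {}   # institution name -> query handler
        self.log = []       # every exchange is logged for auditability

    def join(self, name, handler):
        """A joining institution exposes a handler for incoming queries."""
        self.members[name] = handler

    def query(self, requester, provider, person_id):
        # Only members may exchange data over the layer.
        if requester not in self.members or provider not in self.members:
            raise PermissionError("both parties must be X-Road members")
        self.log.append((requester, provider, person_id))
        return self.members[provider](person_id)


xroad = XRoadLayer()

# The population registry keeps its own data and merely answers queries.
population = {"47101010033": {"name": "Mari Maasikas"}}
xroad.join("population_registry", lambda pid: population.get(pid))
xroad.join("health_board", lambda pid: None)  # joins in order to query others

# The health board re-uses existing data instead of asking the citizen again.
print(xroad.query("health_board", "population_registry", "47101010033"))
```

Note how the layer owns only the routing and the audit log; the decentralisation comes from each member retaining its repository behind its own handler.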
e-Services
Once digital identification and the data exchange layer were provided by the state, different institutions developed their own extensions of their services into the digital realm. Numerous online public services are available to Estonian citizens and residents, including eID, e-signature, e-taxes, online medical prescriptions, i-voting, e-police, e-health care, e-notary, e-banking, e-census, e-school and much more. Driven by convenience, most of the services offer efficiency in terms of money and time saved by citizens as well as by public officials. For example, selling a car in Estonia can be done remotely in less than 15 minutes, filing an online tax declaration takes an average person no more than five minutes, and participating in elections by Internet voting takes 90 seconds on average.
Strategy
The basic policy documents concerning national e-Government in Estonia are the Principles of the Estonian Information Policy, approved in May 1998, and the Principles of the Estonian Information Policy 2004–2006, approved in spring 2004. In 2007 the Estonian Information Society Strategy 2013 entered into force, setting the objectives for ICT use in the period 2007–2013. In 2005 a nationwide information security policy was launched, aiming to create a safe Estonian information society for business and consumers.
The legal foundations for realization of e-Government in Estonia were laid down in 1996–2001, with the following Acts adopted by the Estonian Parliament:
the Personal Data Protection Act, which entered into force in 1996: the Act protects personal rights in terms of personal data processing;
the Digital Signatures Act which entered into force in 2000: the Act defines the legal validity of electronic vs. handwritten signatures; and
the Public Information Act which entered into force in 2000: the Act aims to establish an Administration System where all databases and information systems should be registered.
In 1998 the Government of Estonia adopted the principles of the Estonian Information Society as well as the Information Policy Action Plan – the country's first Information Society strategy documents.
The main body for the development and implementation of the state information policy in Estonia is the Ministry of Economic Affairs and Communications, and especially its Department of State Information Systems (RISO). Moreover, the Estonian Informatics Centre (RIA) develops the main governmental infrastructure components.
The following components can be mentioned as examples of the Estonian eGovernment infrastructure:
The national eGovernment portal, launched in 2003;
EEBone;
Public Procurement State Register;
X-road middleware; and
Health Information System.
In 2018, the Government Office and the Ministry of Economic Affairs and Communications of Estonia announced the launch of a cross-sectoral project to analyse and prepare for the implementation of artificial intelligence ("kratts"). An expert group has been set up comprising state authorities, universities, companies, and independent experts. As part of this project, regulation for AI will be drawn up.
Finland
In the 8th EU Benchmark report prepared in 2009 for the European Commission, Finland is considered a top performer in most eGovernment and information society benchmarks. In this report, Finland ranks seventh among the EU27+ countries with respect to both the full online availability of its services and their online sophistication. The latter indicator reflects the maturity of online services by classifying them into five categories (sophistication levels) according to their transactional abilities. In this area Finland belongs to the so-called fast growers, i.e. countries which improved their relative performance by at least 10% in comparison to the 2007 sophistication results. In addition, the country holds a top position (third place) with respect to the provision of automated/personalised eGovernment services, i.e. those reaching the pro-active fifth sophistication level. Besides, Finland is among the countries featuring the maximum possible score (100%) regarding usability of eGovernment services, user satisfaction monitoring, and user-focused portal design.
Strategy
Finland's long-term strategic vision with respect to eGovernment is laid down in the National Knowledge Society Strategy 2007–2015 document, adopted in September 2006. This strategy aims to turn Finland into an "internationally attractive, humane and competitive knowledge and service society" by the year 2015. To achieve this vision, the strategy focuses on four main strategic intents, namely those of:
"Making Finland a human-centric and competitive service society": multi-channel, proactive, and interactive eServices shall be put in place at the disposal of citizens and businesses. These services shall operate in a customer-oriented and economical manner. Moreover, the acquisition processes of enterprises and the public administration should be made electronic throughout the purchase and supply chain.
"Turning ideas into products; a reformed innovation system": New products, services and social innovations shall be developed on the basis of cooperation between universities, research institutions, public administration, organisations, and enterprises. Thereby, design and user orientation of the envisaged products or services are considered to be key success factors.
"Competent and learning individuals and work communities": appropriate measures shall be taken to help citizens develop their knowledge, as this is an important factor for securing Finland's competitiveness in the long term.
"An interoperable information society infrastructure, the foundation of an Information Society": Finland aims to put in place by 2015 a reliable information and communications infrastructure that shall feature high-speed connections, comprehensive regional coverage, and a 24/7 availability. Paired with increased security and the availability of electronic identification, this infrastructure shall form the basis for the provision of innovative digital services.
Moreover, in April 2009, the SADe programme was set up to serve as a national action plan for putting forward eServices and eDemocracy for the period 2009–2012. This plan is aimed at:
providing all basic public eServices for citizens and for businesses by electronic means by 2013;
ensuring interoperability of information systems in the public administration; and
facilitating access to public services through a common client interface.
Within this framework, the SADe Services and Project report 2009 was published in January 2010 as an update on the programme's implementation. This report constitutes a proposal for the main plans and measures for eServices and eAdministration to be followed to foster developments in the information society in the period 2009–2012.
Actors
Responsibility for eGovernment in Finland lies with the Ministry of Finance, which is also responsible for the public administration reform and the general Finnish ICT policy. Within the ministry, the Public Management Department is responsible for ICT policy coordination as well as for services provision and quality. Moreover, created in early 2005, the State IT Management Unit forms a part of the Public Management Department, in charge of preparing and implementing the Government's IT strategy. This unit is headed by the State IT Director who also acts as a government-wide chief information officer (CIO). Further relevant actors include:
the Advisory Committee on Information Management in Public Administration (JUHTA, within the Ministry of Interior), responsible for coordinating the development of information technology, information management, and electronic services in the central and local Government;
the State IT Service Centre, responsible for shared IT services in public administration; and
the Ubiquitous Information Society Advisory Board, responsible for the provision of insights on the identification of priorities for the national information society policy.
Infrastructure
The suomi.fi portal constitutes Finland's single access point to online public services offered by both state and local authorities. Launched in April 2002, the portal offers a broad spectrum of information structured around daily life events, complemented by downloadable forms. For the business community, the yritysSuomi.fi portal was set up, featuring comprehensive information for enterprises, linking to business-related eServices within suomi.fi, and providing access to business-related legislation on Finland's official law database Finlex.
In the area of eIdentification, Finland introduced the Finnish Electronic Identity card in 1999. This card enables Finnish citizens to authenticate themselves for online services and conduct electronic transactions. This card can also be used for encrypting emails and enables the use of digital signatures. Moreover, since October 2006, employees in the state administration are able to identify themselves in the public administration information systems by using a civil servant's identity card, containing a qualified certificate.
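The sign/verify principle behind such certificate-based cards can be illustrated with a toy example. The numbers below form a deliberately tiny textbook RSA key pair chosen for readability; they bear no resemblance to the real card's cryptography, which uses full-length keys inside a qualified certificate.

```python
# Toy illustration of the sign/verify principle behind eID smart cards
# (textbook RSA with tiny numbers -- NOT secure, purely didactic).
import hashlib

# toy RSA key pair: p=61, q=53 -> n=3233, public e=17, private d=2753
N, E, D = 3233, 17, 2753

def sign(message: bytes) -> int:
    """The card hashes the message and signs with the private exponent."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    """Anyone can check the signature using only the public key (N, E)."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

sig = sign(b"tax return 2010")
assert verify(b"tax return 2010", sig)
# A modified message would (with overwhelming probability) fail verification,
# which is what makes the signature tamper-evident.
```

The same asymmetric principle also underlies the email encryption mentioned above: the card holds the private key, while the public key is distributed in the certificate.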
With respect to eProcurement, the HILMA Notification service constitutes a platform for announcing national and EU call for tenders. Using this platform is mandatory for tenders above certain thresholds. Moreover, a further eProcurement platform is maintained by Hansel Ltd, a state-owned company acting as the central purchasing unit for the government.
France
Key policy events
The earliest eGovernment measure in France was the nationwide release in 1984 of Minitel terminals, through which citizens and companies could access several public services and information remotely.
eGovernment first became a policy priority in 1998, in the framework of the strategy to prepare France's transition to an information society. It was in 2004 that the development of eGovernment became a standalone policy, with the launch of both a strategic plan and an action plan commonly referred to as the ADELE programme. The latter aimed to simplify public services and make them accessible by electronic means to all users, 24 hours a day, seven days a week, as well as to cut the costs generated by the operation of the public administration. A precondition for this, and a related objective listed in the programme, was to generate trust in new ways of delivering services.
The year 2005 marked a turning point for eGovernment with the adoption of a government ordinance regulating and granting legal value to all aspects of the electronic exchanges (of data, information, and documents) taking place within the public administration, as well as between the public bodies and the citizens or businesses. Considered as the country's eGovernment Act, the ordinance also set the year of the advent of eGovernment to 2008.
Since then, the further implementation of eGovernment has been a shared priority of both Digital France 2012 (the plan for the development of the digital economy by 2012) and the General Review of Public Policies, a reform process which was launched in June 2007 to keep the public spending in check while refining public services in a way to centre them even more on their users' needs.
Subsequently, the digital bill, or "Loi pour une République Numérique", was adopted in 2016 with the goal of regulating the digital economy as a whole through new provisions. It establishes the following principles: net neutrality, data portability, the right to maintain the connection, confidentiality of private correspondence, the right to be forgotten for minors, better information for consumers about online reviews, openness of public data, improved accessibility, and digital death. Moreover, new development plans for the digital economy were launched in 2017, such as Public Action 2022 (with three main objectives relating to users, public officials, and taxpayers) and the Concerted Development of the Territorial Digital Administration (DCANT) 2018–2020 programme, a roadmap for regional digital transformation that aims to build fluid and efficient digital public services.
Major achievements
The eGovernment portal service-public.fr provides citizens and businesses with a one stop shop to online government information and services that are displayed according to life-events. A related achievement was what the 8th EU Benchmark refers to as one of France's biggest success stories since 2007: the second generation portal called mon.service-public.fr. It is a user-customised and highly secured (via eIdentification) single access point to all the public services available online, some of which are entirely transactional. A personal account enables users to keep track and know the status of all their interactions with the public administration.
According to the United Nations' 2008 worldwide e-Government Survey, the website of the French prime minister is the best of its kind in western Europe due to the fact it "has a strong e-participation presence and has features for online consultation, has a separate e-government portal and has instituted a time frame to respond to citizen's queries and e-mails." The Survey goes on: "The site also contains a number of news feeds and RSS to continuously update citizens with information from the media and blogs."
Other noteworthy achievements include:
The taxation portal which enables the filing of personal income and corporate returns and the online payment of taxes;
The country-wide eProcurement platform Marchés publics and, since 1 January 2010, the related legal right for contracting authorities to require bidders to submit their applications and tenders by electronic means only;
The electronic social insurance card Vitale, the ePractice Good Practice Label case winner 2007;
The possibility to notify online a change of address to several public authorities at once;
The open data portal, data.gouv.fr, launched in December 2011, which allows public services to publish their own data;
The Inter-Ministerial Network of the State (RIE), a shared network that carries data exchanged within each Ministry and between ministries;
The SSO solution France Connect, launched in June 2016, which provides users (10 million users by end of 2018) with an identification mechanism recognised by all digital public services available in France, and will allow the country to implement the European Regulation eIDAS (Electronic Identification and Signature);
The portal Démarches simplifiées, launched on 1 March 2018, which aims to simplify all public services by allowing public administrations to create their own online forms.
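The single sign-on pattern behind France Connect can be sketched in miniature: an identity provider issues a signed assertion that any participating public service can verify. The real system builds on OpenID Connect; the HMAC-based token format and the shared key below are simplifying assumptions for illustration only.

```python
# Minimal sketch of a federated-login token: the identity provider signs
# a payload, and any relying service checks the signature before trusting
# the asserted identity. (France Connect itself uses OpenID Connect.)
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # hypothetical key shared by provider and services

def issue_token(user_id):
    """Identity provider: encode the claim and append an HMAC tag."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user_id}).encode())
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + tag

def accept_token(token):
    """Relying service: return the user id, or None if the tag is invalid."""
    payload, tag = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # signature check failed -- reject the login
    return json.loads(base64.urlsafe_b64decode(payload))["sub"]

token = issue_token("citizen-42")
assert accept_token(token) == "citizen-42"
```

The benefit mirrored here is the one the article describes: the user authenticates once with the provider, and every connected public service can accept the resulting assertion without running its own identity checks.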
Actors
The Ministry for the Budget, Public Accounts and Civil Service is responsible for steering the eGovernment policy. It shares this remit with two inter-ministerial directorates created in 2017: the Inter-Ministerial Directorate of Public Transformation (DITP), which is under the authority of the Minister of State Reform and led by the inter-ministerial delegate for public transformation, and the Inter-Ministerial Directorate for Digital Affairs and the State Information and Communication System (DINSIC), which is placed, by delegation of the Prime Minister, under the authority of the Minister in charge of digital affairs. This organisation for the public and digital transformation of the State was formerly known as the General Secretariat for the Modernisation of Public Administration (SGMAP) in the period 2012–2017. These two eGovernment actors support ministries and administrations in conducting the public transformation of the State: they coordinate the Public Action 2022 programme and lead innovative interdepartmental projects.
Germany
Germany is a federal state made up of 16 states, the so-called Länder. Owing to this political structure, eGovernment efforts follow a threefold dimension, focusing on the federal, state, and local levels. Initial efforts began as early as 1998 with the MEDIA@Komm project for the development of local eGovernment solutions in selected regions.
Later, in 2000, the BundOnline2005 initiative was launched with the main target of modernising the administration by making all federal public services capable of electronic delivery by the end of 2005. This strategy, which dominated the following years, was driven by the vision that the federal administration should position itself as a modern, service-oriented enterprise, offering services that follow a user-centric approach focused on citizens and their needs. The initiative was successfully completed on 31 December 2005, with a total of more than 440 Internet services made available online. The detailed results of the initiative may be found in the BundOnline Final Report, published on 24 February 2006.
eGovernment Strategy
Germany's current eGovernment strategy is laid down in the eGovernment 2.0 programme. This programme is part of the more general strategic approach set out in the document Focused on the Future: Innovations for Administration, concerning the overall modernisation of the public administration. The eGovernment 2.0 programme identifies four fields of action.
Several projects have been initiated to accomplish the targets set, including:
Electronic identity: a new electronic identity card was planned for introduction in Germany on 1 November 2010. Beyond traditional identification functions, the new ID card facilitates identification of the holder via the Internet by means of a microchip containing the holder's data in electronic format, including biometric data (digital facial image/fingerprints). Optional digital signature functionality is also foreseen.
De-Mail: The De-Mail system is aimed at facilitating the secure exchange of electronic documents among citizens, businesses, and public authorities via Internet.
Public Service Number (D115): Citizens are able to use the unitary public service number 115 as a single contact point to the public administration and obtain information about public services.
In parallel to modernisation efforts focusing on the federal public administration, continuous efforts are also made to create a fully integrated eGovernment landscape in Germany throughout the federal government, federal-state governments, and municipal administrations. This objective is addressed by the Deutschland-Online initiative, a joint strategy for integrated eGovernment adopted in 2003. This strategy places particular emphasis on following priorities: Integrated eServices for citizens and businesses; Interconnection of Internet portals; Development of common infrastructures; Common standards; and Experience and knowledge transfer.
Other eGovernment-related strategies include the Federal IT Strategy (adopted on 5 December 2007), aimed at improving IT management within the government, and the Broadband Strategy of the Federal Government (adopted on 18 February 2009), aimed at providing businesses and households with high-end broadband services by 2014.
Legislation
The legal basis for eGovernment in Germany is set by a framework of laws regulating key aspects of eGovernment (and of the Information society in general) such as: Freedom of Information Legislation (Freedom of information Act); Data Protection (Federal Data Protection Act); eSignatures related legislation (Digital Signature Act); and legislation regarding the re-use of public sector information (Law on re-use of Public Sector Information).
Greece
Brief history
The initial action towards eGovernment in Greece took place in 1994 with the Kleisthenis programme, which introduced new technologies in the public sector. SYZEFXIS, the National Network of Public Administration, was launched in 2001 and was progressively connected to the Hellenic Network for Research and Technology (GRNET) and to the EU-wide secure network TESTA. In the period 2000–2009 several projects were launched, such as ARIADNI, which addressed the evaluation, simplification, and digitisation of administrative procedures; POLITEIA, which re-established the real needs of the public administration; and Taxisnet, which offered citizens online tax and customs services including the administration of VAT and VIES declarations, income tax declarations, vehicle registration, etc. In 2009, the National Portal of Public Administration, HERMES, ensured the safe exchange of public information. Since the introduction of the national Digital Strategy 2006–2013, which entered its second phase in 2009, Greece has also shown important progress in the field of information and communication technologies.

The Greek e-Government Interoperability Framework (Greek e-GIF) is part of the overall design of the Greek public administration for the provision of e-Government services to public bodies, businesses, and citizens. It is the cornerstone of the Digital Strategy 2006–2013 for the transition to the requirements of modern times, and is directly related to the objectives and direction of the European policy i2010 – European Information Society 2010. The framework aims to effectively support e-Government at the central, regional, and local levels and to contribute to achieving interoperability at the level of information systems, procedures, and data.
Strategy
The white paper published in 1999 and updated in 2002 emphasised the need for quality in public services. For the period 2006–2013 a new strategic plan, the Digital Strategy 2006–2013, was adopted to map the national digital course. The plan did not focus on specific projects for each organisation; its purpose was to improve the productivity of the Greek economy and the quality of citizens' lives. According to the National Strategic Reference Framework for 2007–2013, the organisation of the public administration is to be improved through the operational programme Public Administration Reform.
Legislation
The Greek Constitution guarantees the fundamental principles of the right of access to information (the relevant law is No. 2690/1999), the participation of everyone in the information society, and the obligation of the state to reply to citizens' requests for information in a timely fashion. The State's eGovernment operations are audited by the Hellenic Court of Auditors.
Further legal acts that have been adopted include the following:
Law 2472/1997 on the Protection of Individuals with regard to the Processing of Personal Data: protects citizens' right to privacy;
Telecommunications Law 2867/2000: regulates eCommunications;
Presidential Decrees 59, 60, and 118/2007: simplify the public procurement procedures and establish an eProcurement process.
Actors
The Ministry of the Interior, and more specifically its General Secretariat for Public Administration and eGovernment, is in charge of eGovernment in Greece. In addition, the Special Secretariat of Digital Planning within the Ministry of Economy and Finance has as its main task the implementation of the overall Information Society strategy.
Hungary
Hungary's eGovernment policies for the period 2008–2010 are set out in the E-public administration 2010 Strategy document. This strategy aims to define the future Hungarian eAdministration and to establish the uniform infrastructure necessary for the implementation of its objectives, focusing on four primary domains:
Modern eServices for all interaction between the citizens, businesses and the public administration.
Services that will render interaction with the public administration more effective and transparent.
Dissemination of eGovernment knowledge.
More adaptable eGovernment for disadvantaged businesses and social groups.
Despite the lack of eGovernment-specific legislation, the eGovernment landscape is shaped by a legal framework adopted in the period 2004–2009.
The bodies responsible for laying down eGovernment policies and strategies are the Senior State Secretariat for Info-communication (SSSI) and the State Secretariat for ICT and eGovernment (SSIeG), together with the Committee for IT in the Administration (KIB). They are also in charge of coordination, the implementation of those policies and strategies, and the provision of related support.
The main eGovernment portal is Magyarorszag.hu (Hungary.hu), which serves as a services platform. Thanks to the Client Gate gateway, the portal has become fully transactional. Another important infrastructure component is the Electronic Government Backbone (EKG), a secure nationwide broadband network linking 18 county seats with the capital, Budapest. It provides the central administration and regional bodies with a secure and monitored infrastructure, enhancing data and information exchange, Internet access, and public administration internal network services.
Ireland
The use of information technology to support organisational change has been high on Ireland's government modernisation agenda since the mid-1990s. Similarly, the information society development policy – launched during the same decade – targeted, among other goals, the electronic delivery of government services. Three strands of electronic service delivery development were followed: information services; transactional services; and integrated services. The latter stage was reached with the so-called "Public Service Broker", an information system acting as an intermediary between the public administration and its customers. This system has supported single, secured access to central and local government services for citizens and businesses via various types of channels (online, by phone, or in regular offices).
The year 2008 stands as a turning point in terms of governance due to the adoption of the Transforming Public Services Programme, which rethinks and streamlines the eGovernment policy with the aim of enhancing the efficiency and consistency of the work of the public administration while centring it on citizens' needs. The approach opted for is that of a rolling programme determined by the Department of Finance and assessed bi-annually. The development of shared services and support for smaller public administration bodies are among the core elements of the new deal.
Major Projects
The Irish government seems to be effectively delivering on its policy commitments. "Irish citizens feel positive about the effectiveness of their government at working together to meet the needs of citizens". So reveals the report Leadership in Customer Service: Creating Shared Responsibility for Better Outcomes which is based on a citizen satisfaction survey conducted in 21 countries worldwide. As for the 8th EU Benchmark of 2009, it notes a "strong growth" in Ireland's eGovernment policy performance, in particular in terms of online availability and sophistication of its public services, as well as positive user experience which is estimated to be "above the EU average".
Flagship projects include:
The "Revenue Online Service" (ROS), enabling the online and secured payment of tax-related transactions by businesses and the self-employed.
The "PAYE Anytime" service for employees.
The "Citizen Information Website" – a life-event structured portal informing citizens on everything they need to know about government services (eGovernment prize winner at the World Summit Award 2007)
"BASIS", the eGovernment portal for businesses, a single access point to the administrative formalities related to creating, running and closing down a business.
Motor Tax online – website that allows motorists to pay motor tax online.
The "South Dublin Digital Books" service (Finalist project in the 2009 edition of the European eGovernment Awards)
The popular public procurement portal "eTenders".
The "Certificates.ie" service, enabling the online booking and payment of marriage, birth and death certificates.
The "Acts of the Oireachtas" web portal – a bilingual specificity, featuring national legislation texts in both Irish and English.
Italy
Key policy events
In Italy, eGovernment first became a policy priority in 2000, with the adoption of a two-year action plan. Since then, a combination of legal and policy steps has been taken to further computerise, simplify, and modernise public administration management and services while enhancing their quality and cost efficiency. Increased user-friendliness and more transparent governance are major goals of the current eGovernment plan, the E-Government Plan 2012. In this light, a dedicated website allows those interested to follow the progress of the plan's implementation.
The adoption in 2005 of the eGovernment Code, a legal act entirely dedicated to eGovernment, provided the required legal support for enabling the consistent development of eGovernment. Among other aspects, the code regulates:
the availability of public electronic services,
the electronic exchange of information within the public administration and between the latter and the citizens,
online payments, and
the use of eIdentification.
Major achievements
The central public eProcurement portal MEPA, the eMarketplace of the public administration, is a European best practice; it indeed won the European eGovernment Award 2009 in the category "eGovernment empowering businesses". Angela Russo describes: "It is a virtual market in which any public administration (PA) can buy goods and services, below the European threshold, offered by suppliers qualified according to non restrictive selection criteria. The entire process is digital, using digital signatures to ensure transparency of the process."
The Italian electronic identity card grants access to secured eGovernment services requiring electronic identification, and the possibility to perform related online transactions. Strictly for electronic use, there is also the National Services Card (CNS – Carta Nazionale dei Servizi). This is a personal smart card for accessing G2C services; it lacks the visual security features of the eID card, e.g. holograms, but is otherwise similar in terms of hardware and software. The CNS card can be used both as a proof of identity and to digitally sign electronic documents.
According to the 8th EU Benchmark, as far as public eServices delivery is concerned, Italy scores high, with 70% for the one-stop-shop approach and 75% for user-focused portal design. Two comprehensive, online, single entry points to public services have been made available to citizens and businesses respectively. Both portals are clearly structured around the needs of their users and include transactional services. The portal for businesses goes further in removing the burdens resting on Italian companies and entrepreneurs; it offers a secure and personalised suite of services delivered by various public authorities.
Other noteworthy achievements include the taxation portal, which enables the filing of personal income and corporate returns and the online payment of taxes, and Magellano, a nationwide government knowledge management platform.
Actors
The Ministry of Public Administration and Innovation, and in particular its Department for the Digitisation of Public Administration, holds political responsibility for eGovernment. It benefits from the assistance of the Standing Committee on Technological Innovation, which provides expert advice on how to best devise the country's eGovernment policy. The implementation of national eGovernment initiatives is ensured by the responsible agency, namely the National Agency for Digital Administration (CNIPA), and the relevant central government departments. The Italian regions determine their respective eGovernment action plans with the technical support of CNIPA. The Department for the Digitisation of Public Administration safeguards the consistency of the policies carried out at the various levels of government.
Latvia
A milestone in the eGovernment evolution in Latvia was the approval of the Declaration of the Intended Activities of the Cabinet of Ministers on 1 December 2004. This document defined the goals, strategy, and process for eGovernment in the country; it also defined the roles and responsibilities of the minister responsible for eGovernment. At the same time, the Secretariat of the Special Assignments Minister for Electronic Government Affairs was established to be in charge of the implementation of eGovernment.
The Better governance: administration quality and efficiency document set out the framework for the development of local government information systems in the period 2009–2013. In July 2006, the Latvian Information Society Development Guidelines (2006–2013) were launched under the Special Assignments Minister for Electronic Government Affairs.
The Latvian eGovernment Development Programme 2005–2009 presented the national eGovernment strategy adopted by the government in September 2005. This programme was based on Latvia's eGovernment Conception and on the Public Administration Reform Strategy 2001–2006. The national programme Development and Improvement of eGovernment Infrastructure Base was adopted on 1 September 2004.
Legislation
The state- and local-government owned information systems in Latvia and the information services they provide are operating according to the State Information Systems Law (adopted in May 2002 and amended several times until 2008). This law addresses intergovernmental cooperation, information availability, and information quality.
Main actors
On 1 June 2009, the Ministry of Regional Development and Local Government took over the tasks of the Secretariat of Special Assignments Minister for Electronic Government Affairs and became responsible for information society and eGovernment policy development, implementation, and coordination.
Decentralised development is regulated by the State Regional Development Agency (SRDA) at the national level. Supervised by the Ministry of Regional Development and Local Government, SRDA runs the programmes of state support and the activities of the European Union Structural Funds.
Lithuania
Lithuania, through the Action Plan of the Lithuanian Government Programme for 2008–2012, has made rapid progress towards eGovernment. The action plan includes among its goals the modernisation of the entire public administration to satisfy the needs of today's Lithuanian society, providing efficient services to both citizens and businesses. It also aims at a balance between the services offered in urban and rural areas (especially remote rural areas), the maintenance of a strong legal framework supporting the ICT market, and secure personal electronic identification and authentication. The enabling factors for these goals are the rapid development of the public sector's eServices and the use of ICT infrastructure for the effective operation of service centres.
Achievements
Lithuania has demonstrated significant progress in eGovernment through a highly developed legal framework protecting and supporting the various eGovernment fields, and through an eGovernment infrastructure offering pertinent information and a variety of services to Lithuanian citizens and businesses on a daily basis. The legal framework comprises legislation on eGovernment, freedom of information, data protection and privacy, eCommerce, eCommunications, eSignatures, and eProcurement. The eGovernment infrastructure includes the eGovernment gateway (the Lithuanian eGovernment portal), offering several eServices on a wide range of topics, including:
Electronic order of Certificate of conviction (non-conviction).
Information on a citizen's State Social Insurance.
The provision of medical services and drug prescriptions.
Certificate on personal data, stored at the Register of Citizens.
Certificate on declared place of residence.
In addition, Lithuania has developed the Secure State Data Communication Network (SSDCN) (a nationwide network of secure communication services), electronic identity cards (issued on 1 January 2009), an eSignature back office infrastructure, the Central Public Procurement portal and the Network of Public Internet Access Points (PIAPs).
Actors
The Ministry of the Interior lays down the national eGovernment policies and strategies, while the Information Policy Department of the Ministry of the Interior and the Information Society Development Committee under the Government of the Republic of Lithuania share the responsibility for coordination and for the implementation of relevant eGovernment projects.
Luxembourg
During the 2000s, Luxembourg made considerable progress in the area of eGovernment. It has clearly developed its eGovernment infrastructure and expanded its network of services to better satisfy the needs of the citizens and businesses of Luxembourg.
In February 2001, the "eLuxembourg Action Plan" established eGovernment as one of its primary axes. A few years later, in July 2005, a new "eGovernment Master Plan" was elaborated to boost eGovernment development in the country. During that period, new portals were launched, including:
Portal dedicated to primary education needs (January 2006)
Public Procurement portal (February 2006)
Business portal (June 2006)
"Emergency" portal (July 2006)
Thematic portal of Sports (December 2007)
"eGo" electronic payment system (September 2008)
"De Guichet" portal (November 2008)
"eLuxemburgensia" portal (May 2009)
Anelo.lu portal (October 2009)
This new "eGovernment Master Plan" aims to define and set a framework for the expanded use of new technologies for Luxembourg. That framework comprises the following domains:
Organisation and Management
Contents and Services
Education and Training
Technologies and Infrastructure
Security and Privacy
Legislative Framework
The Ministry of the Civil Service and Administrative Reform determines the eGovernment policy and strategy, and is also responsible for coordination. The merger of the State Computer Centre (CIE) and the eLuxembourg Service (SEL) formed a new eGovernment service, the State Information Technology Centre (CTIE). This service was created to fully cover the needs of public administrations' electronic exchanges in Luxembourg and to keep pace with a constantly evolving information society. CTIE is responsible for the coordination and implementation of eGovernment services, in addition to providing the necessary support to public administration bodies.
The www.luxembourg.lu portal constitutes the country's main eGovernment point of contact, offering important information on Luxembourg. A single central Government portal is anticipated, joining the existing "De Guichet" and "Business" portals with the aim of providing even more pertinent and transparent services.
Malta
Malta has remarkably progressed in the eGovernment sector concentrating its efforts to further develop and optimise existing and new eGovernment infrastructure and services.
Among other initiatives, several portals and services have been launched since 2002:
Customer Service website and Internet Phone Box service (November 2002)
e-Identity system (March 2004)
Online payment system (August 2004)
Data Protection portal and eHealth portal (February 2006)
National eTourism portal (January 2006)
eVAT service and mygov.mt: a state portal (September 2007)
online Customer Care system (May 2009)
Portal for Local councils (June 2009)
Judicial portal (October 2009)
The Malta Information Technology Agency (MITA) is the central administration responsible for the implementation of the eGovernment strategy in Malta. The official eGovernment strategy of Malta was set out in the "White Paper on the Vision and Strategy for the Attainment of eGovernment" (2001). Currently, Malta's eGovernment programme is based upon the evolved "Smart Island Strategy (2008–2010)" and, more precisely, on one of its seven streams, the "Reinventing Government" stream. The current eGovernment strategy in Malta focuses on:
creating and operating an eGovernment platform;
using open technologies;
establishing one point of contact for public eServices;
developing and offering an eProcurement system;
informing the related communities, at European and local levels, on the electronic submission of tenders;
conceiving a policy framework; and
implementing a mechanism for "over-the-counter" public services for physical and legal trusted entities.
Regarding national legislation, the National ICT Strategy 2008–2010 provides for the establishment of eGovernment legislation on electronic filing, computer accessibility for disabled persons, the legal framework governing the use of eIdentification (Smart ID cards), etc.
Main actors
The main eGovernment actors of Malta are the Ministry for Infrastructure, Transport and Communication (MITC) responsible for eGovernment strategy and policies, and the Malta Information Technology Agency (MITA) responsible for implementation and support.
It is worth highlighting that the "Customer Care system" and the "Vehicle Registration and Licensing system" are two Maltese eGovernment services awarded the "Good Practice label" for the provision of excellent and credible services. Two more of Malta's services have been nominated for the "European eGovernment Awards": the IR Services Online and the Malta Environment and Planning Authority (MEPA) e-Applications.
Netherlands
The Netherlands places special emphasis on the provision of an effective ICT infrastructure and related services, easily accessible by all its citizens, in order to reduce red tape for citizens and businesses and to facilitate their communication with the Dutch public administration.
In May 2008, the Government published the National ICT Agenda 2008–2011 that set its objectives in five primary areas:
eSkills
eGovernment
Interoperability and standards
ICT and Public domains
Services innovation and ICT
The National Implementation Programme (NUP) became the Netherlands' eGovernment strategy until 2011, focusing on the infrastructure and relevant projects that use such infrastructure.
The main infrastructure components provide citizens, businesses, and public administrations with access to a considerable amount of information and services. In addition, a series of other eServices covering various fields is provided:
eIdentification and eAuthentication.
Common Authorisation and Representation Facility (unique numbers for individuals and businesses: the Citizen Service Number (CSN) and the Chamber of Commerce Number (CCN)).
Internationally, the Netherlands earned fifth position in the UN's e-Government Survey (2008) and was rated seventh in the eReadiness ranking of the Economist Intelligence Unit (2008). Considerable progress has also been observed in eGovernment: according to the Capgemini 2007 report, online availability rose by 10 percentage points from 2006 to 2007, reaching 63%, while a strong 54% of the Dutch use the Internet services provided in their interaction with the public administration.
eGovernment in the Netherlands is regulated by a set of laws covering a broad range of fields, namely, Freedom of Information Legislation (Government Information (Public Access) Act (1991)), Data Protection / Privacy Legislation (Personal Data Protection Act (2000)), eCommerce Legislation (eCommerce Act (2004)), eCommunications Legislation (Telecommunications Act (2004)), eSignatures Legislation (Electronic Signature Act (2003)).
The body responsible for laying down the eGovernment policies and strategies is the Ministry of the Interior and Kingdom Relations, whereas the coordination of those policies/strategies is shared between the competent Ministry and the Services and eGovernment Management Committee (SeGMC). The implementation of the eGovernment policies is undertaken by the ICTU foundation and the agency Logius.
Poland
Poland has taken significant steps towards the development of an eGovernment framework that aims to define the rights and obligations of both citizens and businesses, every time they interact with the public sector through the use of electronic means.
The following list comprises key documents regarding the eGovernment strategy of Poland:
The Computerisation Development Strategy of Poland until 2013 and Perspectives for the Information Society Transformation by 2020 sets out the framework for the development of Poland's Information Society.
The National Computerisation Plan for the period 2007–2010 describes the tasks that need to be implemented by public bodies regarding the development of Information Society and the provision of electronic services.
The Strategy for the Development of the Information Society in Poland until 2013.
Poland bases its eGovernment legislation on the Act on the Computerisation of the Operations of the Entities Performing Public Tasks, which sets out the rights for citizens and businesses to contact the public authorities electronically.
The Ministry of Interior and Administration is responsible for carrying out the national eGovernment policy. The Ministry of Infrastructure is in charge of the design and implementation of Poland's telecommunication policy and broadband strategy. The Committee for Computerisation and Communications of the Council of Ministers is tasked with the coordination and monitoring of the implementation of the National Computerisation Plan for the period 2007–2010.
Portugal
The Portuguese Government has achieved significant progress in the area of eGovernment as part of the Technological Plan, in an attempt to develop the Information Society and render Portugal more competitive among its European counterparts and internationally.
The action plan Connecting Portugal ("LigarPortugal") aimed to implement the Information Technology section of the Technological Plan. Its main objectives focused on:
promoting a conscious and active citizenship;
guaranteeing a competitive telecommunications environment in the Portuguese market;
ensuring transparency in any interaction with the Public Administration;
fostering the extensive use of ICT in the business sector; and
promoting technological and scientific growth through research.
The Simplex programme comprises a well-developed Administrative and Legislative Simplification Programme which is dedicated to diminishing bureaucracy, enhancing transparency in interactions with the State and efficiency in Public Administration's operations, thus gaining the trust of the Portuguese people.
Legislation
Although no comprehensive eGovernment legislation exists, the Resolution of Cabinet no. 137/2005, of 17 August, foresees the adoption of a legal system for Public Administration bodies and services.
The Minister for the Presidency is in charge of eGovernment in Portugal and, together with the Secretary of State for Administrative Modernisation and the Agency for the Public Services Modernisation (AMA), defines the eGovernment policies and strategies. AMA is also responsible for coordination, a task it shares with the National Coordinator for the Lisbon Strategy and the Technological Plan. AMA and the Government Network Management Centre (CEGER) have undertaken the task of implementing these policies and strategies.
Infrastructure
Portugal has an advanced e-Government infrastructure built around two major portals, the Citizen's portal and the Enterprise's portal, both considered main access points for interaction with the public administration. Three extensive e-Government networks constitute another important part of the Portuguese e-Government infrastructure: the Electronic Government Network, managed by CEGER; the Common Knowledge Network, a portal connecting central and local public bodies, businesses and citizens; and the Solidarity Network, which comprises 240 broadband access points and is dedicated to the elderly and the disabled.
e-Identification is another sector in which Portugal has significantly advanced. The Citizen's Card was launched, an electronic identity card containing biometric features and electronic signatures. In addition, Portugal has issued the Portuguese Electronic Passport (PEP), which includes the personal details of a holder (as in the traditional passport) and a set of mechanisms encompassing features varying from facial recognition to the incorporation of a chip.
The national e-Procurement portal, which is currently merely an information tool, is destined to become the central procurement mechanism for the entire Portuguese public administration.
Other considerable infrastructure initiatives that have taken place are:
CITIUS, which enables the electronic submission of documents for court use.
Simplified Business Information (IES), made for businesses to electronically submit declarations, namely, accounts, tax returns and statistics.
PORBASE, the National Bibliographic Database comprising more than 1,300,000 bibliographic records.
e-Accessibility, a good practice unit focusing on accessibility to public administration by the elderly and the disabled.
Portuguese Electronic Vote Project, whose aim is to enable citizens to exercise their voting right even if they are away from their designated voting area.
Digital Cities & Digital Regions, a set of over 25 projects offering electronic solutions for local public administrations, e-Services for citizens and improved conditions for the development and blossoming of SMEs.
Public Internet Spaces, for free computer access for all citizens.
National GRID initiative, fostering the development of Grid computing and combining computer resources for the resolution of complex scientific, technical or business problems that require large amounts of data to be processed.
Romania
In 2008, the body responsible for eGovernment, the Agency for Information Society Services (ASSI), published its strategy, mainly focusing on improving the efficiency of public administration services and, more precisely, on providing access to interested stakeholders (citizens and businesses). It thus became the main provider of ICT services, ensuring data reusability among public administration bodies. The chief objectives of this strategy were to enhance efficiency, transparency and accessibility, and to diminish red tape and illegal activities.
The Romanian Government has also laid stress on setting a legal framework that would foster the information society and, by extension, eGovernment. This framework included the Government Decision No. 1085/2003 on the application of certain provisions of law No. 161/2003, on measures ensuring transparency in all interaction with the public administration, the prevention and prosecution of illegal activities and the implementation of the National Electronic System (NES).
The Romanian eGovernment infrastructure is based upon the main eGovernment portal, which provides a single point of contact to public services at national and local levels, incorporating a transactional platform. Furthermore, the NES serves as a single point of access to eServices and has been developed in parallel with the portal to operate as a data interchange centre and to ensure interoperability with back-end systems across the public administration. All citizens and businesses have access to the portal and to public agencies' services through the NES. Regarding eIdentification and eAuthentication, the National Person Identity System is under development, aiming to create a computerised record of civil status for all citizens. This project also includes the following eIdentity sections:
the Civil Information System,
the Identity Card System,
the Passport system,
the Driving Licence and the Car Registration system, and
the Personal Record system.
A noteworthy infrastructure component is the eProcurement system e-licitatie.ro whose main purpose is to improve control mechanisms in procurement procedures while fostering transparency, facilitating access to public contracts, and diminishing red tape.
The Ministry of Communications and Information Society (MCSI) is the body responsible for defining the eGovernment policies and strategies and, together with the Agency for Information Society Services (ASSI) and other subordinate bodies, coordinates the implementation of the eGovernment strategy, which is carried out through private sector subcontractors.
Dan Nica, Minister for the Information Society in Romania, stated in an interview that Romania had started the ambitious project of aligning itself with the latest trends in eGovernment and introducing the most advanced electronic systems for providing public services to its citizens. He also mentioned that this process would involve some reshaping of administrative procedures based on individual life-events, and spoke about upcoming projects such as the Online Issuance of Civil Status Documents and the e-Agricultural Registry.
Slovakia
Brief history
The initial framework for the development of information systems of public authorities in Slovakia was set in 1995 with Act No. 261/1995 on State Information Systems (SIS). According to the eEurope+ Final Progress Report, in 2001 over 80% of online government services were still in the planning stage. Based on the same source, by 2003 this proportion had been reduced to 34%, while the services posting information online had increased from 2% to 24%. The National Public Administration portal was launched in 2003. Aiming to have 20 public services online by 2013, the Operational Programme of Introducing IT into Society (OPIS) was approved in 2007.
Strategy
The main Slovak eGovernment plan is the Strategy for Building an Information Society in the Slovak Republic and its Action Plan, adopted in 2004, which set the national eGovernment strategic objectives. Several strategic eGovernment documents were adopted between 2001 and 2006. In 2009, with the document Information Society Strategy 2009–2013, Slovakia presented an updated strategy for the national Information Society; the new trends in ICT were included in the new document, which thus replaced the original Information Society Strategy and its Action Plan.
Legislation
Act No. 275/2006 on Public Administration Information Systems (20 April 2006) established a framework for the information systems of public authorities; the Act was amended in 2009. Relevant laws have also been implemented regarding freedom of information, data protection, eCommerce, eCommunications, eSignatures, eProcurement and the re-use of public sector information (PSI).
Actors
The Ministry of Finance is the main governmental body responsible for the Information Society and for building the National eGovernment Concept of Public Administration. The Ministry also acts as the responsible authority for the Operational Programme Informatisation of Society.
At the regional and local levels, public administration is carried out by self-governing authorities; the Ministry of Interior is responsible for coordination.
Other governmental bodies are:
the Government Plenipotentiary for the Information Society
the Social Insurance Agency
the Supreme Audit Office
the Office for Personal Data Protection
the National Security Authority
Slovenia
Brief history
In 1993, the Government Centre for Informatics (GCI) was established (Official Gazette of the Republic of Slovenia, No. 4/93). Between 2001 and 2004, the Electronic Commerce and Electronic Signature Act was passed, along with the Strategy of eCommerce in Public Administration.
Since 2001, the governmental portal e-Uprava, together with other portals, has offered information and electronic services. The results are visible and encourage further work in this area. Following the adoption of the eEurope Action Plan in 2002, Slovenia rose by 2007 to second position in the European Commission's benchmark in terms of the most developed internet-based administrative services.
The main strategic documents that comprise Slovenia's strategy include the National Development Strategy adopted in 2005, the eGovernment Strategy of the Republic of Slovenia for the period 2006 to 2010 adopted in 2006, the Action Plan for eGovernment for the period 2006 to 2010 adopted in 2007, the Strategy on IT and electronic services development and connection of official records (SREP), and the Strategy for the development of the Information Society in the Republic of Slovenia until 2010 (si2010), adopted in 2007.
The main infrastructure components of Slovenian eGovernment are:
The eGovernment portal, e-Uprava, a tool for all visitors interested in gaining knowledge about Slovenia and information regarding the public administration and the private sector. It offers both information and electronic services.
The eVEM portal, covering Government-to-Business (G2B) and Government-to-Government (G2G) services, set up in 2005 to enable independent entrepreneurs to provide the requisite tax data online. The portal received international recognition at the United Nations Public Service Awards 2007, ranking second among European applications.
The e-SJU portal ("electronic services of Public Administration") that aims to make most administrative forms available in electronic form.
Legislation
Currently, there is no dedicated eGovernment legislation in Slovenia. The General Administrative Procedure Act, adopted in 1999, forms the basis for all administrative proceedings.
Actors
The responsibility for the Slovenian eGovernment strategy lies with the Minister for Public Administration, while the Directorate for e-Government and Administrative Processes at the Ministry of Public Administration is the body in charge of conducting the related tasks. The Information Commissioner, a body established through the merger of the Commissioner for Access to Public Information and the Inspectorate for Personal Data Protection, has been operational since 2006, performing duties regarding access to public information.
Recent developments
The main eGovernment portal in Slovenia, eUprava, was renewed in 2015, receiving a complete redesign of both its system architecture and its user experience, following the principles of modern website design: simplicity, responsiveness and user-centricity. Citizens can access around 250 government services through the portal, as well as their personal data held in various public records. In 2017, another addition to the Slovenian eGovernment infrastructure was made with the launch of eZdravje (eHealth), described as a “One Stop Shop for eHealth”. Users can access their data in different eHealth databases, review their prescribed and dispensed medication, check information on waiting times and obtain electronically issued referrals to specialist doctors.
Spain
Brief history
Initial steps towards a Spanish eGovernment policy were taken in 1999 and 2001 under the "Initiative XXI for the development of the Information Society". The formal start of a genuine policy in the field was marked by the "Shock Plan for the development of eGovernment" of May 2003. More than two years later, a plan named "Avanza" was adopted with the purpose of developing the country's information society to a high level, in line with the European Union's relevant policy orientations. The main targets of the first (2006–2008) and second (2009–2012) phases of the plan were the modernisation of the public administrations and the improvement of citizens' well-being through the use of information and communication technology. In addition, the improvement of both the quality of and access to electronic public services has been a constant policy vector.
The adoption of the Law on Citizens' Electronic Access to Public Services (2007) solidly anchored eGovernment in Spain by turning it into a legal right. This Law primarily lays down a right for citizens to deal with the public administrations by electronic means, at any time and place, as well as a deriving obligation for public bodies to make this possible by 31 December 2009.
Fundamental Principles and Rights
The Law on Citizens' Electronic Access to Public Services provides:
Technological neutrality: the public administrations and the citizens alike are free to decide which technological option they wish to take.
The "availability, accessibility, integrity, authenticity, confidentiality and conservation" of the data that is exchanged between the public administration and the citizens and businesses, as well as among public administrations.
The provision by the citizens and businesses of the same data only once; public administrations must seek the needed information through their interconnections and not request this information again.
The public administrations may access personal data within the limitations of the Law on the Protection of Personal Data of 1999.
The right of citizens to follow up on their administrative file and to obtain an electronic excerpt of it.
Any electronic signature used by a citizen/a legal entity is receivable by the public administration if it complies with the provisions of the Law on Electronic Signature of 2003.
Publication of an electronic official journal.
An eGovernment ombudsman will oversee and guarantee the respect of the eGovernment rights.
Major eGovernment achievements
Spain is one of the few European Union member states to have published a legal act entirely dedicated to eGovernment. It is also one of the few countries worldwide to offer an electronic identity card to its citizens. The DNIe, as it is called in Spain, enables secure, easy and quick access to a wealth of public web services that require authentication and high levels of security. The Spanish Government is actively promoting the use of the over 14 million DNIe cards in circulation through massive awareness-raising campaigns and the free allocation of hundreds of thousands of card readers.
Another salient success is the single web access point to online public services for citizens and companies alike: the "060.es" portal. Clearly structured around its users' needs, it links to over 1,200 public services provided by central and regional administrations. The portal is easy to use, highly interactive and user-customisable. The "060.es" portal forms part of a larger "060 Network", a network of different channels for the delivery of Government services, also comprising a phone line and offices spread over the entire territory.
Other major achievements include:
ePractice Best Practice Label winner @firma, a multi-PKI Validation Platform for electronic identification and electronic signature services.
A centralised public electronic procurement portal.
The Avanza Local Solutions platform to assist local government in delivering their public services online.
Actors
It is the Ministry of the Presidency – in particular its Directorate General for the Promotion of eGovernment Development – who devises the national eGovernment policy and oversees its implementation by the respective ministries. Because the Ministry of Industry, Tourism and Trade steers the aforementioned "Avanza" Plan, both ministries collaborate closely on eGovernment matters. The Higher Council for eGovernment ensures preparatory work and the Advisory Council of eGovernment provides the responsible ministry with expert advice. At sub-national level, the autonomous communities and the municipalities design and manage their own eGovernment initiatives.
Sweden
Key political events
The early days of eGovernment in Sweden date back to 1997, with the introduction of a project named "Government e-Link", which aimed to enable secure electronic information exchange within the public administration, as well as between public bodies and citizens and entrepreneurs. The year 2000 can however be considered the kick-off year of a full-fledged eGovernment policy, for it was in that year that the so-called "24/7 Agency" concept was introduced as a guiding principle for the Networked Public Administration. From then on, the administration, as well as the services it provided, had to be made reachable at any time and place through the combined use of three media: the Internet, phone lines and regular offices. The system rested on the relative autonomy that the many Government Agencies in Sweden enjoyed at the time in relation to the Government Departments. Despite tangible achievements such as the creation of an access gate to all Government eServices for citizens – the "Sverige.se" portal, now closed down – this governance equation reached its limits; a lack of coordination was observed at all levels (e.g. organisational, financial and legal), leading among other drawbacks to the partitioned and duplicated development of public eServices.
In response, the eGovernment policy underwent a wide review, which concluded with the publication in January 2008 of the "Action Plan for eGovernment", whose central goals were to rationalise policy governance, make the Swedish Administration the "world's simplest Administration", and take public services delivery beyond mere provider–customer interaction by rendering the recipient of a public service an actor in its delivery. This streamlining effort was continued with the establishment of an institution which became the central protagonist of the system: the eGovernment Delegation (E-Delegationen in Swedish).
Since then, Sweden has been on the track of what it calls the "Third-generation eGovernment"; a concept brought to life by the "Strategy for the government agencies work on eGovernment" document prepared by the eGovernment Delegation.
Core aspects of the Third-generation eGovernment
The core aspects of the Third-generation eGovernment include:
A simplified infrastructure for the electronic authentication of anyone willing to make use of eGovernment services requiring such authentication.
Different Government Agencies to jointly develop shared public eServices.
The re-use of infrastructure solutions.
Common technical support for the development and implementation of the aforementioned shared services.
Transparent funding mechanisms and updated legislation and regulation.
A positive approach to the use of open standards and open source software in the public administration.
The possibility for citizens and businesses to have a say, through electronic media, in decision making in the field of eGovernment.
Major achievements
According to the United Nations' 2008 E-Government Survey, Sweden is internationally acknowledged as one of the most successful eGovernment countries and the world leader in terms of e-Government readiness. The 8th EU Benchmark likewise places the country among the top five European Union member states.
Instead of keeping a single electronic citizen entry point to the Public Administration, the Government opted for web portals entirely dedicated to a given theme (e.g. taxation, health, employment, social insurance). In contrast, businesses benefit from a one-stop shop for entrepreneurs named "verksamt.se". This portal gathers company "life-event" procedures, thereby re-organising the public services provided by three Government Agencies in a user-friendly way.
Other eGovernment achievements that are worth being highlighted include:
Electronic Invoices – All Government Agencies have been handling invoices electronically since July 2008.
A popular electronic authentication infrastructure – referred to as E-Legitimation – enables the citizens' and businesses' access to secured public eServices.
A set of well-established public electronic procurement portals.
Actors
The Ministry for Local Government and Financial Markets holds the leadership over the Swedish eGovernment policy. Among other tasks, the eGovernment Delegation co-ordinates the work of the government agencies and departments by defining working lines, monitoring their application, and reporting to the Ministry for Local Government and Financial Markets. Furthermore, the eGovernment Delegation acts as an intermediary between the central government and the local governments – which steer their own eGovernment initiatives – to ensure good collaboration for the benefit of the country's entire public administration.
United Kingdom
The Transformational Government – Enabled by Technology strategy, published in November 2005, sets out the vision leading eGovernment developments in the United Kingdom. This document acknowledges technology as an important instrument for addressing three major challenges of the modern economy, namely "economic productivity, social justice and the reform of public services", through transformation of the way the government works.
In particular, this strategy is aimed at improving the transactional services and infrastructures of government to enable the transformation of public services to the benefit of both citizens and businesses. The vision is to use the latest technologies to enable cost-efficient services (to the benefit of taxpayers) and to offer citizens more personalised services, as well as a choice of new communication channels for their interactions with government. Last but not least, civil servants and front-line staff will be actively supported by new technologies assisting them in better accomplishing their tasks.
To realise the envisaged objectives UK's strategy focuses on the following fields of action:
Citizen and Business Centred Services;
Shared Services;
Professionalism;
Leadership and Governance.
Accordingly, public services should be designed around citizens and businesses following a shared services approach to take advantage of synergies; reduced waste; shared investments; and increased efficiencies that shall allow putting in place public services which will better meet the needs of citizens. Technological changes should be at the same time accompanied by the development of IT professionalism and related skills and should be complemented by solid leadership qualities and coherent governance structures.
UK's eGovernment strategy is complemented by the Putting the Frontline First: Smarter Government action plan of December 2009, containing concrete measures to improve public services for the period up to 2020. In parallel, the Digital Britain Final report forms the basis for an active policy to support the government in delivering high-quality public services through digital procurement and digital delivery, and to assist the private sector in delivering modern communication infrastructures. The latter report also envisages equipping citizens with the skills needed to participate in and benefit from the information society. Additional efforts are also made in the field of open standards, in line with the Open Source, Open Standards and Re–Use: Government Action Plan of March 2009, based on the fact that open source products are able to compete with (and often beat) comparable commercial products, thus in many cases offering the best value for money to the taxpayer in public services delivery.
According to the Transformational Government Annual Report 2008, significant eGovernment progress has been already achieved or is well underway in the following areas:
Tell Us Once service: This programme of strategic importance, currently in a pilot phase, aims to enable citizens to inform public authorities of a birth or death just once – the service is responsible for the appropriate dispatch of this information to all relevant departments that may need it. Trials have already been put in place in the north and south east of the country, covering a population of 2 million people.
Ongoing efforts for the rationalisation of the total number of public services websites: The integration of many former public websites into the two major public services portals of Directgov (citizens) and Businesslink.gov.uk (businesses) is underway resulting in an increased number of visitors for Directgov and increased user satisfaction for the Businesslink.gov.uk services.
NHS Choices: This website offers online access to online health information and services. It was launched in 2007 and is visited by six million users per month.
HealthSpace: This application can be accessed through the NHS Choices website and offers both patients and doctors the possibility to store and update personal medical information online using a secure account.
Shared services: By the end of March 2008, the shared services of the Department for Work and Pensions (DWP) had delivered cumulative savings of £50 million (approx. €56 million at the beginning of 2010) or around 15% year on year.
eGovernment in the UK is regulated by a framework of laws covering a broad spectrum of relevant fields such as: Freedom of Information legislation (Freedom of Information Act 2000); data protection legislation (Data Protection Act 1998); legislation related to eCommerce (Electronic Communications Act 2000; Electronic Commerce (EC Directive) Regulations 2002); legislation concerning electronic signatures (Electronic Signatures Regulations 2002); and the re-use of public sector information regulations 2005.
Infrastructure
The public services portal Directgov is the major single access point for eGovernment services to citizens. Beyond the actual services offered, the portal also contains comprehensive information on a broad spectrum of fields, thus making navigation to further websites unnecessary. The equivalent of this portal for the business community is Businesslink.gov.uk, providing access to business services. Use of the services requires registration with the Government Gateway, a major authentication infrastructure component allowing users to perform secure online transactions. Moreover, a third major website – NHS Choices – offers a broad spectrum of health-related services and information. This website also serves as a front end to HealthSpace, an infrastructure component offering secure accounts where patients and their doctors can store and update personal medical information.
In the field of networking, the Government Secure Intranet (GSI) puts in place a secure link between central government departments. It is an IP based Virtual Private Network based on broadband technology introduced in April 1998 and further upgraded in February 2004. Among other things it offers a variety of advanced services including file transfer and search facilities, directory services, email exchange facilities (both between network members and over the Internet) as well as voice and video services. An additional network is currently also under development: the Public Sector Network (PSN) will be the network to interconnect public authorities (including departments and agencies in England; devolved administrations and local governments) and facilitate in particular sharing of information and services among each other.
Other European countries
Iceland
Iceland is one of the pioneer countries in Europe in the use of digital solutions for the provision of governmental services to its citizens, with 63.3% of individuals and 89.0% of enterprises using the Internet for interacting with public authorities.
The government's efforts to enhance the use of eServices include the Prime Minister's initiative of June 2009 to set up a toolkit facilitating online public services for Icelanders.
Iceland's eGovernment Policy is shaped through a series of strategic documents:
In October 1996, the Government of Iceland published The Icelandic Government's Vision of the Information Society, describing the government's role in guiding information technology.
In December 2007, an assessment report entitled Threats and Merits of Government Websites was published, presenting a detailed survey of the performance of online public services and providing authorities with information on their status.
Published in May 2008, Iceland's eGovernment Policy on Information Society for the period 2008–2012 is based on the "Iceland the e-Nation" motto and is built on three main pillars: service, efficiency, and progress. Its primary objective is to offer Icelanders online "self-service of high quality at a single location".
The Public Administration Act (No. 37/1993) as amended in 2003 sets the main eGovernment framework in Iceland. This Act has proven to be an important tool for state and municipal administration on individuals' rights and obligations.
Key Actors responsible for the implementation of eGovernment include:
Prime Minister's Office: in charge of information society and eGovernment policy.
Information Society Taskforce: to co-ordinate the policy strategy.
Icelandic Data Protection Authority (DPA): in charge of supervising the implementation of the Act on the Protection of Privacy as regards the Processing of Personal Data.
Association of Local Authorities: to provide information on particular aspects of local authorities and municipalities.
Liechtenstein
National Administration Portal of Liechtenstein (LLV eGovernment Portal)
The national Administration Portal of Liechtenstein is the central instrument in the eGovernment process of the country. It started its operation in 2002 and provides eServices for citizens and enterprises. The portal comprises three main sections:
Life topics, where information is presented structured around life events, such as marriage, passport, stay, etc.
Public Authorities that contains detailed information on role and responsibilities of individual public authorities.
On-line counter that contains downloadable forms to be completed and manually submitted to the relevant public authorities. Some of the forms can also be submitted electronically.
The LLV portal also offers a broad range of online applications. The most popular applications in November 2007 were:
Business names index for enterprises
Geospatial Data Infrastructure (GDI)
Tax declaration
Online calculator for price increase estimation
Report and application service
Legislation
In Liechtenstein, eGovernment is supported by a variety of laws:
The Information Act (Informationsgesetz) regulates the access to public documents.
The Data Protection Act protects personal data.
The Law on E-Commerce (E-Commerce-Gesetz; ECG, register no. 215.211.7) implements the European Directive 2000/31/EC on certain legal aspects of Information Society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce).
The Law of Telecommunications and the Law on Electronic Communication (Kommunikationsgesetz; KomG, registry number 784.10) create the framework in the area of eCommunications legislation. The Office of Communication (Amt für Kommunikation) was instituted on 1 January 1999 constituting the regulatory authority for telecommunications services.
The Law on Electronic Signatures (Signaturgesetz; SigG, registry number 784.11) implements the European Directive 1999/93/EC on a Community framework for Electronic Signatures.
Liechtenstein has not implemented Directive 2003/98/EC on the re-use of public sector information. The country is committed to the implementation of the European public procurement directives 2004/17/EC and 2004/18/EC.
Actors
Policy and strategy are drawn up by the Prime Minister and the Ministry of General Government Affairs. The Office of Human and Administrative Resources, called "Querschnittsamt", is responsible for the coordination, implementation, and support of all eGovernment activities, including the National Administration Portal of Liechtenstein (LLV eGovernment Portal). The National Audit Office provides independent auditing services and the Data Protection Unit is responsible for the implementation of the Data Protection Act. Due to the small size of the country, all administration and realisation of eGovernment is provided centrally.
North Macedonia
eGovernment in Macedonia started in 1999 with the establishment of the Metamorphosis Foundation. The Foundation worked towards the development of democracy by promoting the knowledge-based economy and the information society. In 2001 it implemented a project, financed by the Foundation Open Society Institute Macedonia and UNDP, that established websites for 15 municipalities using a custom-made CMS.
In 2005 the National Strategy and Action Plan for Information Society Development was created for the implementation of eGovernment at a national level. In 2006 the first electronic passports and ID cards were issued to citizens of Macedonia. In the same year the eGov project, which aimed to improve governmental services, was also launched. The latter, together with the Public Procurement Bureau, provided the necessary support for the development of the national eProcurement system in 2008.
Strategy
The main objectives related to eGovernment strategy were laid down in the Government Programme (2006–2010) as this was developed in the National Information Society Policy and the National Strategy and Action Plan for Information Society Development document.
The basic elements analysed in those two documents are the following:
Infrastructure
eBusiness
eGovernment
eEducation
eHealth
eCitizens
Legislation
Sustainability of the strategy
The eGov project was launched in 2005 and has been operational since 2007 in 11 municipalities. Its basic aim is to implement modern eGovernment solutions in Macedonia. Through the project, documents have been made accessible to citizens, who may also request information regarding their local council members, participate in forums, etc.
Legislation
Although there is no dedicated national eGovernment legislation, the main legal objectives cover protection against cybercrime, the protection of data privacy and intellectual property rights, electronic business, and the electronic communications services market. Further legal acts that have been adopted include the following:
Law on Personal Data Protection (adopted on 25 January 2005)
Law on free access to information of Public character (entered into force on 25 January 2006)
Law on Electronic Commerce (entered into force on 26 October 2007)
Electronic Communications Law (entered into force on 15 February 2005)
eSignatures Legislation
Law on Public Procurement (entered into force on 1 January 2008)
Law of Free Access to Information on Public Character
The Ministry of Finance, among others, also promotes the development of the legislative framework supporting digital signatures and of other regulation related to eCommerce.
Actors
The responsibility for Macedonia's eGovernment lies with the Ministry of Information Society. More specifically, the Commission for Information Technology draws up the national strategy and policy for IT, while the Cabinet of the Minister is in charge of the measures stemming from the National Strategy and Action Plan for Information Society Development.
Norway
Norway identified eGovernment as a policy subject as early as 1982. At that time, the first national IT policy paper, entitled "Decentralisation and Efficiency of Electronic Administrative Processes in the Public Administration", was published. Since then, many public services have moved online and many developments have taken place to strengthen the country's eGovernment policy.
Norway is among the top-ranking countries worldwide in using electronic means to provide public services to citizens and businesses. In addition, MyPage, a self-service citizen portal offering more than 200 eServices to the public, received the "Participation and transparency" European eGovernment Award 2007 for offering innovative public services.
The eGovernment policy of Norway was first outlined in the "eNorway 2009 – The Digital Leap" plan document published in June 2005. This document focuses on:
The individual in "Digital Norway";
Innovation and growth in business and industry; and
A co-ordinated and user-adapted public sector.
The "Strategy and actions for the use of electronic business processes and electronic procurement in the public sector" strategy document (October 2005) followed; and finally, the white paper "An Information Society for All" was created in 2006, focusing on the need for reform and efficiency improvements in the public administration, based on effective and standardised technical solutions.
The main actor in eGovernment in Norway is the Ministry of Government Administration, Reform, and Church Affairs. Its Department of ICT Policy and Public Sector Reform is responsible for the administration and modernisation of the public sector as well as national ICT policy. It also supervises the work of the Agency for Public Management and eGovernment (DIFI).
DIFI "aims to strengthen the government's work in renewing the Norwegian public sector and improve the organisation and efficiency of government administration". Additionally, the Norwegian Centre for Information Security is responsible for coordinating the country's ICT security activities.
Switzerland
State of play
The information society in Switzerland is highly developed, placing the country high in international benchmarks such as the UN eGovernment Readiness Index 2008 (12th place out of 189 countries) and the WEF Global Competitiveness Index 2009–2010 (second place out of 133 countries). In contrast, the full online availability of public services in the country amounts to 32% according to the 8th EU Benchmark, bringing Switzerland to the 31st position among the EU27+ participating countries. Regarding the maturity of the services offered, the country achieves an online sophistication index of 67%, placing itself in the 28th position in the same benchmark. These scores show that there is still considerable potential to be utilised. This can be explained by the effective operation of the traditional paper-based administration in Switzerland, which resulted in less direct pressure to take action compared with other countries.
Strategy
To unleash the potential offered by modern ICT, Switzerland has put in place a strategic framework to drive eGovernment efforts at federal, cantonal, and communal level. The country's main strategic document is eGovernment Strategy Switzerland, adopted by the Federal Council on 24 January 2007. This strategy aims to reduce administrative burdens through process optimisation, standardisation, and the development of common solutions. These goals are being realised by means of prioritised projects, following a decentralised but co-ordinated implementation approach throughout all levels of government. In parallel to these efforts, the ICT Strategy 2007–2011 was adopted on 27 November 2006 to guide the implementation of eGovernment efforts at the federal level. This document defines a framework setting out relevant strategic objectives and the responsible authorities. In addition, a set of partial strategies has also been put in place to complement the general ICT strategy, placing emphasis on more specific areas. These strategies are presented in the Federal Service-Oriented Architecture (SOA) 2008–2012 and the Open Source Software: Strategy of the Swiss federal administration documents.
Actors
The overall strategic responsibility for ICT in the Swiss federal administration lies with the interministerial Federal IT Council (FITC) operating under the Ministry of Finance and chaired by the President of the Swiss Confederation. FITC is supported by the Federal Strategy Unit for IT (FSUIT) which acts as an administrative unit to the council. Moreover, the eGovernment Strategy of Switzerland is supervised by a steering committee, also under the Ministry of Finance, comprising three high-ranking representatives each from the confederation, the cantons, and the communes. The committee is supported by the eGovernment Switzerland Programme Office (within FSUIT) and an advisory board, composed of a maximum of nine experts from administration, the private sector, and academia. The Framework Agreement on eGovernment Cooperation, which covers the period 2007 to 2011, presents the above collaboration scheme shared by all levels of government (confederation, cantons, municipalities).
Infrastructure
The website ch.ch is Switzerland's main eGovernment portal offering access to all official services offered by federal government, cantons and local authorities. Content is available in German, French, Italian, Romansh, and English. In the 8th EU Benchmark the portal is placed in the first third of the EU27+ countries with respect to accessibility and has been assessed with a high score of 98% for its one stop shop approach and with a score of 83% regarding user focused portal design.
A further important eGovernment infrastructure component is the simap.ch portal, Switzerland's mandatory eProcurement platform. The portal covers all major phases of public procurement, from the issuance of invitations to tender to the announcement of contract awards; the entire process is thus implemented without media discontinuities. Other important websites are www.sme.admin.ch, providing a broad spectrum of information for SMEs, and www.admin.ch, the portal of the federal administration.
Turkey
Considerable progress has been made towards the modernisation of the Turkish public sector using eGovernment. eGovernment applications in Turkey have basically focused on enterprises. Based on the seventh annual measurement of the progress of online public service delivery, Turkey ranked 20th.
Strategy and Policy
The e-Transformation Turkey Project was launched in 2003, aiming to revise both the legal framework and policies around ICT in Turkey based on EU standards. Technical and legal infrastructure, eHealth and eCommerce, policies and strategies are, according to the project, the main components of the process of Turkey's transformation into an information society. Two action plans were later developed to give a more detailed technical description of the project: the e-Transformation Turkey Project Short Term Action Plan 2003–2004 and the e-Transformation Turkey Project Short Term Action Plan 2005. According to the Information Society Strategy 2006–2010, which was initiated in 2005, Turkey's main strategic priorities are the following:
a citizen-focused service transformation;
social transformation;
the ICT adoption by businesses;
the modernisation in public administration;
a competitive, widespread and affordable telecommunications infrastructure and services;
a globally competitive IT sector; and
the improvement of R&D and innovation.
Policy objectives have also been outlined in the Ninth Development Plan (2007–2013), which further analyses the targeted transformation of the country in the economic, social, and cultural sectors.
Legislation
The main legal acts regarding eGovernment in Turkey are listed below:
eCommerce Legislation (entered into force in 2003)
Right to Information Act (entered into force in 2004)
eSignatures Legislation (entered into force in 2004)
Law regarding the Protection of Personal Data (entered into force in 2008)
eProcurement Legislation (amended in 2008)
Actors
The person in charge of eGovernment in Turkey is the Minister of State. The governmental body in charge of eGovernment policies is closely attached to the Prime Ministry. The Information Society Department of the State Planning Organisation has been responsible for the policy formulation since 2003.
Ukraine
The main coordinating government body in matters of e-government is the Ministry of Digital Transformation, created on September 2, 2019. It replaced the State e-Government Agency, established on June 4, 2014.
State policy on the development of the information society began with the adoption, between 1998 and 2006, of the Laws of Ukraine "On electronic documents and electronic document circulation", "On the national program of informatization", "On the electronic digital signature", and a number of state acts related to informatization. Later, the Laws of Ukraine "On the Basic Principles of the Information Society in Ukraine for 2007-2015" and "On Information Protection in Information and Telecommunication Systems" were adopted, along with other legislative acts aimed at making these laws more concrete and specific. The 2007-2015 law emphasised the use of information and telecommunication technologies to improve public administration and the relations between the state and citizens. The next stage of e-government development began in 2015, after the adoption of the Agreement of the parliamentary factions of the Verkhovna Rada of Ukraine (in 2014) and of the Development Strategy "Ukraine - 2020", approved by Decree of the President of Ukraine dated January 12, 2015 (No. 5/2015).
In 2018, Ukraine ranked 82nd in the UN e-government ranking, placing it among the countries with a high EGDI (E-Government Development Index).
The biggest achievement of the Ukrainian government in this area is the new public e-procurement system ProZorro. The system has proven so effective and innovative that it won a range of international awards: the annual prize of the Open Government Awards 2016, the World Procurement Award (WPA) 2016 at the Public Sector Awards, etc.
Nonetheless, several other factors have contributed to Ukraine's rise in the global e-government rankings. Jordanka Tomkova, a Swiss funded E-governance Advisor in Ukraine and a Senior Governance Advisor at the INNOVABRIDGE Foundation, highlighted such factors: "First, several important legislative reforms such as the Law on Citizens’ Petitions (2015), Law on Access to Public Information and Open Data (2015) and the Law on the Open Use of Public Funds were passed. Second, several notable online tools were launched by civil society. These have included the notably successful Prozorro electronic procurement platform which in its first 14 months of operation already contributed 1.5 billion hrivnas in state savings. The spending.gov.ua or the Price of the State platforms make tracking of state expenditures a more transparent and interactive process for citizens. E-petitions instruments were adopted by the Presidential Administration, by over 200 local government authorities and more recently by the Cabinet of Ministers. Smart City, open data, e-voting pilots and the growth of regional IT innovation centers such as the Impact Hub in Odesa, Space Hub in Dnipro and iHUB in Vinnytsia are important catalysts to local civic initiatives that focus on social innovation. Lastly, momentum is gaining ground, albeit slowly, in the introduction of e-services where several Ministries (including Justice, Economic Development and Trade, Social Policy, Ecology, Regional Development, Building and Housing, Infrastructure and the State Fiscal Service) have launched some of their first electronic services. These newly launched services are facilitating more rapid and cost-efficient business and construction licensing, monitoring of illegal waste dumps and the automation of a one-stop-shop style customs clearance service".
In 2019, digitalization (e-governance) was made a priority of state policy in the newly formed Honcharuk government, and the respective Ministry of Digital Transformation was created to shape and implement the strategy. In 2020, the Diia app and web portal were officially launched by the ministry, allowing Ukrainians to use various kinds of documents (including ID cards and passports) via their smartphones as well as to access various government services. The government plans to transfer all governmental services to Diia by 2023.
Specialized education in e-government in Europe
In recent years, some European academic institutions have started offering master's courses in e-Government. They are:
Tallinn University of Technology, Estonia:
Koblenz-Landau University, Germany:
University of Trento, Italy:
Örebro University, Sweden:
École Polytechnique Fédérale de Lausanne, Switzerland:
University of Manchester, UK:
See also
List of European Union directives
European Interoperability Framework
MinID
Notes
References
External links
Austria eGovernment infrastructure components:
Platform "Digital Austria"
Citizen Card
HELP.gv.at
PEP online
Help-Business portal
Belgium key players:
Crossroads Bank for Social Security (CBSS)
BELNET
Court of Audit
Commission for the Protection of Privacy
Bulgaria:
Informatsionno Obsluzhvane
Ministry of State Administration and Administrative Reform
State Agency for Information Technology and Communications
National Health Portal (in Bulgarian)
National Revenue Agency
Croatia:
eCroatia implementation
HITRONet Network
Cyprus:
Cyprus Government Portal
Czech Republic:
Czech eGovernment portal
Portal of Prague
Data Boxes information website
Czech POINT network
Tax Portal for the Public
eJustice portal
Ministry of the Interior
Government Council for the Information Society
Denmark:
KMD official website
Digital gateway to the Danish public sector
National IT and Telecom Agency
National Health portal
Public Procurement Portal
Business portal
Estonia
X-road website
Ministry of Economic Affairs and Communications
RISO website
RIA website
Estonian eGovernment portal
EEBone
Public procurement state register
X-road middleware
Health information system
Finland
Suomi.fi portal
EnterpriseFinland portal
HILMA Notification service (in Finnish)
Hansel eProcurement portal
France
eGovernment portal "Service-public.fr"
eGovernment portal "Mon.service-public.fr"
Prime Minister's website (in French)
Taxation portal "Impots.gouv.fr"
Country-wide eProcurement portal "Marchés-publics.fr"
Change of address single notification portal
The Digital Bill
Public Action 2022
DCANT 2018-2020 (in French)
Open platform for French public data
Inter-Ministerial Network of the State (RIE) (in French)
SSO solution France Connect (in French)
Démarches simplifiées portal (in French)
DITP & DINSIC (in French)
About the SGMAP
Greece
HERMES, the National Portal of Public Administration
SYZEFXIS, the National Public Administration Network
KEP, the citizen service centres
AADE, the public revenue agency
GRNET, Greek Research and Technology
Hungary
eGovernment portal
Iceland infrastructure resources
Government's Portal for all governmental ministries, agencies, departments
National information and service portal
Iceland's official gateway for foreigners
Official gateway to the Icelandic Foreign Service
Multicultural information centre to provide assistance to immigrants
Information centre on Icelandic language technology
UT-Web of Information Technology
Ireland
Revenue Online Service
What is PAYE Anytime?
Ireland's Citizen Information website
BASIS portal
eTenders portal
Certificate.ie service
"Acts of the Oireachtas" web portal
Irish Department of Finance
Italy
Public website on the implementation status of the E-Government plan 2012
Website of the National Services Card project
eGovernment portal for citizens
eGovernment portal for businesses
Public eProcurement portal "Acquisti in rete"
Knowledge Management Platform "Magellano"
Taxation portal
Ministry for Innovation and Public Administration
National Agency for Digital Administration – CNIPA
Latvia resources:
The State portal
National Data Transmission Network
Latviesi.com
eSignature
State Archives of Latvia
National Library of Latvia
Lithuania
Ministry of the Interior
Luxembourg
"De Guichet" portal
eLuxemburgensia
Anelo.lu
Macedonia
Government website
eGovernment portal (in Macedonian)
eProcurement System
Education Web Portal
Netherlands
Overheid.nl, the main administrative portal
Citizen's portal
citizen's link
Business portal
Norway
MinID – a personalised log-in system for accessing online public services from the national public sector (see MinID)
Doffin – Database for Public Procurement
MyPage – an online one-stop public service centre
Norway.no -Gateway to the Public Sector
Altinn – Portal for Public Reporting
Centre for Information Security (NorSIS)
Electronic Public Procurement Portal
Norway Digital – the national geographical infrastructure
Poland infrastructure components:
Ministry of Interior and Administration
Ministry of Infrastructure
Electronic Platform of Public Administration Services ePUAP
Public Procurement Office (in Polish)
Geoportal
Portugal
Citizen's portal
Enterprises' portal
Common Knowledge Network
Citizen's card
Portuguese Electronic Passport
National eProcurement portal
Romania
Romanian eGovernment portal e-guvernare
eProcurement system e-licitatie
Slovakia
The Central Public Administration portal (in Slovak)
Ministry of Finance of the Slovak Republic
Slovenia
eUprava portal
eVEM portal
e-SJU portal
Spain:
060.es Portal (in Spanish)
@firma (in Spanish)
Sweden
eGovernment Delegation
Taxation portal
Employment portal
Business Portal
E-Legitimation
Turkey
e-Government Gateway (e-Devlet Kapısı)
Public Secure Network
MERNIS (Central Demographic Administration System, in Turkish Merkezî Nüfus İdaresi Sistemi)
Information technology organizations based in Europe
Desktop virtualization

Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it.
Desktop virtualization can be used in conjunction with application virtualization and user profile management systems, now termed user virtualization, to provide a comprehensive desktop environment management system. In this mode, all the components of the desktop are virtualized, which allows for a highly flexible and much more secure desktop delivery model. In addition, this approach supports a more complete desktop disaster recovery strategy as all components are essentially saved in the data center and backed up through traditional redundant maintenance systems. If a user's device or hardware is lost, the restore is straightforward and simple, because the components will be present at login from another device. In addition, because no data are saved to the user's device, if that device is lost, there is much less chance that any critical data can be retrieved and compromised.
System architectures
Desktop virtualization implementations are classified based on whether the virtual desktop runs remotely or locally, on whether access is required to be constant or is designed to be intermittent, and on whether or not the virtual desktop persists between sessions. Typically, software products that deliver desktop virtualization solutions can combine local and remote implementations into a single product to provide the most appropriate support for specific requirements. The degree of independent functionality of the client device is necessarily interdependent with the server location and access strategy. Virtualization is not strictly required for remote control to exist; rather, virtualization is employed to present independent instances to multiple users and requires a strategic segmentation of the host server and presentation at some layer of the host's architecture. The enabling layer—usually application software—is called a hypervisor.
Remote desktop virtualization
Remote desktop virtualization implementations operate in a client/server computing environment. Application execution takes place on a remote operating system which communicates with the local client device over a network using a remote display protocol through which the user interacts with applications. All applications and data used remain on the remote system with only display, keyboard, and mouse information communicated with the local client device, which may be a conventional PC/laptop, a thin client device, a tablet, or even a smartphone. A common implementation of this approach involves hosting multiple desktop operating system instances on a server hardware platform running a hypervisor. Its latest iteration is generally referred to as Virtual Desktop Infrastructure, or "VDI". (Note that "VDI" is often used incorrectly to refer to any desktop virtualization implementation.)
Remote desktop virtualization is frequently used in the following scenarios:
in distributed environments with high availability requirements and where desk-side technical support is not readily available, such as branch office and retail environments.
in environments where high network latency degrades the performance of conventional client/server applications
in environments where remote access and data security requirements create conflicting requirements that can be addressed by retaining all (application) data within the data center – with only display, keyboard, and mouse information communicated with the remote client.
It is also used as a means of providing access to Windows applications on non-Windows endpoints (including tablets, smartphones, and non-Windows-based desktop PCs and laptops).
Remote desktop virtualization can also provide a means of resource sharing, to distribute low-cost desktop computing services in environments where providing every user with a dedicated desktop PC is either too expensive or otherwise unnecessary.
For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of the user and business.
Presentation virtualization
Remote desktop software allows a user to access applications and data on a remote computer over a network using a remote-display protocol. A VDI service provides individual desktop operating system instances (e.g., Windows XP, 7, 8.1, 10, etc.) for each user, whereas remote desktop sessions run in a single shared-server operating system. Both session collections and virtual machines support full desktop-based sessions and remote application deployment.
The use of a single shared-server operating system instead of individual desktop operating system instances consumes significantly fewer resources than the same number of VDI sessions. At the same time, VDI licensing is both more expensive and less flexible than equivalent remote desktop licenses. Together, these factors can make session-based remote desktop virtualization more attractive than VDI.
VDI implementations allow for delivering personalized workspace back to a user, which retains all the user's customizations. There are several methods to accomplish this.
Application virtualization
Application virtualization improves delivery and compatibility of applications by encapsulating them from the underlying operating system on which they are executed. A fully virtualized application is not installed on hardware in the traditional sense. Instead, a hypervisor layer intercepts the application, which at runtime acts as if it is interfacing with the original operating system and all the resources managed by it when in reality it is not.
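This interception idea can be illustrated with a minimal sketch (a hypothetical model, not any specific product's mechanism; the sandbox root and paths below are made up for illustration):

```python
import os

# Hypothetical sketch: the interception layer maps an application's
# resource paths into a private, virtualized namespace.
VIRTUAL_ROOT = "/tmp/appv_sandbox"

def virtualize_path(path):
    """Redirect an absolute path into the application's private view."""
    return os.path.join(VIRTUAL_ROOT, path.lstrip("/"))

# The app "thinks" it touches the system location; the layer
# transparently redirects the access into the sandbox.
print(virtualize_path("/etc/app.conf"))  # /tmp/appv_sandbox/etc/app.conf
```

In a real product this redirection happens below the application, in the hypervisor or filter-driver layer, so the application itself needs no modification.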
User virtualization
User virtualization separates all of the software aspects that define a user's personality on a device from the operating system and applications, so that they can be managed independently and applied to a desktop as needed without scripting, group policies, or roaming profiles. The term "user virtualization" can be misleading: the technology is not limited to virtual desktops. User virtualization can be used regardless of platform – physical, virtual, cloud, etc. The major desktop virtualization platform vendors, Citrix, Microsoft and VMware, all offer a form of basic user virtualization in their platforms.
Layering
Desktop layering is a method of desktop virtualization that divides a disk image into logical parts to be managed individually. For example, if all members of a user group use the same OS, the core OS needs to be maintained only once for all users who share that layer. Layering can be applied to local physical disk images, client-based virtual machines, or host-based desktops. Windows operating systems are not designed for layering, so each vendor must engineer its own proprietary solution.
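The idea of composing independently managed layers into one image can be sketched as follows (an illustrative model only, not any vendor's mechanism):

```python
# Illustrative only: a desktop image modeled as independently managed
# layers composed in order, with later layers overriding earlier ones,
# much like file-system overlays.
os_layer   = {"/windows/system32": "core OS files"}     # shared by the group
app_layer  = {"/program files/office": "app binaries"}  # managed separately
user_layer = {"/users/alice": "profile and settings"}   # per user

def compose(*layers):
    """Merge layers into one logical disk image; later layers win."""
    image = {}
    for layer in layers:
        image.update(layer)
    return image

desktop = compose(os_layer, app_layer, user_layer)
print(len(desktop))  # 3
```

Updating the shared OS layer then propagates to every composed desktop without touching the per-user layers.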
Desktop as a service
Remote desktop virtualization can also be provided via cloud computing, similar to a software-as-a-service model. This approach is usually referred to as cloud-hosted virtual desktops. Cloud-hosted virtual desktops are divided into two technologies:
Managed VDI, which is based on VDI technology provided as an outsourced managed service, and
Desktop as a service (DaaS), which provides a higher level of automation and real multi-tenancy, reducing the cost of the technology. The DaaS provider typically takes full responsibility for hosting and maintaining the compute, storage, and access infrastructure, as well as applications and application software licenses needed to provide the desktop service in return for a fixed monthly fee.
Cloud-hosted virtual desktops can be implemented using both VDI and Remote Desktop Services-based systems and can be provided through the public cloud, private cloud infrastructure, and hybrid cloud platforms. Private cloud implementations are commonly referred to as "managed VDI". Public cloud offerings tend to be based on desktop-as-a-service technology.
Local desktop virtualization
Local desktop virtualization implementations run the desktop environment on the client device using hardware virtualization or emulation. For hardware virtualization, depending on the implementation both Type I and Type II hypervisors may be used.
Local desktop virtualization is well suited for environments where continuous network connectivity cannot be assumed and where application resource requirements can be better met by using local system resources. However, local desktop virtualization implementations do not always allow applications developed for one system architecture to run on another. For example, it is possible to use local desktop virtualization to run Windows 7 on top of OS X on an Intel-based Apple Mac, using a hypervisor, as both use the same x86 architecture.
References
Further reading
Paul Venezia (April 13, 2011) Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware. The leading server virtualization contenders tackle InfoWorld's ultimate virtualization challenge, InfoWorld
Keith Schultz (December 14, 2011) VDI shoot-out: Citrix XenDesktop vs. VMware View. Citrix XenDesktop 5.5 and VMware View 5 vie for the most flexible, scalable, and complete virtual desktop infrastructure, InfoWorld
Keith Schultz (December 14, 2011) VDI shoot-out: HDX vs. PCoIP. The differences between the Citrix and VMware remote desktop protocols are more than skin deep, InfoWorld
Centralized computing
Remote desktop
Thin clients
fr:Virtual Desktop Infrastructure |
26908549 | https://en.wikipedia.org/wiki/NetworkX | NetworkX | NetworkX is a Python library for studying graphs and networks. NetworkX is free software released under the BSD-new license.
Features
Classes for graphs and digraphs.
Conversion of graphs to and from several formats.
Ability to construct random graphs or construct them incrementally.
Ability to find subgraphs, cliques, k-cores.
Explore adjacency, degree, diameter, radius, center, betweenness, etc.
Draw networks in 2D and 3D.
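A brief sketch of these features in use, building a small graph incrementally and computing the structural measures listed above:

```python
import networkx as nx

# Build a small undirected graph incrementally.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (1, 4), (1, 3)])

print(G.degree[1])         # 3 (node 1 touches nodes 2, 3 and 4)
print(nx.diameter(G))      # 2, the longest shortest path
print(nx.radius(G))        # 1, the minimum eccentricity
print(sorted(nx.center(G)))                      # [1, 3]
print(sorted(map(sorted, nx.find_cliques(G))))   # [[1, 2, 3], [1, 3, 4]]
```

Subgraphs can be extracted in the same style, e.g. `G.subgraph([1, 2, 3])`.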
Suitability
NetworkX is suitable for operation on large real-world graphs: e.g., graphs in excess of 10 million nodes and 100 million edges. Due to its dependence on a pure-Python "dictionary of dictionary" data structure, NetworkX is a reasonably efficient, very scalable, highly portable framework for network and social network analysis.
Integration
NetworkX is integrated into SageMath.
See also
Social network analysis software
JGraph
References
External links
Official website
NetworkX discussion group
Survey of existing graph theory software
NetworkX on StackOverflow
Free mathematics software
Free software programmed in Python
Graph drawing software
Numerical software
Software using the BSD license
Python (programming language) scientific libraries |
20861415 | https://en.wikipedia.org/wiki/MiniGUI | MiniGUI | MiniGUI is a GUI system with support for real-time and embedded operating systems, and aims to be fast, stable, light-weight and cross-platform. It was first released under the GNU GPL in 1999, and has since offered a commercial version with more features, including support for operating systems other than Linux and eCos. MiniGUI has been widely used in handheld terminals, portable media players, and industry instruments.
History
MiniGUI was started by Wei Yongming as a simple interface for a control system based on Linux. The project was developed independently under the GNU GPL until September 2002, when the developers founded Feynman Software Technology and began commercial marketing of the software.
In October 2003, MiniGUI was ported to μClinux and eCos.
Features
Support for many embedded operating systems, including Linux and its derivative μClinux, eCos, VxWorks, pSOS, ThreadX and Nucleus
Support for embedded resources and as a result devices without file systems
Compatibility with Windows resource file formats including icons and cursors.
Skin support
Support for many character sets, including ISO8859 and BIG5
References
External links
MiniGUI official website
Graphical user interfaces |
66079479 | https://en.wikipedia.org/wiki/2020%E2%80%9321%20Little%20Rock%20Trojans%20women%27s%20basketball%20team | 2020–21 Little Rock Trojans women's basketball team | The 2020–21 Little Rock Trojans women's basketball team represents the University of Arkansas at Little Rock during the 2020–21 NCAA Division I women's basketball season. The team, led by eighteenth-year head coach Joe Foley, plays all home games at the Jack Stephens Center along with the Little Rock Trojans men's basketball team. The Trojans are members of the Sun Belt Conference.
Previous season
The Trojans finished the 2019–20 season 12–19, 9–9 in Sun Belt play, to finish sixth in the conference. They reached the 2019–20 Sun Belt Conference Women's Basketball Tournament, where they defeated Appalachian State in the first round before being defeated by Louisiana in the quarterfinals. The remainder of the conference tournament, as well as all postseason play, was cancelled due to the COVID-19 pandemic.
Offseason
Departures
Transfers
Recruiting
Roster
Schedule and results
|-
!colspan=9 style=| Non-conference Regular Season
|-
|-
!colspan=9 style=| Conference Regular Season
|-
|-
!colspan=9 style=| Sun Belt Tournament
See also
2020–21 Little Rock Trojans men's basketball team
References
Little Rock Trojans women's basketball seasons
Little Rock Trojans
Little Rock Trojans women's basketball
4930652 | https://en.wikipedia.org/wiki/Software%20review | Software review | A software review is "a process or meeting during which a software product is examined by project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval".
In this context, the term "software product" means "any technical document or partial document, produced as a deliverable of a software development activity", and may include documents such as contracts, project plans and budgets, requirements documents, specifications, designs, source code, user documentation, support and maintenance documentation, test plans, test specifications, standards, and any other type of specialist work product.
Varieties of software review
Software reviews may be divided into three categories:
Software peer reviews are conducted by one or more colleagues of the author, to evaluate the technical content and/or quality of the work.
Software management reviews are conducted by management representatives to evaluate the status of work done and to make decisions regarding downstream activities.
Software audit reviews are conducted by personnel external to the software project, to evaluate compliance with specifications, standards, contractual agreements, or other criteria.
Different types of peer reviews
Code review is systematic examination (often as peer review) of computer source code.
Pair programming is a type of code review where two persons develop code together at the same workstation.
Inspection is a very formal type of peer review where the reviewers are following a well-defined process to find defects.
Walkthrough is a form of peer review in which the author leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about defects.
Technical review is a form of peer review in which a team of qualified personnel examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards.
Formal versus informal reviews
"Formality" identifies the degree to which an activity is governed by agreed (written) rules. Software review processes exist across a spectrum of formality, with relatively unstructured activities such as "buddy checking" towards one end of the spectrum, and more formal approaches such as walkthroughs, technical reviews, and software inspections, at the other. IEEE Std. 1028-1997 defines formal structures, roles and processes for each of the last three ("formal peer reviews"), together with software audits. IEEE 1028-1997 was succeeded by IEEE 1028-2008.
Research studies tend to support the conclusion that formal reviews greatly outperform informal reviews in cost-effectiveness. Informal reviews may often be unnecessarily expensive (because of time-wasting through lack of focus) and frequently provide a sense of security which is quite unjustified by the relatively small number of real defects found and repaired.
IEEE 1028 generic process for formal reviews
IEEE Std 1028 defines a common set of activities for "formal" reviews (with some variations, especially for software audit). The sequence of activities is largely based on the software inspection process originally developed at IBM by Michael Fagan. Differing types of review may apply this structure with varying degrees of rigour, but all activities are mandatory for inspection:
0. [Entry evaluation]: The review leader uses a standard checklist of entry criteria to ensure that optimum conditions exist for a successful review.
1. Management preparation: Responsible management ensure that the review will be appropriately resourced with staff, time, materials and tools, and will be conducted according to policies, standards or other relevant criteria.
2. Planning the review: The review leader identifies or confirms the objectives of the review, organises a team of reviewers and ensures that the team is equipped with all necessary resources for conducting the review.
3. Overview of review procedures: The review leader, or some other qualified person, ensures (at a meeting if necessary) that all reviewers understand the review goals, the review procedures, the materials available to them and the procedures for conducting the review.
4. [Individual] Preparation: The reviewers individually prepare for group examination of the work under review, by examining it carefully for "anomalies" (potential defects), the nature of which will vary with the type of review and its goals.
5. [Group] Examination: The reviewers meet at a planned time to pool the results of their preparation activity and arrive at a consensus regarding the status of the document (or activity) being reviewed.
6. Rework/follow-up: The author of the work product (or other assigned person) undertakes whatever actions are necessary to repair defects or otherwise satisfy the requirements agreed to at the examination meeting. The review leader verifies that all action items are closed.
7. [Exit evaluation]: The review leader verifies that all activities necessary for successful review have been accomplished and that all outputs appropriate to the type of review have been finalised.
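The fixed ordering of these activities can be illustrated with a small sketch (a hypothetical model of the sequence; the stage names paraphrase the steps above and are not the standard's wording):

```python
# Hypothetical model of the IEEE 1028 activity sequence as an
# ordered checklist; every activity is mandatory for inspection.
ACTIVITIES = [
    "entry evaluation",
    "management preparation",
    "planning the review",
    "overview of review procedures",
    "individual preparation",
    "group examination",
    "rework/follow-up",
    "exit evaluation",
]

def next_activity(completed):
    """Return the next mandatory activity, enforcing the fixed order."""
    for i, activity in enumerate(ACTIVITIES):
        if i >= len(completed):
            return activity
        if completed[i] != activity:
            raise ValueError(f"activity out of order: {completed[i]!r}")
    return None  # review complete

print(next_activity([]))              # entry evaluation
print(next_activity(ACTIVITIES[:5]))  # group examination
```

Modelling the process this way makes the entry and exit evaluations explicit gates rather than optional bookkeeping.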
Value of reviews
The most obvious value of software reviews (especially formal reviews) is that they can identify issues earlier and more cheaply than they would be identified by testing or by field use (the "defect detection process"). The cost to find and fix a defect by a well-conducted review may be one or two orders of magnitude less than when the same defect is found by test execution or in the field.
A second, but ultimately more important, value of software reviews is that they can be used to train technical authors in the development of extremely low-defect documents, and also to identify and remove process inadequacies that encourage defects (the "defect prevention process").
This is particularly the case for peer reviews if they are conducted early and often, on samples of work, rather than waiting until the work has been completed. Early and frequent reviews of small work samples can identify systematic errors in the author's work processes, which can be corrected before further faulty work is done. This improvement in author skills can dramatically reduce the time it takes to develop a high-quality technical document and dramatically decrease the error-rate in using the document in downstream processes.
As a general principle, the earlier a technical document is produced, the greater will be the impact of its defects on any downstream activities and their work products. Accordingly, greatest value will accrue from early reviews of documents such as marketing plans, contracts, project plans and schedules and requirements specifications. Researchers and practitioners have shown the effectiveness of reviewing process in finding bugs and security issues.
See also
Software quality
List of software development philosophies
References
Software review |
9748335 | https://en.wikipedia.org/wiki/Delcam | Delcam | Delcam is a supplier of advanced CAD/CAM software for the manufacturing industry.
The company has grown steadily since being founded formally in 1977, after initial development work at Cambridge University, UK.
It is now a global developer of product design and manufacturing software, with subsidiaries and joint ventures in North America, South America, Europe and Asia with a total staff of over 800 people and local support provided from over 300 re-seller offices worldwide. It was listed on the London Stock Exchange until 6 February 2014, when it was acquired by Autodesk.
It now operates as a wholly owned, independently operated subsidiary of Autodesk.
History
Overview
In 1965, Donald Welbourn saw the possibility of using computers to help pattern makers solve the problems of modelling difficult 3D shapes. He persuaded the Science Research Council to support research at the Cambridge University Engineering Department. Early sponsorship was provided by Ford and Control Data in Germany, whose customers included Volkswagen and Daimler-Benz. In 1974 the Delta Group seconded Ed Lambourne (now Technical Director at Delcam) to the Cambridge team. After Lambourne returned to Delta, a Birmingham-based development centre was established in 1977.
In 1989, the company was bought from Delta Group in a buyout led by Managing Director Hugh Humphreys and Ed Lambourne. The company was renamed Delcam International in 1991 and moved to a new purpose-built office in Small Heath. In July 1997, Delcam Ltd was floated on the Alternative Investment Market to expand international operations and increase the investment development.
The company now has over 300 offices assisting 90,000 users worldwide, with an annual turnover of over £100 million and the largest development team in the industry.
Clive Martell was chief executive from August 2009.
In February 2015, Pete Baxter, former vice president of sales and country manager for Autodesk in the UK, was appointed vice president.
Key Dates
1960s Initial research at Cambridge University
1970s Development taken over by Delta Group
1980s Software sales start
1989 Staff purchase company
1991 New HQ, Birmingham, UK
1997 Delcam floats on UK Market
2001 Alcami Joins Delcam
2005 EGS (FeatureCAM) joins Delcam
2006 IMCS (PartMaker) joins Delcam
2007 Crispin joins Delcam
2008 25,000th Customer
2009 30,000th Customer
2010 35,000th Customer
2012 40,000th Customer
2013 45,000th Customer
2014 Acquired by Autodesk
2015 50,000th Customer
2017 ArtCAM end of development and end of support announced. The ArtCAM code was subsequently licensed to Carveco, who re-branded the software and continue to develop and sell it to previous ArtCAM customers
2018 Autodesk discontinued development and support of the footwear software, with the exception of PowerShape
Products
Advanced Manufacturing Solutions
PowerSHAPE
A 3D CAD (computer-aided design) solution that runs on Microsoft Windows and allows the design of complex 3D models using surfaces, solids and triangles. The software can import 3D point-cloud data to reverse engineer 3D models.
PowerSHAPE is used for a variety of applications, including modelling for manufacture, electrode design, and mould and toolmaking.
The code of PowerSHAPE originates from the DUCT software.
PowerMILL
A CAM solution for the programming of tool paths for 2 to 5 axis CNC Milling (Computer Numerical Control).
PowerINSPECT
A CAD-based inspection solution package for use with many types of inspection hardware, including manual and CNC CMMs, portable arms, optical measuring devices and CNC machine tools (OMV). Developed for use on Microsoft Windows, the software is sold to a wide range of industries.
In 2004 Delcam won a Queen's Award for innovation for PowerINSPECT, and by 2008 PowerINSPECT was Delcam's second biggest-selling product.
PowerMILL Robot Interface
A software package for the programming of machining robots with up to 8 axes.
FeatureCAM
A feature-based CAM solution for milling, turning, and wire EDM.
PartMaker
CAM software for the programming of turn-mill equipment, bar-fed mills and Swiss-type lathes.
Delcam for SolidWorks
A CAM solution based on PowerMILL and FeatureCAM embedded into SolidWorks.
Delcam Exchange
A CAD data translator for reading and writing all commonly used CAD format files.
Delcam Electrode
Software integrated within PowerSHAPE for the automatic generation of solid electrode models for EDM, with optional tool path generation in PowerMILL.
Metrology Solutions
On-Machine Verification
A package for measurement of complex parts directly on the machine tools.
Artistic CADCAM Solutions
ArtCAM JewelSmith
A specialist 3D design and manufacture solution for jewelers.
ArtCAM Pro
A complete solution for the design and manufacture of 2D Artwork & 3D Reliefs.
ArtCAM Insignia
A solution for production-level layout and manufacture of 2D Artwork & 3D Reliefs.
ArtCAM Express
A low-cost introductory design and machining solution.
After the acquisition by Autodesk in 2014, the ArtCAM brand was discontinued. However, the software still exists, licensed under another name, 'Carveco'.
Footwear Solutions
OrderManager
A web-based workflow management tool for tracking orders remotely.
OrthoMODEL
A software package for the design of custom orthotic insoles.
OrthoMILL
A software package for the manufacture of custom orthotic insoles.
iQube Scanner
A foot, plaster cast and foambox scanner.
LastMaker
A software package for 3D last modification and 3D last grading.
ShoeMaker
A software package for the 3D design of footwear.
SoleEngineer
Software for 3D sole unit engineering and grading.
Engineer Pro
Software for 2D pattern engineering and grading.
PatternCut
Software for 2D pattern part nesting and cutting.
KnifeCut
Software for 2D pattern part nesting and cutting for projection cutting machines.
ShoeCost
Software for total footwear costing.
TechPac
A technical documentation software package.
Awards
1991 Queen's Award for International Trade
2003 Queen's Award for Innovation awarded for ArtCAM
2004 Queen's Award for Innovation awarded for PowerINSPECT
2005 Queen's Award for International Trade
2010 Queen's Award for Innovation awarded for Dental CADCAM Software
2011 Queen's Award for International Trade
2011 Ringier Technology Innovation Award for 'Delcam for SolidWorks' from International Metalworking News
2012 MTA Manufacturing Industry Award for Best Supplier Partnership for its relationship with Coventry Engineering Group
2014 MWP Awards 2014 – Best CADCAM or Control System
2014 Asian Manufacturing Awards 2014 - Best CAM Systems Provider
Graduate Programme
Delcam operates a development scheme for engineering and software graduates. The graduate programme consists of five 10-week rotations across different functions of the company. Functions include Professional Services, international support, marketing, ArtCAM, PowerINSPECT, and training. Graduates also have the opportunity to work at Delcam USA in Philadelphia or Salt Lake City.
Boris
Delcam's logo incorporates a computer-modelled spider named Boris. This was produced in 1984 as an experiment in rendering hairs in Delcam's CAD software. Upon seeing a machined version at a show, Managing Director Hugh Humphreys decided to take it home and use it in the company's logo.
References
External links
Official Delcam Site
Computer-aided manufacturing software
Computer-aided design software
Companies based in Birmingham, West Midlands |
31486803 | https://en.wikipedia.org/wiki/List%20of%20Lepidoptera%20of%20Cyprus | List of Lepidoptera of Cyprus | Lepidoptera of Cyprus consist of both the butterflies and moths recorded from Cyprus.
Butterflies
Hesperiidae
Carcharodus alceae (Esper, 1780)
Gegenes pumilio (Hoffmannsegg, 1804)
Pelopidas thrax (Hübner, 1821)
Thymelicus acteon (Rottemburg, 1775)
Lycaenidae
Aricia agestis (Denis & Schiffermuller, 1775)
Azanus jesous (Guerin-Meneville, 1847)
Celastrina argiolus (Linnaeus, 1758)
Cigaritis acamas (Klug, 1834)
Favonius quercus (Linnaeus, 1758)
Freyeria trochylus (Freyer, 1845)
Glaucopsyche paphos Chapman, 1920
Lampides boeticus (Linnaeus, 1767)
Leptotes pirithous (Linnaeus, 1767)
Luthrodes galba (Lederer, 1855)
Lycaena phlaeas (Linnaeus, 1761)
Lycaena thersamon (Esper, 1784)
Polyommatus icarus (Rottemburg, 1775)
Pseudophilotes vicrama (Moore, 1865)
Tarucus balkanica (Freyer, 1844)
Zizeeria karsandra (Moore, 1865)
Nymphalidae
Argynnis pandora (Denis & Schiffermuller, 1775)
Charaxes jasius (Linnaeus, 1767)
Chazara briseis (Linnaeus, 1764)
Danaus chrysippus (Linnaeus, 1758)
Hipparchia syriaca (Staudinger, 1871)
Hipparchia cypriensis Holik, 1949
Hyponephele lupinus (O. Costa, 1836)
Kirinia roxelana (Cramer, 1777)
Lasiommata maera (Linnaeus, 1758)
Lasiommata megera (Linnaeus, 1767)
Libythea celtis (Laicharting, 1782)
Limenitis reducta Staudinger, 1901
Maniola cypricola (Graves, 1928)
Nymphalis polychloros (Linnaeus, 1758)
Pararge aegeria (Linnaeus, 1758)
Pseudochazara anthelea (Hübner, 1824)
Vanessa atalanta (Linnaeus, 1758)
Vanessa cardui (Linnaeus, 1758)
Ypthima asterope (Klug, 1832)
Papilionidae
Papilio machaon Linnaeus, 1758
Zerynthia cerisy (Godart, 1824)
Pieridae
Anthocharis cardamines (Linnaeus, 1758)
Aporia crataegi (Linnaeus, 1758)
Colias croceus (Fourcroy, 1785)
Euchloe ausonia (Hübner, 1804)
Gonepteryx cleopatra (Linnaeus, 1767)
Pieris brassicae (Linnaeus, 1758)
Pieris rapae (Linnaeus, 1758)
Pontia chloridice (Hübner, 1813)
Moths
Adelidae
Adela paludicolella Zeller, 1850
Nemophora minimella (Denis & Schiffermuller, 1775)
Alucitidae
Alucita klimeschi Scholz & Jackh, 1997
Autostichidae
Apatema acutivalva Gozmany, 2000
Apatema apatemella Amsel, 1958
Apatema mediopallidum Walsingham, 1900
Apiletria luella Lederer, 1855
Charadraula cassandra Gozmany, 1967
Orpecacantha aphrodite (Gozmany, 1986)
Symmoca vitiosella Zeller, 1868
Syringopais temperatella (Lederer, 1855)
Blastobasidae
Blastobasis phycidella (Zeller, 1839)
Tecmerium perplexum (Gozmany, 1957)
Carposinidae
Carposina berberidella Herrich-Schäffer, 1854
Carposina scirrhosella Herrich-Schäffer, 1854
Choreutidae
Anthophila fabriciana (Linnaeus, 1767)
Choreutis nemorana (Hübner, 1799)
Coleophoridae
Coleophora alashiae Baldizzone, 1996
Coleophora albicostella (Duponchel, 1842)
Coleophora arenbergerella Baldizzone, 1985
Coleophora bilineella Herrich-Schäffer, 1855
Coleophora granulosella Staudinger, 1880
Coleophora helianthemella Milliere, 1870
Coleophora jerusalemella Toll, 1942
Coleophora lassella Staudinger, 1859
Coleophora luteolella Staudinger, 1880
Coleophora maritimella Newman, 1863
Coleophora mausolella Chretien, 1908
Coleophora ononidella Milliere, 1879
Coleophora parthenica Meyrick, 1891
Coleophora salicorniae Heinemann & Wocke, 1877
Coleophora salinella Stainton, 1859
Coleophora semicinerea Staudinger, 1859
Coleophora tamesis Waters, 1929
Coleophora valesianella Zeller, 1849
Coleophora zernyi Toll, 1944
Goniodoma limoniella (Stainton, 1884)
Cosmopterigidae
Alloclita recisella Staudinger, 1859
Anatrachyntis simplex (Walsingham, 1891)
Ascalenia vanelloides Gerasimov, 1930
Cosmopterix coryphaea Walsingham, 1908
Cosmopterix pararufella Riedl, 1976
Eteobalea dohrnii (Zeller, 1847)
Eteobalea intermediella (Riedl, 1966)
Eteobalea sumptuosella (Lederer, 1855)
Pyroderces argyrogrammos (Zeller, 1847)
Ramphis libanoticus Riedl, 1969
Sorhagenia reconditella Riedl, 1983
Cossidae
Dyspessa ulula (Borkhausen, 1790)
Paropta l-nigrum (Bethune-Baker, 1894)
Paropta paradoxus (Herrich-Schäffer, [1851])
Zeuzera pyrina (Linnaeus, 1761)
Crambidae
Anarpia incertalis (Duponchel, 1832)
Ancylolomia chrysographellus (Kollar & Redtenbacher, 1844)
Eudonia delunella (Stainton, 1849)
Herpetogramma licarsisalis (Walker, 1859)
Metasia albicostalis Hampson, 1900
Metasia parvalis Caradja, 1916
Metasia rosealis Ragonot, 1895
Prionapteryx obeliscota (Meyrick, 1936)
Prochoristis crudalis (Lederer, 1863)
Scoparia berytella Rebel, 1911
Scoparia ingratella (Zeller, 1846)
Scoparia staudingeralis (Mabille, 1869)
Douglasiidae
Tinagma anchusella (Benander, 1936)
Tinagma klimeschi Gaedike, 1987
Elachistidae
Agonopterix cachritis (Staudinger, 1859)
Agonopterix ferulae (Zeller, 1847)
Agonopterix nodiflorella (Milliere, 1866)
Agonopterix rutana (Fabricius, 1794)
Agonopterix scopariella (Heinemann, 1870)
Depressaria daucivorella Ragonot, 1889
Depressaria depressana (Fabricius, 1775)
Elachista sutteri Kaila, 2002
Elachista pigerella (Herrich-Schäffer, 1854)
Ethmia bipunctella (Fabricius, 1775)
Exaeretia ledereri (Zeller, 1854)
Erebidae
Autophila asiatica (Staudinger, 1888)
Autophila dilucida (Hübner, 1808)
Autophila luxuriosa Zerny, 1933
Autophila anaphanes Boursin, 1940
Autophila maura (Staudinger, 1888)
Catephia alchymista (Denis & Schiffermuller, 1775)
Catocala coniuncta (Esper, 1787)
Catocala conversa (Esper, 1783)
Catocala dilecta (Hübner, 1808)
Catocala disjuncta (Geyer, 1828)
Catocala diversa (Geyer, 1828)
Catocala elocata (Esper, 1787)
Catocala eutychea Treitschke, 1835
Catocala nymphaea (Esper, 1787)
Catocala nymphagoga (Esper, 1787)
Catocala promissa (Denis & Schiffermuller, 1775)
Catocala separata Freyer, 1848
Clytie syriaca (Bugnion, 1837)
Coscinia cribraria (Linnaeus, 1758)
Coscinia striata (Linnaeus, 1758)
Drasteria cailino (Lefebvre, 1827)
Dysauxes famula (Freyer, 1836)
Dysgonia algira (Linnaeus, 1767)
Dysgonia torrida (Guenee, 1852)
Eilema complana (Linnaeus, 1758)
Eilema muscula (Staudinger, 1899)
Eublemma candidana (Fabricius, 1794)
Eublemma cochylioides (Guenee, 1852)
Eublemma gratissima (Staudinger, 1892)
Eublemma ostrina (Hübner, 1808)
Eublemma pallidula (Herrich-Schäffer, 1856)
Eublemma parva (Hübner, 1808)
Eublemma polygramma (Duponchel, 1842)
Eublemma scitula Rambur, 1833
Eublemma straminea (Staudinger, 1892)
Eublemma suppura (Staudinger, 1892)
Euplagia quadripunctaria (Poda, 1761)
Euproctis chrysorrhoea (Linnaeus, 1758)
Grammodes bifasciata (Petagna, 1787)
Grammodes stolida (Fabricius, 1775)
Heteropalpia vetusta (Walker, 1865)
Hypena lividalis (Hübner, 1796)
Hypena obsitalis (Hübner, 1813)
Hypenodes cypriaca Fibiger, Pekarsky & Ronkay, 2010
Lygephila craccae (Denis & Schiffermuller, 1775)
Lymantria atlantica (Rambur, 1837)
Metachrostis dardouini (Boisduval, 1840)
Metachrostis velocior (Staudinger, 1892)
Metachrostis velox (Hübner, 1813)
Micronoctua karsholti Fibiger, 1997
Minucia lunaris (Denis & Schiffermuller, 1775)
Ocnogyna loewii (Zeller, 1846)
Ophiusa tirhaca (Cramer, 1773)
Orectis proboscidata (Herrich-Schäffer, 1851)
Orgyia josephina Astaut, 1880
Orgyia trigotephras Boisduval, 1829
Pandesma robusta (Walker, 1858)
Parascotia detersa (Staudinger, 1891)
Parascotia fuliginaria (Linnaeus, 1761)
Parocneria terebinthi (Freyer, 1838)
Pericyma albidentaria (Freyer, 1842)
Pericyma squalens Lederer, 1855
Raparna conicephala (Staudinger, 1870)
Rhypagla lacernaria (Hübner, 1813)
Tathorhynchus exsiccata (Lederer, 1855)
Utetheisa pulchella (Linnaeus, 1758)
Zanclognatha lunalis (Scopoli, 1763)
Zanclognatha zelleralis (Wocke, 1850)
Zebeeba falsalis (Herrich-Schäffer, 1839)
Zekelita ravalis (Herrich-Schäffer, 1851)
Zekelita antiqualis (Hübner, 1809)
Zethes insularis Rambur, 1833
Eriocottidae
Deuterotinea instabilis (Meyrick, 1924)
Euteliidae
Eutelia adulatrix (Hübner, 1813)
Gelechiidae
Altenia mersinella (Staudinger, 1879)
Altenia wagneriella (Rebel, 1926)
Anacampsis obscurella (Denis & Schiffermuller, 1775)
Anacampsis timidella (Wocke, 1887)
Anarsia lineatella Zeller, 1839
Aristotelia calastomella (Christoph, 1873)
Aristotelia decurtella (Hübner, 1813)
Bryotropha arabica Amsel, 1952
Bryotropha azovica Bidzilia, 1997
Bryotropha desertella (Douglas, 1850)
Bryotropha domestica (Haworth, 1828)
Bryotropha figulella (Staudinger, 1859)
Bryotropha hendrikseni Karsholt & Rutten, 2005
Bryotropha hulli Karsholt & Rutten, 2005
Bryotropha plebejella (Zeller, 1847)
Bryotropha sabulosella (Rebel, 1905)
Bryotropha senectella (Zeller, 1839)
Carpatolechia decorella (Haworth, 1812)
Caryocolum marmorea (Haworth, 1828)
Crossobela trinotella (Herrich-Schäffer, 1856)
Deltophora maculata (Staudinger, 1879)
Dichomeris acuminatus (Staudinger, 1876)
Dichomeris lamprostoma (Zeller, 1847)
Dichomeris limbipunctellus (Staudinger, 1859)
Ephysteris deserticolella (Staudinger, 1871)
Ephysteris promptella (Staudinger, 1859)
Ephysteris tenuisaccus Nupponen, 2010
Epidola stigma Staudinger, 1859
Eulamprotes isostacta (Meyrick, 1926)
Eulamprotes nigromaculella (Milliere, 1872)
Exoteleia dodecella (Linnaeus, 1758)
Gelechia senticetella (Staudinger, 1859)
Isophrictis anthemidella (Wocke, 1871)
Istrianis femoralis (Staudinger, 1876)
Mesophleps corsicella Herrich-Schäffer, 1856
Mesophleps oxycedrella (Milliere, 1871)
Mesophleps silacella (Hübner, 1796)
Metanarsia modesta Staudinger, 1871
Metzneria aestivella (Zeller, 1839)
Metzneria agraphella (Ragonot, 1895)
Metzneria campicolella (Mann, 1857)
Metzneria castiliella (Moschler, 1866)
Metzneria lappella (Linnaeus, 1758)
Metzneria littorella (Douglas, 1850)
Metzneria riadella Englert, 1974
Metzneria torosulella (Rebel, 1893)
Microlechia rhamnifoliae (Amsel & Hering, 1931)
Mirificarma eburnella (Denis & Schiffermuller, 1775)
Mirificarma flavella (Duponchel, 1844)
Mirificarma mulinella (Zeller, 1839)
Monochroa melagonella (Constant, 1895)
Neotelphusa cisti (Stainton, 1869)
Nothris verbascella (Denis & Schiffermuller, 1775)
Ochrodia subdiminutella (Stainton, 1867)
Oecocecis guyonella Guenee, 1870
Ornativalva heluanensis (Debski, 1913)
Ornativalva plutelliformis (Staudinger, 1859)
Palumbina guerinii (Stainton, 1858)
Parapodia sinaica (Frauenfeld, 1859)
Pectinophora gossypiella (Saunders, 1844)
Phthorimaea operculella (Zeller, 1873)
Platyedra subcinerea (Haworth, 1828)
Ptocheuusa minimella (Rebel, 1936)
Ptocheuusa paupella (Zeller, 1847)
Recurvaria nanella (Denis & Schiffermuller, 1775)
Schneidereria pistaciella Weber, 1957
Scrobipalpa aptatella (Walker, 1864)
Scrobipalpa bigoti Povolny, 1973
Scrobipalpa instabilella (Douglas, 1846)
Scrobipalpa ocellatella (Boyd, 1858)
Scrobipalpa suaedella (Richardson, 1893)
Scrobipalpa wiltshirei Povolny, 1966
Sitotroga cerealella (Olivier, 1789)
Stomopteryx basalis (Staudinger, 1876)
Stomopteryx remissella (Zeller, 1847)
Geometridae
Agriopis bajaria (Denis & Schiffermuller, 1775)
Aplasta ononaria (Fuessly, 1783)
Aplocera plagiata (Linnaeus, 1758)
Apochima flabellaria (Heeger, 1838)
Ascotis selenaria (Denis & Schiffermuller, 1775)
Aspitates ochrearia (Rossi, 1794)
Camptogramma bilineata (Linnaeus, 1758)
Casilda consecraria (Staudinger, 1871)
Catarhoe hortulanaria (Staudinger, 1879)
Catarhoe permixtaria (Herrich-Schäffer, 1856)
Charissa subtaurica (Wehrli, 1932)
Chesias rhegmatica Prout, 1937
Chiasmia aestimaria (Hübner, 1809)
Chiasmia syriacaria (Staudinger, 1871)
Coenotephria ablutaria (Boisduval, 1840)
Colotois pennaria (Linnaeus, 1761)
Crocallis cypriaca Fischer, 2003
Culpinia prouti (Thierry-Mieg, 1913)
Cyclophora puppillaria (Hübner, 1799)
Dasycorsa modesta (Staudinger, 1879)
Dyscia innocentaria (Christoph, 1885)
Dyscia simplicaria Rebel, 1933
Ennomos lissochila (Prout, 1929)
Epirrhoe galiata (Denis & Schiffermuller, 1775)
Eumannia arenbergeri Hausmann, 1995
Eumera mulier Prout, 1929
Eupithecia breviculata (Donzel, 1837)
Eupithecia centaureata (Denis & Schiffermuller, 1775)
Eupithecia cerussaria (Lederer, 1855)
Eupithecia dubiosa Dietze, 1910
Eupithecia ericeata (Rambur, 1833)
Eupithecia marginata Staudinger, 1892
Eupithecia quercetica Prout, 1938
Eupithecia reisserata Pinker, 1976
Gnopharmia stevenaria (Boisduval, 1840)
Gnophos sartata Treitschke, 1827
Gymnoscelis rufifasciata (Haworth, 1809)
Hypomecis punctinalis (Scopoli, 1763)
Idaea albitorquata (Pungeler, 1909)
Idaea camparia (Herrich-Schäffer, 1852)
Idaea completa (Staudinger, 1892)
Idaea consanguinaria (Lederer, 1853)
Idaea consolidata (Lederer, 1853)
Idaea degeneraria (Hübner, 1799)
Idaea dimidiata (Hufnagel, 1767)
Idaea distinctaria (Boisduval, 1840)
Idaea elongaria (Rambur, 1833)
Idaea filicata (Hübner, 1799)
Idaea inclinata (Lederer, 1855)
Idaea inquinata (Scopoli, 1763)
Idaea intermedia (Staudinger, 1879)
Idaea mimosaria (Guenee, 1858)
Idaea obsoletaria (Rambur, 1833)
Idaea ochrata (Scopoli, 1763)
Idaea ostrinaria (Hübner, 1813)
Idaea palaestinensis (Sterneck, 1933)
Idaea peluraria (Reisser, 1939)
Idaea politaria (Hübner, 1799)
Idaea seriata (Schrank, 1802)
Idaea subsericeata (Haworth, 1809)
Idaea textaria (Lederer, 1861)
Idaea tineata (Thierry-Mieg, 1911)
Idaea trigeminata (Haworth, 1809)
Idaea troglodytaria (Heydenreich, 1851)
Isturgia berytaria (Staudinger, 1892)
Larentia clavaria (Haworth, 1809)
Mattia adlata (Staudinger, 1895)
Menophra berenicidaria (Turati, 1924)
Microloxia herbaria (Hübner, 1813)
Nebula achromaria (de La Harpe, 1853)
Nebula schneideraria (Lederer, 1855)
Nychiodes aphrodite Hausmann & Wimmer, 1994
Nycterosea obstipata (Fabricius, 1794)
Orthostixis cinerea Rebel, 1916
Oulobophora externaria (Herrich-Schäffer, 1848)
Pareulype lasithiotica (Rebel, 1906)
Peribatodes correptaria (Zeller, 1847)
Peribatodes rhomboidaria (Denis & Schiffermuller, 1775)
Peribatodes umbraria (Hübner, 1809)
Perizoma bifaciata (Haworth, 1809)
Phaiogramma etruscaria (Zeller, 1849)
Phaiogramma faustinata (Milliere, 1868)
Problepsis ocellata (Frivaldszky, 1845)
Proteuchloris neriaria (Herrich-Schäffer, 1852)
Protorhoe corollaria (Herrich-Schäffer, 1848)
Protorhoe unicata (Guenee, 1858)
Pseudoterpna coronillaria (Hübner, 1817)
Pseudoterpna rectistrigaria Wiltshire, 1948
Rhodometra sacraria (Linnaeus, 1767)
Rhodostrophia tabidaria (Zeller, 1847)
Rhoptria asperaria (Hübner, 1817)
Scopula decolor (Staudinger, 1898)
Scopula flaccidaria (Zeller, 1852)
Scopula imitaria (Hübner, 1799)
Scopula luridata (Zeller, 1847)
Scopula marginepunctata (Goeze, 1781)
Scopula minorata (Boisduval, 1833)
Scopula sacraria (Bang-Haas, 1910)
Scopula uberaria Zerny, 1933
Scopula ornata (Scopoli, 1763)
Scopula submutata (Treitschke, 1828)
Scopula turbulentaria (Staudinger, 1870)
Scopula vigilata (Sohn-Rethel, 1929)
Selidosema tamsi Rebel, 1939
Xanthorhoe fluctuata (Linnaeus, 1758)
Xanthorhoe oxybiata (Milliere, 1872)
Xenochlorodes olympiaria (Herrich-Schäffer, 1852)
Glyphipterigidae
Digitivalva pulicariae (Klimesch, 1956)
Gracillariidae
Caloptilia falconipennella (Hübner, 1813)
Caloptilia roscipennella (Hübner, 1796)
Parornix acuta Triberti, 1980
Phyllonorycter blancardella (Fabricius, 1781)
Phyllonorycter helianthemella (Herrich-Schäffer, 1861)
Phyllonorycter obtusifoliella Deschka, 1974
Phyllonorycter platani (Staudinger, 1870)
Phyllonorycter roboris (Zeller, 1839)
Phyllonorycter troodi Deschka, 1974
Polymitia eximipalpella (Gerasimov, 1930)
Stomphastis conflua (Meyrick, 1914)
Lasiocampidae
Chondrostega pastrana Lederer, 1858
Lasiocampa terreni (Herrich-Schäffer, 1847)
Limacodidae
Hoyosia cretica (Rebel, 1906)
Micropterigidae
Micropterix cypriensis Heath, 1985
Momphidae
Mompha miscella (Denis & Schiffermuller, 1775)
Nepticulidae
Acalyptris pistaciae van Nieukerken, 2007
Acalyptris platani (Muller-Rutz, 1934)
Ectoedemia alnifoliae van Nieukerken, 1985
Ectoedemia erythrogenella (de Joannis, 1908)
Ectoedemia heringella (Mariani, 1939)
Ectoedemia vivesi A.Lastuvka, Z. Lastuvka & van Nieukerken, 2010
Simplimorpha promissa (Staudinger, 1871)
Stigmella auromarginella (Richardson, 1890)
Stigmella azaroli (Klimesch, 1978)
Stigmella pyrellicola (Klimesch, 1978)
Stigmella pyrivora Gustafsson, 1981
Stigmella rhamnophila (Amsel, 1934)
Trifurcula rosmarinella (Chretien, 1914)
Noctuidae
Acontia lucida (Hufnagel, 1766)
Acontia trabealis (Scopoli, 1763)
Acrapex taurica (Staudinger, 1900)
Acronicta aceris (Linnaeus, 1758)
Acronicta tridens (Denis & Schiffermuller, 1775)
Acronicta rumicis (Linnaeus, 1758)
Aedia leucomelas (Linnaeus, 1758)
Aegle semicana (Esper, 1798)
Agrochola lychnidis (Denis & Schiffermuller, 1775)
Agrochola orientalis Fibiger, 1997
Agrotis bigramma (Esper, 1790)
Agrotis catalaunensis (Milliere, 1873)
Agrotis herzogi Rebel, 1911
Agrotis ipsilon (Hufnagel, 1766)
Agrotis lasserrei (Oberthur, 1881)
Agrotis puta (Hübner, 1803)
Agrotis segetum (Denis & Schiffermuller, 1775)
Agrotis spinifera (Hübner, 1808)
Agrotis trux (Hübner, 1824)
Allophyes asiatica (Staudinger, 1892)
Amephana dalmatica (Rebel, 1919)
Ammoconia aholai Fibiger, 1996
Amphipyra effusa Boisduval, 1828
Amphipyra micans Lederer, 1857
Amphipyra tragopoginis (Clerck, 1759)
Anarta dianthi (Tauscher, 1809)
Anarta trifolii (Hufnagel, 1766)
Anthracia eriopoda (Herrich-Schäffer, 1851)
Apamea monoglypha (Hufnagel, 1766)
Apamea sicula (Turati, 1909)
Aporophyla australis (Boisduval, 1829)
Aporophyla nigra (Haworth, 1809)
Atethmia ambusta (Denis & Schiffermuller, 1775)
Atethmia centrago (Haworth, 1809)
Autographa gamma (Linnaeus, 1758)
Bryophila gea (Schawerda, 1934)
Bryophila microphysa (Boursin, 1952)
Bryophila raptricula (Denis & Schiffermuller, 1775)
Bryophila rectilinea (Warren, 1909)
Bryophila tephrocharis (Boursin, 1953)
Bryophila petrea Guenee, 1852
Bryophila maeonis Lederer, 1865
Callopistria latreillei (Duponchel, 1827)
Caradrina syriaca Staudinger, 1892
Caradrina atriluna Guenee, 1852
Caradrina clavipalpis Scopoli, 1763
Caradrina flavirena Guenee, 1852
Caradrina zandi Wiltshire, 1952
Caradrina aspersa Rambur, 1834
Caradrina kadenii Freyer, 1836
Cardepia affinis (Rothschild, 1913)
Chloantha hyperici (Denis & Schiffermuller, 1775)
Chrysodeixis chalcites (Esper, 1789)
Condica viscosa (Freyer, 1831)
Conistra rubricans Fibiger, 1987
Cornutiplusia circumflexa (Linnaeus, 1767)
Craniophora ligustri (Denis & Schiffermuller, 1775)
Cryphia receptricula (Hübner, 1803)
Cryphia algae (Fabricius, 1775)
Cryphia ochsi (Boursin, 1940)
Ctenoplusia accentifera (Lefebvre, 1827)
Cucullia celsiae Herrich-Schäffer, 1850
Cucullia calendulae Treitschke, 1835
Cucullia syrtana Mabille, 1888
Cucullia barthae Boursin, 1933
Cucullia lanceolata (Villers, 1789)
Deltote pygarga (Hufnagel, 1766)
Dichagyris flammatra (Denis & Schiffermuller, 1775)
Dichagyris adelfi Fibiger, Nilsson & Svendsen, 1999
Dichagyris endemica Fibiger, Svendsen & Nilsson, 1999
Dichagyris squalorum (Eversmann, 1856)
Divaena haywardi (Tams, 1926)
Dryobota labecula (Esper, 1788)
Dryobotodes servadeii Parenzan, 1982
Egira anatolica (M. Hering, 1933)
Epilecta linogrisea (Denis & Schiffermuller, 1775)
Episema korsakovi (Christoph, 1885)
Episema kourion Nilsson, Svendsen & Fibiger, 1999
Eremohadena chenopodiphaga (Rambur, 1832)
Euxoa conspicua (Hübner, 1824)
Euxoa cos (Hübner, 1824)
Euxoa hemispherica Hampson, 1903
Hadena perplexa (Denis & Schiffermuller, 1775)
Hadena sancta (Staudinger, 1859)
Hadena syriaca (Osthelder, 1933)
Hadena adriana (Schawerda, 1921)
Hadena silenides (Staudinger, 1895)
Hecatera bicolorata (Hufnagel, 1766)
Hecatera cappa (Hübner, 1809)
Hecatera dysodea (Denis & Schiffermuller, 1775)
Helicoverpa armigera (Hübner, 1808)
Heliothis nubigera Herrich-Schäffer, 1851
Heliothis peltigera (Denis & Schiffermuller, 1775)
Hoplodrina ambigua (Denis & Schiffermuller, 1775)
Lenisa geminipuncta (Haworth, 1809)
Lenisa wiltshirei (Bytinski-Salz, 1936)
Leucania loreyi (Duponchel, 1827)
Leucania herrichi Herrich-Schäffer, 1849
Leucania palaestinae Staudinger, 1897
Leucania punctosa (Treitschke, 1825)
Leucania putrescens (Hübner, 1824)
Leucochlaena muscosa (Staudinger, 1892)
Lithophane ledereri (Staudinger, 1892)
Lithophane merckii (Rambur, 1832)
Lithophane lapidea (Hübner, 1808)
Luperina diversa (Staudinger, 1892)
Luperina dumerilii (Duponchel, 1826)
Mamestra brassicae (Linnaeus, 1758)
Melanchra persicariae (Linnaeus, 1761)
Mesapamea secalis (Linnaeus, 1758)
Metopoceras omar (Oberthur, 1887)
Mormo maura (Linnaeus, 1758)
Mythimna congrua (Hübner, 1817)
Mythimna ferrago (Fabricius, 1787)
Mythimna l-album (Linnaeus, 1767)
Mythimna languida (Walker, 1858)
Mythimna vitellina (Hübner, 1808)
Mythimna unipuncta (Haworth, 1809)
Mythimna alopecuri (Boisduval, 1840)
Mythimna sicula (Treitschke, 1835)
Noctua orbona (Hufnagel, 1766)
Noctua pronuba (Linnaeus, 1758)
Noctua tertia Mentzer et al., 1991
Noctua warreni Lodl, 1987
Nyctobrya amasina Draudt, 1931
Ochropleura leucogaster (Freyer, 1831)
Olivenebula subsericata (Herrich-Schäffer, 1861)
Oncocnemis arenbergi Hacker & Lodl, 1989
Orthosia cruda (Denis & Schiffermuller, 1775)
Orthosia cypriaca Hacker, 1996
Orthosia incerta (Hufnagel, 1766)
Peridroma saucia (Hübner, 1808)
Perigrapha wimmeri Hacker, 1996
Polymixis alaschia Hacker, 1996
Polymixis aphrodite Fibiger, 1987
Polymixis trisignata (Menetries, 1847)
Polymixis ancepsoides Poole, 1989
Polymixis manisadijani (Staudinger, 1881)
Polymixis rufocincta (Geyer, 1828)
Polymixis iatnana Hacker, 1996
Pseudenargia troodosi Svendsen, Nilsson & Fibiger, 1999
Rhyacia arenacea (Hampson, 1907)
Sesamia cretica Lederer, 1857
Sesamia nonagrioides Lefebvre, 1827
Spodoptera cilium Guenee, 1852
Spodoptera exigua (Hübner, 1808)
Spodoptera littoralis (Boisduval, 1833)
Standfussiana lucernea (Linnaeus, 1758)
Standfussiana nictymera (Boisduval, 1834)
Subacronicta megacephala (Denis & Schiffermuller, 1775)
Thysanoplusia circumscripta (Freyer, 1831)
Thysanoplusia daubei (Boisduval, 1840)
Thysanoplusia orichalcea (Fabricius, 1775)
Tiliacea cypreago (Hampson, 1906)
Trichoplusia ni (Hübner, 1803)
Trigonophora flammea (Esper, 1785)
Tyta luctuosa (Denis & Schiffermuller, 1775)
Xestia c-nigrum (Linnaeus, 1758)
Xestia cohaesa (Herrich-Schäffer, 1849)
Xestia palaestinensis (Kalchberg, 1897)
Xestia xanthographa (Denis & Schiffermuller, 1775)
Xylena exsoleta (Linnaeus, 1758)
Xylena vetusta (Hübner, 1813)
Nolidae
Earias clorana (Linnaeus, 1761)
Earias insulana (Boisduval, 1833)
Meganola togatulalis (Hübner, 1796)
Nola aegyptiaca Snellen, 1875
Nola chlamitulalis (Hübner, 1813)
Nycteola columbana (Turner, 1925)
Nycteola revayana (Scopoli, 1772)
Notodontidae
Furcula interrupta (Christoph, 1867)
Harpyia milhauseri (Fabricius, 1775)
Phalera bucephaloides (Ochsenheimer, 1810)
Thaumetopoea processionea (Linnaeus, 1758)
Thaumetopoea solitaria (Freyer, 1838)
Thaumetopoea wilkinsoni Tams, 1926
Oecophoridae
Crossotocera wagnerella Zerny, 1930
Denisia augustella (Hübner, 1796)
Epicallima formosella (Denis & Schiffermuller, 1775)
Epicallima icterinella (Mann, 1867)
Pleurota pyropella (Denis & Schiffermuller, 1775)
Schiffermuelleria schaefferella (Linnaeus, 1758)
Peleopodidae
Carcina quercana (Fabricius, 1775)
Plutellidae
Plutella xylostella (Linnaeus, 1758)
Praydidae
Prays oleae (Bernard, 1788)
Psychidae
Apterona helicoidella (Vallot, 1827)
Eumasia parietariella (Heydenreich, 1851)
Pachythelia villosella (Ochsenheimer, 1810)
Pseudobankesia aphroditae Weidlich & Henderickx, 2002
Pterophoridae
Agdistis cypriota Arenberger, 1983
Agdistis frankeniae (Zeller, 1847)
Agdistis heydeni (Zeller, 1852)
Agdistis meridionalis (Zeller, 1847)
Agdistis nigra Amsel, 1955
Agdistis protai Arenberger, 1973
Agdistis tamaricis (Zeller, 1847)
Amblyptilia acanthadactyla (Hübner, 1813)
Capperia celeusi (Frey, 1886)
Capperia maratonica Adamczewski, 1951
Capperia marginellus (Zeller, 1847)
Cnaemidophorus rhododactyla (Denis & Schiffermuller, 1775)
Crombrugghia distans (Zeller, 1847)
Crombrugghia reichli Arenberger, 1998
Emmelina monodactyla (Linnaeus, 1758)
Hellinsia inulae (Zeller, 1852)
Hellinsia pectodactylus (Staudinger, 1859)
Merrifieldia malacodactylus (Zeller, 1847)
Puerphorus olbiadactylus (Milliere, 1859)
Stangeia siceliota (Zeller, 1847)
Stenoptilia bipunctidactyla (Scopoli, 1763)
Stenoptilia elkefi Arenberger, 1984
Stenoptilia lucasi Arenberger, 1990
Stenoptilia zophodactylus (Duponchel, 1840)
Stenoptilodes taprobanes (Felder & Rogenhofer, 1875)
Tabulaephorus punctinervis (Constant, 1885)
Wheeleria ivae (Kasy, 1960)
Wheeleria obsoletus (Zeller, 1841)
Pyralidae
Acrobasis getuliella (Zerny, 1914)
Aglossa asiatica Erschoff, 1872
Aglossa exsucealis Lederer, 1863
Ancylosis biflexella (Lederer, 1855)
Ancylosis convexella (Lederer, 1855)
Ancylosis monella (Roesler, 1973)
Ancylosis rhodochrella (Herrich-Schäffer, 1852)
Ancylosis yerburii (Butler, 1884)
Cadra cautella (Walker, 1863)
Cadra figulilella (Gregson, 1871)
Dioryctria mendacella (Staudinger, 1859)
Ephestia elutella (Hübner, 1796)
Ephestia kuehniella Zeller, 1879
Epicrocis anthracanthes Meyrick, 1934
Eurhodope cinerea (Staudinger, 1879)
Euzophera cinerosella (Zeller, 1839)
Euzophera osseatella (Treitschke, 1832)
Euzophera paghmanicola Roesler, 1973
Euzophera umbrosella (Staudinger, 1879)
Faveria sordida (Staudinger, 1879)
Homoeosoma candefactella Ragonot, 1887
Homoeosoma sinuella (Fabricius, 1794)
Hypotia mavromoustakisi (Rebel, 1928)
Hypsopygia almanalis (Rebel, 1917)
Keradere noctivaga (Staudinger, 1879)
Lamoria melanophlebia Ragonot, 1888
Metallostichodes nigrocyanella (Constant, 1865)
Myelois ossicolor Ragonot, 1893
Pempelia albicostella Amsel, 1958
Pempelia cirtensis (Ragonot, 1890)
Phycitodes albatella (Ragonot, 1887)
Phycitodes binaevella (Hübner, 1813)
Phycitodes inquinatella (Ragonot, 1887)
Phycitodes saxicola (Vaughan, 1870)
Pyralis kacheticalis (Christoph, 1893)
Scotomera caesarealis (Ragonot, 1891)
Saturniidae
Saturnia caecigena Kupido, 1825
Scythrididae
Scythris punctivittella (O. Costa, 1836)
Sesiidae
Bembecia albanensis (Rebel, 1918)
Bembecia stiziformis (Herrich-Schäffer, 1851)
Chamaesphecia alysoniformis (Herrich-Schäffer, 1846)
Chamaesphecia masariformis (Ochsenheimer, 1808)
Chamaesphecia minor (Staudinger, 1856)
Chamaesphecia proximata (Staudinger, 1891)
Paranthrene tabaniformis (Rottemburg, 1775)
Pyropteron affinis (Staudinger, 1856)
Pyropteron leucomelaena (Zeller, 1847)
Pyropteron minianiformis (Freyer, 1843)
Synanthedon myopaeformis (Borkhausen, 1789)
Sphingidae
Acherontia atropos (Linnaeus, 1758)
Agrius convolvuli (Linnaeus, 1758)
Clarina syriaca (Lederer, 1855)
Daphnis nerii (Linnaeus, 1758)
Hippotion celerio (Linnaeus, 1758)
Hyles euphorbiae (Linnaeus, 1758)
Hyles livornica (Esper, 1780)
Macroglossum stellatarum (Linnaeus, 1758)
Smerinthus kindermannii Lederer, 1857
Theretra alecto (Linnaeus, 1758)
Stathmopodidae
Neomariania partinicensis (Rebel, 1937)
Tortilia graeca Kasy, 1981
Tineidae
Ateliotum arenbergeri Petersen & Gaedike, 1985
Cephimallota angusticostella (Zeller, 1839)
Edosa fuscoviolacella (Ragonot, 1895)
Eudarcia lobata (Petersen & Gaedike, 1979)
Eudarcia echinatum (Petersen & Gaedike, 1985)
Eudarcia holtzi (Rebel, 1902)
Hapsifera luridella Zeller, 1847
Infurcitinea cyprica Petersen & Gaedike, 1985
Infurcitinea frustigerella (Walsingham, 1907)
Infurcitinea graeca Gaedike, 1983
Infurcitinea nedae Gaedike, 1983
Infurcitinea nigropluviella (Walsingham, 1907)
Monopis imella (Hübner, 1813)
Morophaga morella (Duponchel, 1838)
Myrmecozela parnassiella (Rebel, 1915)
Myrmecozela stepicola Zagulajev, 1972
Nemapogon cyprica Gaedike, 1986
Nemapogon orientalis Petersen, 1961
Nemapogon signatellus Petersen, 1957
Neurothaumasia ankerella (Mann, 1867)
Neurothaumasia macedonica Petersen, 1962
Niditinea fuscella (Linnaeus, 1758)
Niditinea tugurialis (Meyrick, 1932)
Praeacedes atomosella (Walker, 1863)
Tinea murariella Staudinger, 1859
Tinea translucens Meyrick, 1917
Trichophaga tapetzella (Linnaeus, 1758)
Tischeriidae
Coptotriche marginea (Haworth, 1828)
Tortricidae
Acleris undulana (Walsingham, 1900)
Acleris variegana (Denis & Schiffermuller, 1775)
Aethes bilbaensis (Rossler, 1877)
Aethes francillana (Fabricius, 1794)
Aethes kasyi Razowski, 1962
Aethes mauritanica (Walsingham, 1898)
Bactra lancealana (Hübner, 1799)
Bactra venosana (Zeller, 1847)
Cacoecimorpha pronubana (Hübner, 1799)
Celypha rufana (Scopoli, 1763)
Clepsis consimilana (Hübner, 1817)
Cnephasia gueneeana (Duponchel, 1836)
Cnephasia orientana (Alphéraky, 1876)
Cnephasia pumicana (Zeller, 1847)
Cnephasia tofina Meyrick, 1922
Cochylidia heydeniana (Herrich-Schäffer, 1851)
Cochylimorpha alternana (Stephens, 1834)
Cochylimorpha straminea (Haworth, 1811)
Cochylis molliculana Zeller, 1847
Cochylis posterana Zeller, 1847
Crocidosema plebejana Zeller, 1847
Cydia amplana (Hübner, 1800)
Cydia fagiglandana (Zeller, 1841)
Cydia microgrammana (Guenee, 1845)
Cydia pomonella (Linnaeus, 1758)
Cydia splendana (Hübner, 1799)
Cydia trogodana Prose, 1988
Endothenia gentianaeana (Hübner, 1799)
Eucosma campoliliana (Denis & Schiffermuller, 1775)
Grapholita funebrana Treitschke, 1835
Grapholita discretana Wocke, 1861
Grapholita lunulana (Denis & Schiffermuller, 1775)
Gypsonoma sociana (Haworth, 1811)
Hedya pruniana (Hübner, 1799)
Lobesia botrana (Denis & Schiffermuller, 1775)
Lobesia indusiana (Zeller, 1847)
Neosphaleroptera nubilana (Hübner, 1799)
Notocelia uddmanniana (Linnaeus, 1758)
Pammene crataegophila Amsel, 1935
Pammene fasciana (Linnaeus, 1761)
Pelochrista modicana (Zeller, 1847)
Phalonidia contractana (Zeller, 1847)
Phtheochroa decipiens Walsingham, 1900
Phtheochroa dodrantaria (Razowski, 1970)
Pseudococcyx tessulatana (Staudinger, 1871)
Rhyacionia buoliana (Denis & Schiffermuller, 1775)
Spilonota ocellana (Denis & Schiffermuller, 1775)
Zeiraphera isertana (Fabricius, 1794)
Yponomeutidae
Yponomeuta padella (Linnaeus, 1758)
Zelleria oleastrella (Milliere, 1864)
Ypsolophidae
Ypsolopha instabilella (Mann, 1866)
Ypsolopha persicella (Fabricius, 1787)
Zygaenidae
Jordanita graeca (Jordan, 1907)
Jordanita anatolica (Naufock, 1929)
Theresimima ampellophaga (Bayle-Barelle, 1808)
External links
Fauna Europaea
Invertebrates of Cyprus
Lepidoptera
Cyprus
Firefox 3.0

Mozilla Firefox 3.0 is a version of the Firefox web browser released on June 17, 2008, by the Mozilla Corporation.
Firefox 3.0 uses version 1.9 of the Gecko layout engine for displaying web pages. This version fixes many bugs, improves standards compliance, and implements many new web APIs compared to Firefox 2.0. Other new features include a redesigned download manager, a new "Places" system for storing bookmarks and history, and separate themes for different operating systems.
Firefox 3.0 had over 8 million unique downloads the day it was released, and by July 2008 held over 5.6% of the recorded usage share of web browsers. Estimates of Firefox 3.0's global market share were generally in the range of 4–5%, and then dropped as users migrated to Firefox 3.5 and later Firefox 3.6.
Partially as a result of this, between mid-December 2009 and the end of January 2010, Firefox 3.5 was the most popular browser (when counting individual browser versions), passing Internet Explorer 7.
Mozilla ended support for Firefox 3 on March 30, 2010, with the release of 3.0.19.
Development
Firefox 3.0 was developed under the codename Gran Paradiso. This, like other Firefox codenames, is the name of an actual place; in this case, the seventh-highest mountain in the Graian Alps.
Planning began in October 2006, when the development team asked users to submit feature requests that they wished to be included in Firefox 3.
The Mozilla Foundation released the first beta on November 19, 2007, the second beta on December 18, 2007, the third beta on February 12, 2008, the fourth beta on March 10, 2008, and the fifth and final beta on April 2, 2008. The first release candidate was announced on May 16, 2008, followed by a second release candidate on June 4, 2008, and a third (differing from the second release candidate only in that it corrected a serious bug for Mac users) on June 11, 2008. Mozilla shipped the final release on June 17, 2008.
On its release date, Firefox 3 was featured in popular culture, mentioned on The Colbert Report, among others.
Changes and features
Backend changes
One of the major changes in Firefox 3 is the implementation of Gecko 1.9, an updated layout engine. The new version fixes many bugs, improves standards compliance, and implements new web APIs. In particular, it makes Firefox 3 the first official release of a Mozilla browser to pass the Acid2 test, a standards-compliance test for web-page rendering. It also receives a better score on the Acid3 test than Firefox 2.
Some of the new features are defined in the WHATWG HTML 5 specification, such as support for web-based protocol handlers, a native implementation of the getElementsByClassName method, support for safe message-passing with postMessage, and support for offline web applications. Other new features include APNG support, and EXSLT support.
A new internal memory allocator, jemalloc, is used rather than the default libc one.
Gecko 1.9 uses Cairo as a graphics backend, allowing for improved graphics performance and better consistency of the look and feel on various operating systems. Because of Cairo's lack of support for Windows 95, Windows 98, Windows ME, and Windows NT (versions 4.0 and below), and because Microsoft ended support for Windows 98 and Windows ME on July 11, 2006, Firefox 3 does not run on those operating systems. Similarly, the Mac version of Firefox 3 runs only on Mac OS X 10.4 or higher, but, unlike previous versions, has a native Cocoa widget interface.
Frontend changes
On the frontend, Firefox 3 features a redesigned download manager with built-in search and the ability to resume downloads. A new plug-in manager is included in the add-ons window, and extensions can be installed with a package manager. Microformats are supported, letting software extract data stored in documents in a machine-readable form.
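The microformats mentioned above embed machine-readable data in ordinary HTML by way of well-known class names (for example, hCard's "fn" for a formatted name). The following is an illustrative sketch of that idea, not Firefox's actual implementation; the class names and sample document are assumptions for demonstration.

```python
from html.parser import HTMLParser

class MicroformatParser(HTMLParser):
    """Collect text content of elements whose class is a wanted microformat property."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = wanted   # microformat property classes to collect
        self._stack = []       # class lists of currently open elements
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        # Push this element's class names so handle_data can see them
        self._stack.append(dict(attrs).get("class", "").split())

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

    def handle_data(self, data):
        # Record text that appears inside a wanted microformat property
        if self._stack and data.strip():
            for cls in self._stack[-1]:
                if cls in self.wanted:
                    self.fields[cls] = data.strip()

# Hypothetical hCard-style fragment for demonstration
doc = ('<div class="vcard"><span class="fn">Mozilla</span>'
       '<span class="locality">Mountain View</span></div>')
parser = MicroformatParser({"fn", "locality"})
parser.feed(doc)
# parser.fields now maps property names to their text content
```

A consumer could then read `parser.fields` as structured data without any format beyond the page's own markup.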
The password manager in Firefox 3 asks the user whether to remember a password after the login attempt rather than before, so users avoid storing an incorrect password after a failed login.
Firefox 3 uses a "Places" system for storing bookmarks and history in an SQLite backend. The new system stores more information about the user's history and bookmarks, in particular letting the user tag the pages. It is also used to implement an improved frecency-based algorithm for the new location bar auto-complete feature (dubbed the "Awesomebar").
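The "frecency" ranking described above combines how frequently and how recently a page was visited. The exact weights and time buckets used by Places are internal implementation details; the constants below are illustrative assumptions only, sketching the general shape of such a score.

```python
from datetime import datetime, timedelta

# Hypothetical recency buckets: more recent visits earn a larger weight.
# These constants are NOT Firefox's actual values.
RECENCY_BUCKETS = [
    (timedelta(days=4), 100),
    (timedelta(days=14), 70),
    (timedelta(days=31), 50),
    (timedelta(days=90), 30),
]
DEFAULT_WEIGHT = 10  # visits older than the last bucket

def frecency(visit_times, now=None):
    """Score a page from its visit timestamps: each visit adds a
    recency-dependent weight, so frequent recent visits rank highest."""
    now = now or datetime.now()
    score = 0
    for t in visit_times:
        age = now - t
        for bucket, weight in RECENCY_BUCKETS:
            if age <= bucket:
                score += weight
                break
        else:
            score += DEFAULT_WEIGHT
    return score
```

Under this scheme a page visited twice yesterday outranks one visited twice months ago, which matches the behavior the location bar's autocomplete aims for.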
The Mac version of Firefox 3 supports Growl notifications, the Mac OS X spell checker, and Aqua-style form controls.
Themes
To give the browser a more native look and feel on different operating systems, Firefox 3 uses separate themes for Mac OS X, Linux, Windows XP, and Windows Vista. When running on GNOME, Firefox 3 displays icons from the environment; thus, when the desktop environment icon theme changes, Firefox follows suit. Additional icons were also made to be used when no appropriate icon exists; these were made following the Tango Desktop Project guidelines. Additionally, the GTK version has replaced the non-native tab bar that was implemented in Firefox 2.0 and instead uses the native GTK+ tab style.
The default icons and icon layout for Firefox 3 also changed dramatically, taking on a keyhole shape for the forward and back buttons by default on two of the three platforms. However, the keyhole shape does not take effect on Linux or in the small-icon mode. The Iconfactory created the icons for the Microsoft Windows platform. In addition, separate icon sets are displayed for Windows XP and Vista.
Breakpad
Breakpad (previously called "Airbag") is an open-source crash reporter utility which replaced the proprietary Talkback. It was developed by Google and Mozilla and is used in Firefox and Thunderbird. It is significant as the first open-source, multi-platform crash reporting system.
During development, Breakpad was first included May 27, 2007 in Firefox 3 trunk builds on Windows NT, Mac OS X, and, weeks later, on Linux. Breakpad replaced Talkback (also known as the Quality Feedback Agent) as the crash reporter used by the Mozilla software to report crashes of its products to a centralized server for aggregation or case-by-case analysis. Talkback was proprietary software licensed to the Mozilla Corporation by SupportSoft.
Usage
Net Applications noted that the use of Firefox 3 beta rapidly increased to a usage share of 0.62% in May 2008. They interpreted this increase to mean that Firefox 3 betas were stable and that users were using it as their primary browser. Within 24 hours after the release of Firefox 3.0, usage rose from under 1% to over 3% according to Net Applications. It reached a peak of 21.17% in April 2009 before declining as users switched to Firefox 3.5 and later Firefox 3.6.
Guinness World Record
The official date for the launch of Firefox 3 was June 17, 2008, named "Download Day 2008". Mozilla aimed to set the record for the most software downloads in 24 hours.
Download Day officially started at 11:16 a.m. PDT (18:16 UTC) on June 17. With the announced date, the download day was June 18 for time zones greater than GMT +6, which includes half of Asia and all of Oceania.
The large number of users attempting to access the Mozilla website on June 17 caused it to become unavailable for at least a few hours, and attempts at upgrading to the new version resulted in server timeouts. The site was not updated for the download of Firefox 3 until 12:00 PDT (19:00 UTC), two hours later than originally scheduled.
When "Download Day" ended at 11:16 AM PDT (18:16 UTC) June 18, 8,249,092 unique downloads had been recorded. On July 2 Mozilla announced they had won the record, with 8,002,530 unique downloads and parties in over 25 countries.
As of July 7, 2008, more than 31 million people had downloaded Firefox 3.
Gareth Deaves, Records Manager for Guinness World Records, complimented Mozilla, saying, "Mobilizing over 8 million internet users within 24 hours is an extremely impressive accomplishment and we would like to congratulate the Mozilla community for their hard work and dedication."
Operating system support
Firefox 3 runs on Windows 2000 and later, and on Windows 98 and ME with the third-party Kernel Extender installed.
Critical response
While the new functionality of the location bar, dubbed the "Awesomebar", was generally well received, some users disliked its user interface and performance changes, so much so that extensions were created to revert its behavior. Firefox 3 received CNET's Editors' Choice award in June 2008.
See also
Firefox early version history
References
External links
Mozilla Firefox homepage for end-users
Mozilla Firefox project page for developers
Mozilla EULA
Review of Firefox in PC Magazine
3
2008 software
Free software programmed in C++
Gopher clients
History of web browsers
Linux web browsers
MacOS web browsers
POSIX web browsers
Unix Internet software
Windows web browsers
Software that uses XUL |
38846447 | https://en.wikipedia.org/wiki/DB%20Networks | DB Networks | DB Networks is a privately held information security company founded in the United States. The company is headquartered in San Diego, California, and its regional offices are located in Palo Alto, California and Seattle, Washington.
In May 2018, DB Networks announced that it would change its name to DB CyberTech.
History
DB Networks was founded in the United States in 2009 to provide database security, including database infrastructure assessment, compromised credential identification, and SQL injection defense, predominantly to the financial services industry and federal government. The company was initially financed by angel investors. In 2012 the company raised $4.5M in venture capital from Khosla Ventures. In 2014 the company closed a $17 million round of funding led by Khosla Ventures and Grotech Ventures.
The company's first product, the ADF-4200, was launched in February 2013. Also in February 2013 the company announced a partnership with Alamo City Engineering Services (ACES) to offer its products to the US Military and civilian federal agencies. In October 2013 the company announced the IDS-6300, later renamed DBN-6300, originally as a SQL injection defense and database infrastructure security product.
In 2013 DB Networks was invited to join the Cync cybersecurity technology program under the direction of Northrop Grumman and the University of Maryland, Baltimore County Research Park Corporation. The Cync program identifies innovative technologies to combat cybersecurity threats.
In 2014 AMP Tech Solutions was announced as a channel partner to offer DB Network products through the NASA Solutions for Enterprise-Wide Procurement (SEWP) IV contract to the United States federal agencies.
In 2015 DB Networks was awarded two United States patents for their database security technologies.
DB Networks began licensing their database security software and technologies to original equipment manufacturers (OEMs) in February 2016 coinciding with the launch of their Layer 7 Database Sensor. Partnerships have been announced with FireEye, Cyphort, and Security On-Demand.
Technology
DB Networks' database security technology is based on machine learning and behavioral analysis, as opposed to the traditional information security approach of human-generated blacklists or whitelists. The machine learning and behavioral analysis platform learns each application's proper SQL transaction behavior. Compromised credentials and rogue SQL statements, such as those in a SQL injection attack, deviate from the established model and raise an alarm as a database attack. Machine learning and behavioral analysis technologies can prevent advanced and zero-day database attacks without prior threat intelligence or the need to establish and maintain signature files of known attack strings.
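The behavioral approach described above can be illustrated with a minimal sketch (hypothetical code, not DB Networks' actual implementation): during a learning phase each SQL statement the application emits is reduced to a structural template, and at run time any statement whose template was never observed is flagged as a potential attack.

```python
import re

def normalize(sql: str) -> str:
    """Reduce a SQL statement to its structural template by
    replacing literals with placeholders."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals
    return re.sub(r"\s+", " ", sql).strip().lower()

class SqlBehaviorModel:
    """Learn the templates an application normally emits; flag deviations."""
    def __init__(self):
        self.known = set()

    def learn(self, sql: str):
        self.known.add(normalize(sql))

    def is_anomalous(self, sql: str) -> bool:
        return normalize(sql) not in self.known

model = SqlBehaviorModel()
model.learn("SELECT name FROM users WHERE id = 42")

# Legitimate queries share the learned template; a classic injection
# alters the statement's structure and falls outside the model.
print(model.is_anomalous("SELECT name FROM users WHERE id = 99"))           # False
print(model.is_anomalous("SELECT name FROM users WHERE id = 1 OR '1'='1'")) # True
```

Because the model describes only normal behavior, no signature of any known attack string is needed, which is what allows this class of technique to catch zero-day attacks.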
Products
DB Networks DBN-6300 was announced in October 2013 (originally referred to as the IDS-6300). The DBN-6300 is a 2U purpose-built database security appliance. It uses machine learning and behavioral analysis to identify database attacks in real-time. A virtual appliance version of the DBN-6300 was launched in February 2014, now referred to as the DBN-6300v.
The Layer 7 Database Sensor was launched in February 2016. It enables other information security product manufacturers to integrate DB Networks database security technology into their products.
In March 2016, "insider threat" protection capabilities were added to the DBN-6300 and Layer 7 Database Sensor products.
Industry Recognition
2016
Best Data Center Security Solution for 2016 - Cyber Defense Magazine
Grand Trophy - Info Security Awards
Top-30 Cybersecurity Providers in Silicon Valley
Ranked 47 in the Cybersecurity 500
2015
The Best Data Center Security Product for 2015 - Cyber Defense Magazine
Grand Trophy - Info Security Awards
Top 100 Cyber Security Companies
2014
CIO Review's 20 Most Promising Enterprise Security Companies
"Grand Global Excellence" Trophy for the Best of the Best in Security Technology - Info Security's Global Excellence Awards
Most Innovative Intrusion Detection System for 2014 - Cyber Defense Magazine
Finalist for SC Magazine 2014 Best Database Security Solution
2013
Finalist for SC Magazine 2013 Best Database Security Solution
2013 Info Security's Global Excellence Awards
Gold winner - Database Security
Gold winner - Firewalls
Gold winner - New Product Launch
See also
SQL
Relational database
Information security
Digital forensics
Code injection
Cyberwarfare
References
External links
Computer security software companies
Networking companies
Server appliance
Software companies established in 2009
Companies based in San Diego
Networking companies of the United States
Technology companies of the United States |
5824710 | https://en.wikipedia.org/wiki/EnterpriseDB | EnterpriseDB | EnterpriseDB (EDB), a privately held company based in Massachusetts, provides software and services based on the open-source database PostgreSQL (also known as Postgres), and is one of the largest contributors to Postgres. EDB develops and integrates performance, security, and manageability enhancements into Postgres to support enterprise-class workloads. EDB has also developed database compatibility for Oracle to facilitate the migration of workloads from Oracle to EDB Postgres and to support the operation of many Oracle workloads on EDB Postgres.
EDB provides a portfolio of databases and tools that extend Postgres for enterprise workloads. This includes fully managed Postgres in the cloud, extreme high availability for Postgres, command-line migration tools, a Kubernetes Operator and container images, management, monitoring and optimization of Postgres, enterprise-ready Oracle migration tools, and browser-based schema migration tools.
Gartner positioned EnterpriseDB in the 2017 Magic Quadrant for Operational Database Management Systems, making it one of just 11 vendors to be included in the report and the fifth consecutive year the company was recognized. Gartner positioned EnterpriseDB in the Leaders Quadrant in its Magic Quadrant for Operational Database Management Systems in October 2014 and again in September 2015. EnterpriseDB was recognized in the Challengers Quadrant in the Magic Quadrant for Operational Database Management Systems in October 2016.
EnterpriseDB was purchased by Great Hill Partners in 2019.
History
EDB was founded in 2004. The growing acceptance of open source software created a market opportunity, and the company wanted to challenge the database incumbents with a standards-based product that was compatible with other vendors' solutions. EnterpriseDB sought to develop an open source-based, enterprise-class relational database to compete with established vendors at an open source price point.
The company chose Postgres as its technology foundation because of its record of more than 20 years of large-scale commercial deployments, its thriving developer community, and its reputation as the most robust open-source database available.
EDB introduced its database, EnterpriseDB 2005, in 2005. It was named Best Database Solution at LinuxWorld that year, beating solutions from Oracle, MySQL, and IBM. EDB renamed the database EnterpriseDB Advanced Server with its March 2006 release. The database was renamed Postgres Plus Advanced Server in March 2008. In April 2016, with the introduction of the EDB Postgres Platform, EDB's fully integrated next-generation data management platform, the database was renamed to EDB Postgres Advanced Server.
In 2020, EDB acquired 2ndQuadrant, a global Postgres solutions and tools company based in the UK, becoming a leader in the PostgreSQL market.
Products
BigAnimal
BigAnimal is EDB's fully managed Postgres in the cloud. This is EDB's database-as-a-Service (DBaaS) offering, initially available on Microsoft Azure, where it can be procured via the Azure marketplace, and expected to be available via AWS and Google Cloud.
Postgres-BDR
Postgres-BDR (short for Bi-Directional Replication) provides high availability for Postgres databases, with up to five nines of availability and flexible deployment options.
Migration Toolkit
This is a command-line tool to migrate tables and data from a company's DBMS to Postgres or EDB Postgres Advanced Server.
Cloud Native Postgres
Cloud Native Postgres adds agility to Postgres from a deployment and auto-healing perspective, making Postgres the data store for microservices.
Postgres Enterprise Manager
Postgres Enterprise Manager manages, monitors, and optimizes Postgres and EDB Postgres Advanced Server.
EDB Postgres Advanced Server (EPAS)
EDB Postgres Advanced Server (EPAS) is EDB’s core database product, and is included in EDB's Enterprise subscription plan. It includes functionality that focuses on Oracle database compatibility, security, productivity, and performance.
Migration Portal
Migration Portal provides detail on what enterprises need to migrate their Oracle databases to PostgreSQL. It then starts the migration by producing DDLs that are compatible with EDB Postgres Advanced Server.
EDB Postgres Containers
EDB is a member of the Red Hat OpenShift Primed Program with a set of two certified Linux Container images published in the Red Hat Container Catalog. One container is preconfigured with the EDB Postgres Advanced Server 9.5 database, EDB Postgres Failover Manager, and pgPool for load balancing. The other container is preconfigured with EDB Postgres Backup and Recovery.
Postgres Vision
Postgres Vision is an annual conference started by EnterpriseDB in 2016 with Kathleen Kennedy of the Massachusetts Institute of Technology. The first event was held in October 2016 in San Francisco and the second event was held in June 2017 in Boston. Postgres Vision is multi-day event that focuses on current and future enterprise usage of Postgres and open source data management. The event draws database administrators, architects, data scientists, line of business executives, C-level executives, and developers to network, share best practices, and explore new and emerging use cases for Postgres.
References
Technology companies based in Massachusetts
Software companies based in Massachusetts
Free database management systems
American companies established in 2004
Free software companies
PostgreSQL
Cross-platform software
Software companies established in 2004
Software companies of the United States |
6639920 | https://en.wikipedia.org/wiki/DePaul%20University%20College%20of%20Computing%20and%20Digital%20Media | DePaul University College of Computing and Digital Media | The College of Computing and Digital Media at DePaul University (formerly the School of Computer Science, Telecommunications and Information Systems) is located in Chicago, Illinois, United States. The college is organized into three schools: the School of Cinematic Arts, which is home to the animation and cinema programs; the School of Computing, which houses programs in computer and information sciences; and the School of Design, which houses programs in game design, user experience design, and digital communication and media arts. The college is part of DePaul University’s Loop Campus.
History
The College of Computing and Digital Media originated as the Department of Computer Science in the College of Liberal Arts and Sciences in 1981 with Helmut Epp as its founding chairman. That same year, the department moved into 243 South Wabash Ave., one of three buildings that DePaul University had purchased shortly before and named as part of its Loop Campus. On 1 July 1995, the department was established as a freestanding school within DePaul, the School of Computer Science, Telecommunications, and Information Systems.
Initial degree offerings by the School of Computer Science, Telecommunications, and Information Systems included undergraduate and graduate degree programs in Computer Science, Telecommunications and Information Systems, and Software Engineering. The school also offered a Master of Science degree in Management Information Systems jointly with the School of Computer Science, Telecommunications, and Information Systems and DePaul's College of Commerce.
Over the next several years, the school added several undergraduate and graduate programs in computer science-related fields, as well as joint degree programs with other colleges at DePaul. In 2000-2001, the school expanded into graphics/animation with the BS in Computer Graphics and Animation. BS and BA programs in digital cinema were introduced during the 2003-2004 academic year. As the school became a prominent provider of technology arts degrees in addition to computer science and information technology degrees, it was renamed to better reflect its program offerings.
From the School of Computer Science, Telecommunications, and Information Systems to College of Computing and Digital Media
On April 15, 2008 the School of Computer Science, Telecommunications, and Information Systems changed its name to the College of Computing and Digital Media. The college was then organized into two schools: the School of Computing and the School of Cinema & Interactive Media. In 2015, a third school, the School of Design, was added.
Academics
The school’s original name — Computer Science, Telecommunications, and Information Systems — denotes three prominent degree programs at the time of its founding in 1995. Over the years, the school has become a prominent provider of technology arts degrees in such disciplines as cinema, animation, and game design and programming. CDM's Film & Television program is the most popular undergraduate major at DePaul.
The college offers 18 undergraduate, 24 graduate, and 2 PhD programs, as well as several joint degrees with other colleges at DePaul University. CDM currently enrolls approximately 5,000 students.
CDM has nationally ranked programs in film, animation, gaming, and graphic design. DePaul has been designated as a National Center of Academic Excellence in Information Assurance/Cybersecurity for academic years 2014-2021 by the United States Department of Homeland Security and National Security Agency.
CDM students developed the popular computer game Octodad. In 2015, animation student Carter Boyce was the school’s first recipient of a Student Academy Award for his film Die Flucht.
The Institute for Professional Development
Continuing education and certificate programs are offered by the college’s Institute for Professional Development (IPD). IPD has various programs in Big Data and Data Science Technologies, Cloud Computing Technologies, Database Technologies, Management, Network Technologies, Software Development, and Special Topics in IT.
Course OnLine
Course OnLine is the College of Computing and Digital Media's customized technology for the capture and rebroadcast of classroom activity. The system captures four components: audio, video, PC screen, and Whiteboard. Within two hours of the end of a class, the pieces are synchronized and uploaded to Desire2Learn, a course management system through which instructors may post announcements, assignments, course notes, and various other supplements to the lecture content. While Course OnLine is frequently used by students to take an entire class on the Internet, it also provides a supplement for traditional classroom students, allowing them to replay the classroom lecture throughout the term of the course.
References
DePaul University
Computer science departments in the United States |
50914993 | https://en.wikipedia.org/wiki/Bicom%20Systems | Bicom Systems | Bicom Systems is a producer and vendor of Asterisk (PBX)-based unified communications devices for VoIP businesses. Bicom Systems uses open standards telephony. Products include all of the software and hardware components involved in building a VoIP business or ITSP.
History
Bicom Systems began researching and creating telecoms management software in 2003. An Asterisk pioneer, Bicom Systems launched the first ever open standards turnkey telephony system in 2004.
Bicom Systems has seven products including a desktop and mobile softphone, an IP PBX telephony platform, and an IP Key Systems product. Supported hardware manufacturers include Polycom, Yealink, and others.
See also
Comparison of VoIP software
List of SIP software
IP PBX
References
Unified communications
Teleconferencing
Videotelephony
Telephone service enhanced features
business software
VoIP software
C (programming language) software
communication software
Telephone exchange equipment
Lua (programming language)-scriptable software
Asterisk (PBX) |
67641838 | https://en.wikipedia.org/wiki/Sito%20Pons%20500cc%20Grand%20Prix | Sito Pons 500cc Grand Prix | Sito Pons 500cc Grand Prix is a 1990 racing video game developed and published by the Spanish company Zigurat Software (previously known as Made in Spain) for Amstrad CPC, MS-DOS, MSX and ZX Spectrum. Featuring former Spanish racer Sito Pons and themed around motorcycle racing, the game pits players, driving the Honda NSR500, against AI-controlled opponents across various countries to qualify in the 500cc class of Grand Prix motorcycle racing.
Sito Pons 500cc Grand Prix was created in conjunction with Carlos Sainz: World Rally Championship at the end of the golden age of Spanish software by most of the same team at Zigurat who worked on previous licensed sports titles such as Paris-Dakar (1988) and Emilio Sanchez Vicario Grand Slam. Zigurat approached Sito Pons to act as endorser due to their local market and motorsports being a hobby among Zigurat staff. Conversions for Amiga and Atari ST were planned but never released.
Sito Pons 500cc Grand Prix was met with mostly positive reception from critics across all platforms since its release; praise was given to the graphics, realism and sense of speed but criticism was geared towards the high difficulty level and sound design. The title was included as part of the Pack Powersports compilation for all platforms in 1991.
Gameplay
Sito Pons 500cc Grand Prix is a top-down motorcycle racing game similar to Aspar GP Master (1988), in which players observe from above and race as Spanish motorcycle racer Sito Pons, riding the Honda NSR500 across the 14 countries comprising the 500cc class of Grand Prix motorcycle racing, such as the United States, Spain, Italy, Germany, Austria and Yugoslavia. The game uses an isometric perspective, as in Carlos Sainz: World Rally Championship, to portray a TV-style viewpoint.
Sito Pons 500cc has three modes of play: Race, Practice and Demo. Race is the main championship mode where players are pitted against AI-controlled racers such as Kevin Schwantz and Wayne Rainey to qualify. Finishing each race will allow the player to achieve a certain number of standings points, depending on the position the player finishes in; a first-place finish rewards a maximum of 20 points for each track, and the total maximum number of points possible is 280. Players can also resume their progress via password. Practice mode is recommended for beginners to train in any of the available courses without opponents. Demo mode allows players to view gameplay with an AI-controlled Pons.
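The standings arithmetic above can be checked directly; the constants come from the article itself, and the helper below is purely illustrative (not code from the game):

```python
# Per the article: 14 Grand Prix rounds, a first-place finish worth 20 points.
TRACKS = 14
WIN_POINTS = 20

def max_season_points(tracks: int = TRACKS, win_points: int = WIN_POINTS) -> int:
    """Maximum standings points from winning every race of the season."""
    return tracks * win_points

print(max_season_points())  # 280
```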
Development and release
Sito Pons 500cc Grand Prix was created in conjunction with Carlos Sainz: World Rally Championship at the end of the golden age of Spanish software by Zigurat Software (previously Made in Spain), whose team worked on previous licensed sports titles such as Paris-Dakar (1988) and Emilio Sanchez Vicario Grand Slam (featuring former Spanish tennis player Emilio Sánchez). The project was handled by three members: Fernando Rada and Carlos Granados acted as co-programmers. Artist Jorge Granados was responsible for the visuals and design for all versions, while all three members served as designers.
The team recounted the project's development process in interviews, stating that Zigurat approached former Spanish motorcycle racer Sito Pons to act as endorser of the game because of their local market and because motorsports was a hobby among Zigurat staff. Sito Pons shares the same isometric perspective as Carlos Sainz: World Rally; however, Zigurat placed more emphasis on AI for the opposing racers. Carlos stated that Rada and Jorge were in charge of the game's driving aspect.
Sito Pons 500cc Grand Prix was published exclusively in Spain by Zigurat Software for Amstrad CPC, MS-DOS, MSX and ZX Spectrum in 1990. The MSX version was also distributed by Erbe Software. The CPC version featured more on-screen colors than the MSX and ZX Spectrum versions. Versions for the Amiga and Atari ST were planned but never released. In 1991, the title was included as part of the Pack Powersports compilation for all platforms.
Reception
Sito Pons 500cc Grand Prix was met with mostly positive reception from critics across all platforms. Spanish magazine MicroHobby reviewed the ZX Spectrum version, praising the realism, sense of speed, television-broadcast-style perspective, colorful graphics and addictive playability, describing it as a "neatly carried out" game. Micromanía's José Emilio Barbero reviewed the Amstrad CPC version, commending the pseudo-3D perspective, detailed visuals, realism, accurate controls and "incredibly good" scrolling. However, Barbero criticized the excessively high difficulty and the sound design, owing to the unpleasant, continuous sound of the motorcycle's engine. Barbero also gave a brief outlook on the ZX Spectrum version, criticizing its less colorful and detailed visuals as well as the high difficulty, but praised it for retaining the same technical performance as the Amstrad CPC version.
Reviewing the MSX version, MSX Club's Jesús Manuel Montané drew comparisons with titles such as Aspar GP Master due to its premise, criticizing the monochrome graphics and high difficulty level, but remarked that its scheme was valid and that the game was moderately entertaining. In a retrospective outlook, IGN Spain's Víctor Ayora regarded Sito Pons 500cc as one of the very best games released on 8-bit computers. Ayora stated that Sito Pons successfully implemented isometric perspective and speed but noted its extreme difficulty. When comparing the platforms, he considered the Amstrad CPC version the best graphics-wise, owing to a larger color palette than the ZX Spectrum version, though he reported that every version played nearly the same.
References
External links
Sito Pons 500cc Grand Prix at GameFAQs
Sito Pons 500cc Grand Prix at MobyGames
1990 video games
Amstrad CPC games
Cancelled Amiga games
Cancelled Atari ST games
DOS games
Motorcycle video games
MSX games
Racing video games
Single-player video games
Top-down racing video games
Video games developed in Spain
Video games set in Australia
Video games set in Austria
Video games set in Belgium
Video games set in England
Video games set in France
Video games set in Germany
Video games set in Hungary
Video games set in Italy
Video games set in Spain
Video games set in Sweden
Video games set in the United States
ZX Spectrum games |
10807218 | https://en.wikipedia.org/wiki/Academic%20dress%20of%20universities%20in%20Queensland%2C%20Australia | Academic dress of universities in Queensland, Australia | There are a number of universities in Queensland, Australia, all with distinct academic dress.
University of Queensland
The University of Queensland follows the Cambridge pattern for its academic regalia; the design nuances of its hoods and gowns fall within the Groves classification system of academic regalia. Prior to the current colour-coding system, which consists of special dye-lot colours described in the British Colour Codes, a more complex course-based colour system was in place for each degree. Revisions took place during 1998, with the changes visible in the graduating classes of 1999. With the merger of the Queensland College of Agriculture at Gatton in 1990, a series of other complications arose, with the Gatton Council wearing robes unique to them. However, with the disbanding of the Gatton Council circa 2005, their dress regulations have passed into university history.
Robes
The robe is based on the Cambridge design, with bell sleeves and gathering at the shoulders and across the back; the straight panels at the front are worn open, with arms placed through openings at the front of the sleeves, for Certificate, Diploma, Bachelor and other graduates up to Masters. Masters robes are more ornate: the pointed ends on the elongated sleeves mark the long-sleeved robe worn by graduates with awards of Masters, Juris Doctor and Professional Doctorates. A Doctor of Philosophy has, in addition, panels of rich scarlet silk down the front, while Higher and Honorary Doctorates wear a rich bright red robe with luxurious gold-lined flowing sleeves and panels down the front.
Stoles
Students graduating from either Certificate or Diploma programs are permitted to wear a stole over their gown. The stole's colour denotes the rank of the graduate. All stoles are without pleats at the neck.
Certificate - Pearl White
Diploma - Royal Blue
Associate Diploma Undergraduate - Saffron
Associate Diploma Graduate - Silver
Graduate Certificate - Ruby
Graduate Diploma - Gold
All awards with "graduate" in the title wear the Bachelor hood over the stole.
Special provision is made for graduates from Indigenous Australian communities (either Aboriginal or Torres Strait Islander), allowing them to wear a stole representing their communities' flag. The Aboriginal stole is divided equally down the center (length-ways), with ruby on the left and black on the right and is fringed at the ends in gold. At either end of the stole an image of the Australian continent depicting the Aboriginal flag is embroidered approximately from the base. The Torres Strait islander stole is a pale blue with thick edgings in aqua colored silk and a divider bands between the edging and central section in black, with ends fringed in pearl. At either end of the stole an image of the Australian continent depicting the Torres Strait flag is embroidered approximately from the base.
Hoods
On top (of the stole, if awarded, in the case of Graduate Certificates or Graduate Diplomas; otherwise, on top of the gown) is the Cambridge hood, type 1 in the Groves classification system; the special dye-lot colours have long been made by one historical robe maker. The black Cambridge hood's design changes with the award (whether Bachelor, Masters or Doctorate), and each award has its own unique gown and hood. Bachelor's degree holders wear hoods partially lined with pearl white silk; Masters and Juris Doctors wear a hood fully lined with rich blue silk; for Doctor of Philosophy graduates, the hood is fully lined with crimson silk; and for Professional Doctorates, it is fully lined with burgundy silk. Higher doctorates and honorary doctorates are an exception: in their case the gown is made of the finest material in a special dye lot of bright red, with gold silk panels and luxurious gold-lined sleeves, and a bright red hood lined in luxurious gold silk, denoting the highest academic achievement or excellence in achievement that The University of Queensland recognises.
Headwear
All graduates up to and including the award of Masters wear a black trencher with a black tassel. Professional Doctorates, Doctors of Philosophy and Higher Doctorates wear a black velvet Tudor bonnet with a red silk cord tied in a decorative bow, with two red silk tassels hanging on the left-hand side. A distinction is the Higher Doctorate, which has a rich gold tassel.
Queensland University of Technology
The basic robes of Queensland University of Technology (QUT) are the black Cambridge-style robes common throughout the Queensland universities, along with the black mortar board. All undergraduates wear these robes at official academic functions.
Robes
Three styles of robes are worn at academic functions. The first type is a black Cambridge gown. It is worn by undergraduates, Associate Diploma/Degree, Diploma, Bachelors, Honors, Graduate Diploma/Certificate.
The second type is a black Cambridge-style master's gown. This is worn by Professional Doctorate holders, Doctors of Philosophy and members of the University Council. The differences in this case are that Professional Doctorate holders' gowns have facings in the colors of their faculty, whereas those graduating with a Doctorate of Philosophy have crimson facings. Councillors wear this gown with gold bullion and cream embossed trimming and vertical gold bullion edging.
The third type is a Cambridge doctor's gown. These are scarlet in colour and are lined in the appropriate colors. Higher Doctorate holders wear this gown with gold linings and facings. Those who are Doctors of the University wear it with facings and linings in University Blue.
Several other distinctive gowns are worn however they are reserved to the Alumni President, the Vice-Chancellor and the Chancellor. The gown of the Alumni President is a university blue Cambridge gown embellished with 1 cm wide silver braid. The Chancellors and Vice-Chancellors robes are university blue and Cambridge patterned. The facings, sleeve edges, shoulders and collar are embellished with braids. These braids' design is based on the floral emblem of Queensland, the Cooktown Orchid. The differences between the Chancellor and the Vice-Chancellor are in the color of the braid, gold for the Chancellor and silver for the Vice-Chancellor.
Hoods
All graduates of the university wear hoods. The hoods are black in all cases below the rank of the Doctorate of Philosophy. A band of silk placed 50mm from the edge of the hood in the colour of the faculty denotes the rank of the graduate. Associate Degree/Diploma holder's band is 25mm thick, Diploma 50mm thick, Bachelors and Honours are 100mm thick. Graduate Certificate and Diploma hoods are the same as Bachelor hoods with the addition of a border of pearl silk 25mm thick.
Professional Doctorates, Masters, Doctorates of Philosophy and members of the University council wear hoods of black. Professional Doctorates and Masters hoods are fully lined with the silk in the colour of the appropriate faculty. Doctorates of Philosophy wear hoods fully lined in scarlet and Members of the University Council wear hoods fully lined in gold.
Higher Doctorates and Doctors of the University wear a red Cambridge style hood. These are lined in gold and university blue respectively. The Alumni President, Chancellor and Vice-Chancellor do not wear hoods.
Headwear
Black mortar boards are worn by all graduates up to and inclusive of the master's degree.
All Doctorate holders wear black velvet Tudor bonnets, with cords and tassels in distinguishing colours. Professional Doctorates' cords and tassels are in the colour of the faculty, Doctorates of Philosophy are red, and Higher Doctorates and Doctors of the University have gold cords and tassels.
University blue trenchers are worn by the Alumni President, Chancellor and Vice-Chancellor. The tassels of the Alumni President and the Vice-Chancellor are silver; the Chancellor's is gold. The trenchers of the Chancellor and Vice-Chancellor are also edged in gold or silver respectively.
Faculty and Campus Colours
Faculties and the various campuses of the university have distinctive colours. The colour of the university itself is a deep blue (PMS541). Faculties at the university and their respective colours are:
Business - Blue (PMS279)
Creative Industries - New Fuchsia (PMS247)
Education - Green (PMS341)
Health - Orange (PMS165)
Humanities Program - Eggshell Blue (PMS304)
Law - Grey (PMS430)
Science and Engineering - Teal (PMS327)
James Cook University
The following represent the prescribed academic dress standards in respect of officers and graduates of the University.
Robes
The robes are Cambridge style. Black robes are worn by Bachelors, Honours and Masters holders. All Doctorate holders wear royal blue Cambridge doctorate gowns. Professional Doctorates wear gowns with facings and linings in red; Doctorates of Philosophy wear gowns with facings and linings of red faille. The University offers three types of honorary doctorates: an Honorary Doctorate, an Honorary Doctorate honoris causa, and an Honorary Doctor of the University. The doctorate robes are distinguished by their facings. Honorary Doctorates have facings and linings of blue satin, Honorary Doctorates honoris causa are similar with the addition of a gold edging stripe in satin, and Honorary Doctors of the University have linings and facings in gold satin.
Members of the University Council wear black Cambridge style masters gowns. The Chancellor and Vice-Chancellor wear robes of light weight dark blue wool, trimmed and lined with gold or silver braid respectively.
Hoods
All hoods are of Oxford style. Bachelors, Honours and Masters Graduates hoods are black. Bachelor hoods are lined in blue satin, Honours hoods are the same with an edging strip in gold satin. Masters hoods are fully lined in gold satin. Members of the University council wear a Masters hood entirely lined with silver satin rather than gold.
The hoods of Professional Doctorates are made of red fabric lined in royal blue faille. All other doctorate hoods are in royal blue fabric distinguished by their linings. Doctorates of Philosophy are lined with red faille, Honorary Doctorates are lined with blue satin, Honorary Doctorates honoris causa are lined with blue satin with an edging strip in gold, and Honorary Doctors of the University have hoods lined entirely with gold satin.
Headwear
Black trenchers are worn by Bachelors, Honours and Masters graduates. Members of the University Council wear black trenchers with silver tassels. Professional Doctorates achieved by coursework also wear trenchers, these being in royal blue with royal blue tassels.
All other doctorates wear Tudor Bonnets. Professional Doctorates achieved by research wear a red Tudor bonnet with a gold cord and tassels. All other doctorate bonnets are of royal blue velvet with a gold cord and two gold tassels.
The Chancellor and Vice-Chancellor wear Tudor bonnets of dark blue velvet with braid edging, cord and tassels in either gold or silver respectively.
Central Queensland University
Central Queensland University is different from most universities in Queensland in that hoods are not worn by any graduate under the rank of doctorate.
Robes
All graduates below the rank of Masters, members of the University Council and Companions of the University wear a black Bachelor Cambridge style robe. Masters graduates wear Cambridge Masters style robes in black. CQU offers several different Doctorate programs. All of these share the same robe design. They are in the Oxford Doctorate pattern of green cloth with facings of blue, red and gold satin. The sleeves are fully lined with University blue satin, with red and gold satin edgings.
Honorary Doctorates wear Cambridge Doctorate patterned robes of scarlet cloth with facings in silver satin, sleeves fully lined in silver satin, fastened with a gold cord and button. Doctors of the university wear Cambridge Doctorate pattern of scarlet cloth with facings in University blue satin, edged with University gold, sleeves fully lined in University blue satin, fastened with a gold cord and button.
The Chancellor and Vice-Chancellor wear a black gown trimmed with braid on the front panels, the hem, and the sleeves. The robe also has a sailor-collar trimmed with braid and embroidered with an 18 cm CQU full-colour logo using metallic thread. All braid and metallic thread is either gold or silver respectively.
Stoles
Graduates below the rank of Doctor, Companions of the University and members of the University Council wear stoles. Companions wear a full length stole of university blue, with the CQU logo embroidered in full colour on the top left side. Councillors' stoles are gold in colour, full length, with the CQU logo on the top left hand side. They also have two 2 cm strips of satin diagonally across the bottom right hand side; one of University green and one of University blue. Graduates' stoles are full length with the full colour CQU logo embroidered at the top on both sides.
Graduate Stole Colours
The stole colours denote the rank of the degree. The Australian Standard Colour names and Degree ranks are:
Certificates/Advanced Certificates/Diplomas - Driftwood
Advanced Diplomas & Associate Degrees - Sunflower
Bachelors - Opaline
Bachelors With Honours - Jade
Graduate Certificate - Powder Blue
Graduate Diplomas - Ultramarine
Masters - Violet.
Hoods
Professional Doctorates, Doctorates of Education and Doctorates of Business at CQU have hoods in the Oxford pattern made of green cloth, fully lined with University gold satin with red and blue satin edging, and fastened onto the front of the gown with 'hoodlinks' designed according to CQU's Logo.
Doctors of Philosophy have similar hoods, also in the Oxford Doctorate style made of blue cloth, fully lined with University green satin with red and gold satin edging, and fastened onto the front of the gown with 'hoodlinks' designed according to CQU's Logo.
Honorary doctorates wear a hood in the Cambridge pattern, made in scarlet cloth, and fully lined in silver satin. Doctors of the University wear a similar hood in scarlet; however it is lined in university blue and edged in gold satin.
Headwear
All graduates below the rank of a Doctorate wear black mortar boards, as do Companions of the university. Members of the University Council wear a black mortar board with a gold tassel.
All doctorates aside from the Doctorate of Philosophy wear Tudor bonnets of green with a gold cord and tassels. The Doctorate of Philosophy wears a blue bonnet with gold cord and tassels. Honorary Doctorate holders and Doctors of the University wear a black Tudor bonnet with a gold cord and tassels.
The Chancellor and Vice Chancellor both wear Tudor bonnets with cord and tassels in gold or silver respectively.
University of Southern Queensland
Robes
All graduates below the rank of Master wear a black Cambridge style Bachelor's robe. Masters wear a black Cambridge style Masters robe.
Doctorates of Philosophy and other research Doctorates wear a black Cambridge style doctoral gown, with facings of cardinal red, or the colour of the faculty, school or field of study respectively. All other doctorates awarded wear a Cambridge Masters gown in black. Several honorary doctorates are offered at USQ. Honorary (higher) Doctorates honoris causa (such as D.Sc. or D.Litt. et cetera) wear a scarlet Cambridge doctoral style robe with facings and sleeve linings in the Faculty, School or field of study colour and edged with gold. Doctors of the University honoris causa wear a scarlet Cambridge doctoral style robe with facings of vitrix blue and edged with gold and sleeves fully lined in vitrix blue and edged with gold.
Fellows of the university wear black Cambridge styled Bachelors robes with facings of vitrix blue and linings of gold. Members of the University Council wear something similar with the addition of the USQ logo embroidered on the left facing.
The Chancellor, Vice-Chancellor and all Deputy Chancellors wear similar robes: a black grosgrain gown, with facings down each side in front and a square collar at the back of the same material, edged with gold lace; the differences lie in the placement of the lace on the sleeves. The spacing of the gold lace on the Chancellor's sleeves is 4.5 cm, as opposed to the Deputy Chancellors' gold lace spacing of 11 cm. The Vice-Chancellor, however, has gold laced trim to the sleeve openings.
Stoles
Stoles are only worn by associate degree, Diploma and Advanced Diploma holders. The stole is black, lined to 5 cm with the colour of the field of study. One exception to this is the associate degree in education which is black and fully lined in the colour of the faculty, school, or field of study.
Hoods
All hoods at USQ are full Cambridge style hoods. Bachelors, Honours, Graduate and Postgraduate Diplomas and Combined degrees all wear black hoods, with a 10 cm lining of the colour of the field of study. Those taking combined degrees wear this hood but also with a recognition cord of the colour appropriate to the second field of study.
Masters, research Doctorates other than PhDs and any other non-honorary Doctorate awarded by USQ are assigned hoods fully lined with the colour of the field of study. There are several exceptions to this rule however. The Master of Business Administration hood is fully lined in grey with Indian yellow edging, the Master of Business Information Technology hood is fully lined in grey with signal red edging, and the Master of Philosophy hood is fully lined in cardinal red. Doctorates of Philosophy wear a full Cambridge style hood fully lined in cardinal red.
Those offered honorary doctorates at the university wear full Cambridge hoods with an edging 2.5 cm deep in gold satin. The Honorary Higher Doctorates hoods are lined in the colour of their field of study. Doctors of the university have hoods lined in the vitrix blue.
Members of the Council and the Chancellors do not wear hoods.
Headwear
Black cloth mortar boards are worn by all graduates below the rank of Doctorate, honorary Fellows of the University, and University Council members. All Chancellors wear a black cloth mortarboard with gold braid, a metallic dome button and a full metallic tassel.
Black Tudor bonnets are worn by all Doctorate holders distinguished by the colour of the cord and tassels. Doctors of Philosophy wear the red cord and tassels, all other non-honorary Doctorates wear a cord and tassels in the colour of their field of study. All honorary Doctorates at USQ wear bonnets with a gold cord and gold tassels.
Colours
USQ denotes the faculties and fields of study with colour, as do many universities. These colour names are taken from the British Colour Council Dictionary of Colour Standards. Here follows a list of colours appropriate to the Faculties or the subjects which they offer.
The Faculty of Arts - Pearl White
The Faculty of Education - Lilac
The Faculty of Business:
Business - French Grey
Commerce - Indian Yellow
Information Technology - Signal Red
The Faculty of Engineering and Surveying
Engineering - Claret
Surveying - Maroon
The Faculty of Sciences
Sciences (including Psychology) - Spectrum Blue
Nursing - Peacock Green
Information Technology - Signal Red
Bond University
Robes
All robes worn are Cambridge style. Black Bachelor gowns are worn by all below the rank of Doctor.
Hoods
All Bachelor hoods are black with silk lining of the faculty colour. Masters hoods are dark blue with lining of the Faculty colour.
Headwear
All award holders up to and including Masters wear a black trencher with a black tassel. Doctorates of Philosophy wear a black velvet Tudor bonnet.
Colours
Faculty of Law - lilac
Faculty of Society and Design - white
Faculty of Business - gold
Faculty of Health Sciences and Medicine -
University of the Sunshine Coast
Robes
All robes worn are Cambridge style. Black Bachelor gowns are worn by all below the rank of Master; Senior Fellows of the University also wear these gowns.
Masters style Gowns in black are worn by master's degree holders, the Yeoman Bedell and members of the University Council. Councillors' robes have facings in rifle green. The Bedell wears a Councillors' robe with black ornaments.
All recipients of doctoral degrees wear doctoral robes. The gowns of honorary Doctorates are in rifle green. Doctorates of philosophy wear a black robe with facings and sleeve linings in new red. Professional Doctorates wear a black robe with sleeve linings and facings in the colour of the faculty the degree was awarded in.
The Chancellor and Vice-Chancellor wear black woollen robes with trimmings in either gold or silver respectively.
Stoles
Stoles are worn by Senior Fellows of the University and graduates holding Graduate Certificates or Graduate Diplomas. The stole of a Senior Fellow is full length in Rifle Green silk. Those of Graduate Certificates and Graduate Diplomas are full length and black, and are respectively either edged to 2.5 cm or fully lined with the colour of the faculty.
Hoods
Hoods are worn by all other graduates of the university. All hoods are black with one exception. Honorary Doctors of the university wear a Rifle Green Cambridge style hood lined in silk.
Bachelor's degree holders' hoods are Oxford burgon style and are lined with silk to a depth of 15 cm in the colour of their faculty. In the case of honours degrees, however, a 2 cm thick strip of black is added, commencing 3 cm from the edge of the silk lining. In the case of combined bachelor's degrees the lining of the hood is in two colours: on the right side of the hood the silk lining is the colour of the first named faculty, and on the left the silk is that of the second named faculty.
Master's degrees by coursework and by research are differentiated at USC. Masters hoods are Oxford burgon style, fully lined with silk in the colour of the faculty; those obtained through coursework, however, have a 2 cm black band of silk 3 cm from the edge. All doctorate hoods are in the Cambridge style. Professional Doctorate holders and Honorary Doctorates from the Faculties wear a hood lined in the colour of the faculty. Doctorates of Philosophy wear hoods fully lined in New Red.
The University of the Sunshine Coast has changed some details of its hoods; see the university's website for more information.
Headwear
Black cloth trenchers are worn by all graduates below the rank of Doctorate. A black cloth trencher cap is also worn by Senior University Fellows and the Yeoman Bedell.
All other ranks of the university wear black velvet Tudor Bonnets differentiated by the colour of the cord and tassels. Graduates with professional doctorates have cords and tassels in black. Those holding Doctorates of Philosophy have a red cord and tassels. Honorary Higher Doctorates awarded by the university have cords and tassels of the colour of the faculty. Doctors of the University and University Council members have tassels and a cord in Rifle green. The Chancellor and Vice Chancellor have bonnets with cords and tassels in gold or silver respectively.
Award Colours
Violet - Sciences, Environmental and Related Studies
Academic Green - Business, Information Technology and Commerce
Spectrum Orange - Engineering and Related Technologies
White - Health, Nursing and Sport sciences
New Gold - Education
Royal Blue - Law, Criminology, Creative industries, Design, Communication and Social sciences
Griffith University
Three-quarter length academic gowns, open down the front, are the standard academic dress, as is a badge of the Griffith University logo. For those who graduate from a Certificate or an Advanced Certificate program, this is full academic dress; for those who graduate from an Associate Diploma or a Diploma program, a black Cambridge hood, with white silk edging, is added to the gown. No trencher is worn for non-degree graduates.
Those graduating from a bachelor's degree (with or without honours) or a Graduate Certificate program wear the gown, the badge, the black Cambridge hood with white silk edging, and also wear the trencher. Master's degree graduates wear all of the aforementioned, except that the gown is faced, on both sides of the opening, with 10 cm of white silk.
Doctoral degree graduates have significant variations, depending on the field or reason that the degree is awarded. Doctors of Philosophy wear the black gown, with 10 cm red silk facing on either side and within the sleeves, a black hood with red edging, and a black velvet Tudor bonnet with red braiding. Other doctorate graduates wear a red gown, with the inside of the sleeves lined in white, and a red Cambridge hood with white silk edging, along with the black Tudor bonnet with matching braiding. Doctors of the University essentially receive the inverse of the PhD attire, wearing a red gown, with black facing and sleeve lining, but still with a black Tudor bonnet with red braiding.
See also
Academic dress
Groves classification system
Swotvac
References
Academic Dress at UQ
Campus Life - Academic Dress at Griffith University
Academic Dress Regulations at QUT
University of the Sunshine Coast - Academic Dress Policy
Academic Dress Regulations at the University of Southern Queensland
Central Queensland University Academic Regalia
Academic Dress Regulations at James Cook University
External links
Academic Dress Hire Service
QUT Student Guild: About Academic Gown Hire
Queensland, Australia
Java (software platform)

Java is a set of computer software and specifications developed by James Gosling at Sun Microsystems, which was later acquired by the Oracle Corporation, that provides a system for developing application software and deploying it in a cross-platform computing environment. Java is used in a wide variety of computing platforms, from embedded devices and mobile phones to enterprise servers and supercomputers. Java applets, which are less common than standalone Java applications, were commonly run in secure, sandboxed environments; embedded in HTML pages, they provided many features of native applications and made web pages more dynamic.
Writing in the Java programming language is the primary way to produce code that will be deployed as byte code in a Java virtual machine (JVM); byte code compilers are also available for other languages, including Ada, JavaScript, Python, and Ruby. In addition, several languages have been designed to run natively on the JVM, including Clojure, Groovy, and Scala. Java syntax borrows heavily from C and C++, but object-oriented features are modeled after Smalltalk and Objective-C. Java eschews certain low-level constructs such as pointers and has a very simple memory model where objects are allocated on the heap (while some implementations e.g. all currently supported by Oracle, may use escape analysis optimization to allocate on the stack instead) and all variables of object types are references. Memory management is handled through integrated automatic garbage collection performed by the JVM.
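The reference semantics described above can be seen in a few lines of Java. This is a minimal sketch; the class and method names (ReferenceDemo, Point, demo) are illustrative only:

```java
// Minimal sketch: object variables in Java are references, not values.
public class ReferenceDemo {
    static class Point {
        int x;
        Point(int x) { this.x = x; }
    }

    static int demo() {
        Point a = new Point(1); // object allocated on the heap
        Point b = a;            // b is another reference to the same object
        b.x = 42;               // mutating through b is visible through a
        return a.x;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 42
        // Once all references to an object are dropped, it becomes
        // unreachable and the JVM's garbage collector may reclaim it.
    }
}
```

Because `a` and `b` refer to the same heap object, a mutation through one reference is visible through the other; there is no pointer arithmetic and no manual deallocation.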
On November 13, 2006, Sun Microsystems made the bulk of its implementation of Java available under the GNU General Public License (GPL).
The latest version is Java 17, released in September 2021. As an open source platform, Java has many distributors, including Amazon, IBM, Azul Systems, and AdoptOpenJDK. Distributions include Amazon Corretto, Zulu, AdoptOpenJDK, and Liberica. Oracle itself distributes Java 8 and also makes available Java 11, a currently supported long-term support (LTS) version released on September 25, 2018. Oracle (and others) "highly recommend that you uninstall older versions of Java" than Java 8, because of serious risks due to unresolved security issues. Since Java 9 (and 10, 12, 13, 14, 15 and 16) are no longer supported, Oracle advises users to "immediately transition" to a supported version. Oracle released the last free-for-commercial-use public update for the legacy Java 8 LTS in January 2019, and will continue to support Java 8 with public updates for personal use indefinitely. Oracle extended support for Java 6 ended in December 2018.
Platform
The Java platform is a suite of programs that facilitate developing and running programs written in the Java programming language. A Java platform includes an execution engine (called a virtual machine), a compiler and a set of libraries; there may also be additional servers and alternative libraries that depend on the requirements. Java platforms have been implemented for a wide variety of hardware and operating systems with a view to enable Java programs to run identically on all of them. Different platforms target different classes of device and application domains:
Java Card: A technology that allows small Java-based applications (applets) to be run securely on smart cards and similar small-memory devices.
Java ME (Micro Edition): Specifies several different sets of libraries (known as profiles) for devices with limited storage, display, and power capacities. It is often used to develop applications for mobile devices, PDAs, TV set-top boxes, and printers.
Java SE (Standard Edition): For general-purpose use on desktop PCs, servers and similar devices.
Jakarta EE (Enterprise Edition): Java SE plus various APIs which are useful for multi-tier client–server enterprise applications.
The Java platform consists of several programs, each of which provides a portion of its overall capabilities. For example, the Java compiler, which converts Java source code into Java bytecode (an intermediate language for the JVM), is provided as part of the Java Development Kit (JDK). The Java Runtime Environment (JRE), complementing the JVM with a just-in-time (JIT) compiler, converts intermediate bytecode into native machine code on the fly. The Java platform also includes an extensive set of libraries.
The essential components in the platform are the Java language compiler, the libraries, and the runtime environment in which Java intermediate bytecode executes according to the rules laid out in the virtual machine specification.
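The compile-then-execute pipeline can be demonstrated with the JDK's compiler API (javax.tools), which exposes the bundled javac programmatically. This is a sketch under two assumptions: the class name CompileDemo is made up, and it must run on a full JDK (a bare JRE returns no system compiler):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompileDemo {
    // Returns true if the bundled compiler produced Hello.class.
    static boolean compiles() throws Exception {
        Path dir = Files.createTempDirectory("jdkdemo");
        Path src = dir.resolve("Hello.java");
        Files.writeString(src,
            "public class Hello {"
          + "  public static void main(String[] a) {"
          + "    System.out.println(\"hello from bytecode\");"
          + "  }"
          + "}");
        // ToolProvider exposes the same javac that ships with the JDK;
        // it translates Java source into platform-neutral bytecode
        // (.class files), written next to the source by default.
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        int status = javac.run(null, null, null, src.toString());
        return status == 0 && Files.exists(dir.resolve("Hello.class"));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compiles());
    }
}
```

The resulting Hello.class contains bytecode that any conforming JVM can execute, regardless of the OS on which it was compiled.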
Java Virtual Machine
The heart of the Java platform is the "virtual machine" that executes Java bytecode programs. This bytecode is the same no matter what hardware or operating system the program runs under. However, new versions, such as Java 10 (and earlier), have made small changes to the bytecode format, meaning the bytecode is in general only forward compatible. The Java Virtual Machine (JVM) contains a JIT (just-in-time) compiler, which translates Java bytecode into native processor instructions at run time and caches the native code in memory during execution.
The use of bytecode as an intermediate language permits Java programs to run on any platform that has a virtual machine available. The use of a JIT compiler means that Java applications, after a short delay during loading and once they have "warmed up" by being all or mostly JIT-compiled, tend to run about as fast as native programs.
Since JRE version 1.2, Sun's JVM implementation has included a just-in-time compiler instead of an interpreter.
Although Java programs are cross-platform or platform independent, the code of the Java Virtual Machines (JVM) that execute these programs is not. Every supported operating platform has its own JVM.
Class libraries
In most modern operating systems (OSs), a large body of reusable code is provided to simplify the programmer's job. This code is typically provided as a set of dynamically loadable libraries that applications can call at runtime. Because the Java platform is not dependent on any specific operating system, applications cannot rely on any of the pre-existing OS libraries. Instead, the Java platform provides a comprehensive set of its own standard class libraries containing many of the same reusable functions commonly found in modern operating systems. Most of the system library is also written in Java. For instance, the Swing library paints the user interface and handles the events itself, eliminating many subtle differences between how different platforms handle components.
The Java class libraries serve three purposes within the Java platform. First, like other standard code libraries, the Java libraries provide the programmer a well-known set of functions to perform common tasks, such as maintaining lists of items or performing complex string parsing. Second, the class libraries provide an abstract interface to tasks that would normally depend heavily on the hardware and operating system. Tasks such as network access and file access are often heavily intertwined with the distinctive implementations of each platform. The java.net and java.io libraries implement an abstraction layer in native OS code, then provide a standard interface for the Java applications to perform those tasks. Finally, when some underlying platform does not support all of the features a Java application expects, the class libraries work to gracefully handle the absent components, either by emulation to provide a substitute, or at least by providing a consistent way to check for the presence of a specific feature.
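As a small illustration of this abstraction, the java.nio.file API below writes and reads back a temporary file with identical code on any supported OS; the native path and I/O details are hidden behind the library. The class and method names (IoDemo, roundTrip) are made up for the sketch:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class IoDemo {
    // Writes two lines to a temp file and reads them back.
    static List<String> roundTrip() throws Exception {
        // java.nio.file hides each OS's native file APIs (and path
        // separator conventions) behind one standard interface.
        Path f = Files.createTempFile("demo", ".txt");
        Files.write(f, List.of("alpha", "beta"));
        List<String> lines = Files.readAllLines(f);
        Files.delete(f);
        return lines;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // prints [alpha, beta]
    }
}
```

The same bytecode runs unchanged on Windows, Linux or macOS; only the library's native implementation differs underneath.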
Languages
The word "Java", alone, usually refers to Java programming language that was designed for use with the Java platform. Programming languages are typically outside of the scope of the phrase "platform", although the Java programming language was listed as a core part of the Java platform before Java 7. The language and runtime were therefore commonly considered a single unit. However, an effort was made with the Java 7 specification to more clearly treat the Java language and the Java virtual machine as separate entities, so that they are no longer considered a single unit.
Third parties have produced many compilers or interpreters that target the JVM. Some of these are for existing languages, while others are for extensions to the Java language. These include:
BeanShell – A lightweight scripting language for Java (see also JShell)
Ceylon – An object-oriented, strongly statically typed programming language with an emphasis on immutability
Clojure – A modern, dynamic, and functional dialect of the Lisp programming language on the Java platform
Gosu – A general-purpose Java Virtual Machine-based programming language released under the Apache License 2.0
Groovy – A fully Java interoperable, Java-syntax-compatible, static and dynamic language with features from Python, Ruby, Perl, and Smalltalk
JRuby – A Ruby interpreter
Jython – A Python interpreter
Kotlin – An industrial programming language for JVM with full Java interoperability
Rhino – A JavaScript interpreter
Scala – A multi-paradigm programming language with non-Java compatible syntax designed as a "better Java"
Similar platforms
The success of Java and its write once, run anywhere concept has led to other similar efforts, notably the .NET Framework, appearing since 2002, which incorporates many of the successful aspects of Java. .NET was built from the ground up to support multiple programming languages, while the Java platform was initially built to support only the Java language, although many other languages have since been made for the JVM. Like Java, .NET languages compile to byte code and are executed by the Common Language Runtime (CLR), which is similar in purpose to the JVM. Like the JVM, the CLR provides memory management through automatic garbage collection, and allows .NET byte code to run on multiple operating systems.
.NET included a Java-like language first named J++, then called Visual J#, that was incompatible with the Java specification. It was discontinued in 2007, and support for it ended in 2015.
Java Development Kit
The Java Development Kit (JDK) is a Sun product aimed at Java developers. Since the introduction of Java, it has been by far the most widely used Java software development kit (SDK). It contains a Java compiler, a full copy of the Java Runtime Environment (JRE), and many other important development tools.
Java Runtime Environment
The Java Runtime Environment (JRE) released by Oracle is a freely available software distribution containing a stand-alone JVM (HotSpot), the Java standard library (Java Class Library), a configuration tool, and—until its discontinuation in JDK 9—a browser plug-in. It is the most common Java environment installed on personal computers in the laptop and desktop form factor. Mobile phones including feature phones and early smartphones that ship with a JVM are most likely to include a JVM meant to run applications targeting Micro Edition of the Java platform. Meanwhile, most modern smartphones, tablet computers, and other handheld PCs that run Java apps are most likely to do so through support of the Android operating system, which includes an open source virtual machine incompatible with the JVM specification. (Instead, Google's Android development tools take Java programs as input and output Dalvik bytecode, which is the native input format for the virtual machine on Android devices.)
Performance
The JVM specification gives a lot of leeway to implementors regarding the implementation details. Since Java 1.3, JRE from Oracle contains a JVM called HotSpot. It has been designed to be a high-performance JVM.
To speed-up code execution, HotSpot relies on just-in-time compilation. To speed-up object allocation and garbage collection, HotSpot uses generational heap.
Generational heap
The Java virtual machine heap is the area of memory used by the JVM for dynamic memory allocation.
In HotSpot the heap is divided into generations:
The young generation stores short-lived objects that are created and, in many cases, quickly garbage collected. It is subdivided into an eden space and two survivor spaces, which hold the objects that have survived the first and subsequent garbage collections.
Objects that persist longer are moved to the old generation (also called the tenured generation).
The permanent generation (or permgen) was used for class definitions and associated metadata prior to Java 8. Permanent generation was not part of the heap. The permanent generation was removed from Java 8.
Originally there was no permanent generation, and objects and classes were stored together in the same area. But as class unloading occurs much more rarely than objects are collected, moving class structures to a specific area allowed significant performance improvements.
Security
Oracle's JRE is installed on a large number of computers. End users with an out-of-date version of JRE therefore are vulnerable to many known attacks. This led to the widely shared belief that Java is inherently insecure. Since Java 1.7, Oracle's JRE for Windows includes automatic update functionality.
Before the discontinuation of the Java browser plug-in, any web page might have potentially run a Java applet, which provided an easily accessible attack surface to malicious web sites. In 2013 Kaspersky Labs reported that the Java plug-in was the method of choice for computer criminals. Java exploits are included in many exploit packs that hackers deploy onto hacked web sites. Java applets were removed in Java 11, released on September 25, 2018.
History
The Java platform and language began as an internal project at Sun Microsystems in December 1990, providing an alternative to the C++/C programming languages. Engineer Patrick Naughton had become increasingly frustrated with the state of Sun's C++ and C application programming interfaces (APIs) and tools, as well as with the way the NeWS project was handled by the organization. Naughton informed Scott McNealy about his plan of leaving Sun and moving to NeXT; McNealy asked him to pretend he was God and send him an e-mail explaining how to fix the company. Naughton envisioned the creation of a small team that could work autonomously without the bureaucracy that was stalling other Sun projects. McNealy forwarded the message to other important people at Sun, and the Stealth Project started.
The Stealth Project was soon renamed to the Green Project, with James Gosling and Mike Sheridan joining Naughton. Together with other engineers, they began work in a small office on Sand Hill Road in Menlo Park, California. They aimed to develop new technology for programming next-generation smart appliances, which Sun expected to offer major new opportunities.
The team originally considered using C++, but rejected it for several reasons. Because they were developing an embedded system with limited resources, they decided that C++ needed too much memory and that its complexity led to developer errors. The language's lack of garbage collection meant that programmers had to manually manage system memory, a challenging and error-prone task. The team also worried about the C++ language's lack of portable facilities for security, distributed programming, and threading. Finally, they wanted a platform that would port easily to all types of devices.
Bill Joy had envisioned a new language combining Mesa and C. In a paper called Further, he proposed to Sun that its engineers should produce an object-oriented environment based on C++. Initially, Gosling attempted to modify and extend C++ (a proposed development that he referred to as "C++ ++ --") but soon abandoned that in favor of creating a new language, which he called Oak, after the tree that stood just outside his office.
By the summer of 1992, the team could demonstrate portions of the new platform, including the Green OS, the Oak language, the libraries, and the hardware. Their first demonstration, on September 3, 1992, focused on building a personal digital assistant (PDA) device named Star7 that had a graphical interface and a smart agent called "Duke" to assist the user. In November of that year, the Green Project was spun off to become Firstperson, a wholly owned subsidiary of Sun Microsystems, and the team relocated to Palo Alto, California. The Firstperson team had an interest in building highly interactive devices, and when Time Warner issued a request for proposal (RFP) for a set-top box, Firstperson changed their target and responded with a proposal for a set-top box platform. However, the cable industry felt that their platform gave too much control to the user, so Firstperson lost their bid to SGI. An additional deal with The 3DO Company for a set-top box also failed to materialize. Unable to generate interest within the television industry, the company was rolled back into Sun.
Java meets the Web
In June and July 1994, after three days of brainstorming with John Gage (the Director of Science for Sun), Gosling, Joy, Naughton, Wayne Rosing, and Eric Schmidt, the team re-targeted the platform for the World Wide Web. They felt that with the advent of graphical web browsers like Mosaic the Internet could evolve into the same highly interactive medium that they had envisioned for cable TV. As a prototype, Naughton wrote a small browser, WebRunner (named after the movie Blade Runner), renamed HotJava in 1995.
Sun renamed the Oak language to Java after a trademark search revealed that Oak Technology used the name Oak. Although Java 1.0a became available for download in 1994, the first public release of Java, Java 1.0a2 with the HotJava browser, came on May 23, 1995, announced by Gage at the SunWorld conference. Accompanying Gage's announcement, Marc Andreessen, Executive Vice President of Netscape Communications Corporation, unexpectedly announced that Netscape browsers would include Java support. On January 9, 1996, Sun Microsystems formed the JavaSoft group to develop the technology.
While Java applets are no longer the most popular use of Java in web browsers (Java is now used more on the server side), nor the most popular way to run code client-side (JavaScript has overtaken them), it is still possible to run Java (or other JVM languages such as Kotlin) in web browsers, even after browsers dropped JVM support, using tools such as TeaVM.
Version history
The Java language has undergone several changes since the release of JDK (Java Development Kit) 1.0 on January 23, 1996, as well as numerous additions of classes and packages to the standard library. Since J2SE 1.4 the Java Community Process (JCP) has governed the evolution of the Java Language. The JCP uses Java Specification Requests (JSRs) to propose and specify additions and changes to the Java platform. The Java Language Specification (JLS) specifies the language; changes to the JLS are managed under JSR 901.
Sun released JDK 1.1 on February 19, 1997. Major additions included an extensive retooling of the AWT event model, the addition of inner classes to the language, JavaBeans, and JDBC.
J2SE 1.2 (December 8, 1998) – Codename Playground. This and subsequent releases through J2SE 5.0 were rebranded Java 2 and the version name "J2SE" (Java 2 Platform, Standard Edition) replaced JDK to distinguish the base platform from J2EE (Java 2 Platform, Enterprise Edition) and J2ME (Java 2 Platform, Micro Edition). Major additions included reflection, a collections framework, Java IDL (an interface description language implementation for CORBA interoperability), and the integration of the Swing graphical API into the core classes. A Java Plug-in was released, and Sun's JVM was equipped with a JIT compiler for the first time.
J2SE 1.3 (May 8, 2000) – Codename Kestrel. Notable changes included the bundling of the HotSpot JVM (HotSpot had first been released in April 1999 as an add-on JVM for J2SE 1.2), JavaSound, Java Naming and Directory Interface (JNDI) and Java Platform Debugger Architecture (JPDA).
J2SE 1.4 (February 6, 2002) – Codename Merlin. This became the first release of the Java platform developed under the Java Community Process as JSR 59. Major changes included regular expressions modeled after Perl, exception chaining, an integrated XML parser and XSLT processor (JAXP), and Java Web Start.
J2SE 5.0 (September 30, 2004) – Codename Tiger. It was originally numbered 1.5, which is still used as the internal version number. Developed under JSR 176, Tiger added several significant new language features including the for-each loop, generics, autoboxing and var-args.
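The Tiger-era language features combine naturally in a few lines. The sketch below (class and method names are illustrative, not from the source) uses generics, autoboxing, var-args, and the for-each loop together:

```java
import java.util.ArrayList;
import java.util.List;

public class TigerFeatures {
    // Var-args: the caller may pass any number of int arguments.
    static int sum(int... values) {
        int total = 0;
        // The for-each loop iterates without an explicit index or Iterator.
        for (int v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        // Generics: the list is statically typed, so no cast is needed on retrieval.
        List<Integer> numbers = new ArrayList<Integer>();
        numbers.add(1); // autoboxing: the int 1 is wrapped into an Integer
        numbers.add(2);
        int first = numbers.get(0); // auto-unboxing back to int
        System.out.println(sum(1, 2, 3) + first); // prints 7
    }
}
```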
Java SE 6 (December 11, 2006) – Codename Mustang. It was bundled with a database manager and facilitates the use of scripting languages with the JVM (such as JavaScript using Mozilla's Rhino engine). As of this version, Sun replaced the name "J2SE" with Java SE and dropped the ".0" from the version number. Other major changes include support for pluggable annotations (JSR 269), many GUI improvements, including native UI enhancements to support the look and feel of Windows Vista, and improvements to the Java Platform Debugger Architecture (JPDA) & JVM Tool Interface for better monitoring and troubleshooting.
Java SE 7 (July 28, 2011) – Codename Dolphin. This version was developed under JSR 336. It added many small language changes including strings in switch, try-with-resources and type inference for generic instance creation. The JVM was extended with support for dynamic languages, while the class library was extended with, among other things, a join/fork framework, an improved new file I/O library and support for new network protocols such as SCTP. Java 7 Update 76 was released in January 2015, with expiration date April 14, 2015.
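A brief sketch of the Java 7 language additions named above (class and method names are illustrative, not from the source):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

public class DolphinFeatures {
    static int code(String level) {
        // Strings in switch, new in Java 7.
        switch (level) {
            case "low":  return 1;
            case "high": return 2;
            default:     return 0;
        }
    }

    public static void main(String[] args) throws IOException {
        // Diamond operator: the type argument on the right is inferred.
        Map<String, Integer> levels = new HashMap<>();
        levels.put("low", code("low"));

        // try-with-resources: the reader is closed automatically on exit.
        try (BufferedReader r = new BufferedReader(new StringReader("high"))) {
            System.out.println(code(r.readLine())); // prints 2
        }
    }
}
```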
In June 2016, after the last public update of Java 7, "remotely exploitable" security bugs in Java 6, 7, and 8 were announced.
Java SE 8 (March 18, 2014) – Codename Kenai. Notable changes include language-level support for lambda expressions (closures) and default methods, the Project Nashorn JavaScript runtime, a new Date and Time API inspired by Joda-Time, and the removal of PermGen. This version is not officially supported on the Windows XP platform; however, due to the end of Java 7's lifecycle it is the recommended version for XP users, for whom only an unofficial manual installation method had been described (for Windows XP SP3). JDK 8, the development platform for Java, also includes a fully functioning Java Runtime Environment. Java 8 is supported on Windows Server 2008 R2 SP1, Windows Vista SP2 and Windows 7 SP1, Ubuntu 12.04 LTS and higher, and some other operating systems.
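A brief sketch of the Java 8 additions named above (class and interface names are illustrative, not from the source):

```java
import java.time.LocalDate;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class KenaiFeatures {
    // Default methods: interfaces may now carry an implementation.
    interface Greeter {
        String name();
        default String greet() { return "Hello, " + name(); }
    }

    public static void main(String[] args) {
        // A lambda expression implements the single abstract method of Greeter.
        Greeter g = () -> "Duke";
        System.out.println(g.greet()); // prints Hello, Duke

        // Streams plus lambdas replace many explicit loops.
        List<Integer> squares = Arrays.asList(1, 2, 3).stream()
                .map(n -> n * n)
                .collect(Collectors.toList());
        System.out.println(squares); // prints [1, 4, 9]

        // java.time: the new Date and Time API inspired by Joda-Time.
        LocalDate epoch = LocalDate.ofEpochDay(0);
        System.out.println(epoch); // prints 1970-01-01
    }
}
```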
Java SE 9 and 10 had higher system requirements, i.e. Windows 7 or Server 2012 (with the minimum certified web browser raised to Internet Explorer 11, alongside other web browsers), and Oracle dropped 32-bit compatibility for all platforms, i.e. only Oracle's "64-bit Java virtual machines (JVMs) are certified".
Java SE 11 was released in September 2018, the first LTS release since the rapid release model was adopted starting with version 9. For the first time, OpenJDK 11 represents the complete source code for the Java platform under the GNU General Public License, and while Oracle still dual-licenses it with an optional proprietary license, there are no code differences or modules unique to the proprietary-licensed version. Java 11 features include two new garbage collector implementations, Flight Recorder for debugging deep issues, and a new HTTP client with WebSocket support.
Java SE 12 was released March 2019.
Java SE 13 was released September 2019.
Java SE 14 was released March 2020.
Java SE 15 was released September 2020.
Java SE 16 was released March 2021.
Java SE 17 was released September 2021.
In addition to language changes, significant changes have been made to the Java class library over the years, which has grown from a few hundred classes in JDK 1.0 to over three thousand in J2SE 5.0. Entire new APIs, such as Swing and Java 2D, have evolved, and many of the original JDK 1.0 classes and methods have been deprecated.
Usage
Desktop use
According to Oracle in 2010, the Java Runtime Environment was found on over 850 million PCs. Microsoft has not bundled a Java Runtime Environment (JRE) with its operating systems since Sun Microsystems sued Microsoft for adding Windows-specific classes to the bundled Java runtime environment, and for making the new classes available through Visual J++. Apple no longer includes a Java runtime with OS X as of version 10.7, but the system prompts the user to download and install it the first time an application requiring the JRE is launched. Many Linux distributions include the OpenJDK runtime as the default virtual machine, negating the need to download the proprietary Oracle JRE.
Some Java applications are in fairly widespread desktop use, including the NetBeans and Eclipse integrated development environments, and file sharing clients such as LimeWire and Vuze. Java is also used in the MATLAB mathematics programming environment, both for rendering the user interface and as part of the core system. Java provides cross platform user interface for some high end collaborative applications such as Lotus Notes.
Oracle planned to first deprecate the separately installable Java browser plug-in from the Java Runtime Environment in JDK 9 and then remove it completely from a future release, forcing web developers to use an alternative technology.
Mascot
Duke is Java's mascot.
When Sun announced that Java SE and Java ME would be released under a free software license (the GNU General Public License), they released the Duke graphics under the free BSD license at the same time. A new Duke personality is created every year. For example, in July 2011 "Future Tech Duke" included a bigger nose, a jetpack, and blue wings.
Licensing
The source code for Sun's implementations of Java (i.e. the de facto reference implementation) has been available for some time, but until recently, the license terms severely restricted what could be done with it without signing (and generally paying for) a contract with Sun. As such these terms did not satisfy the requirements of either the Open Source Initiative or the Free Software Foundation to be considered open source or free software, and Sun Java was therefore a proprietary platform.
While several third-party projects (e.g. GNU Classpath and Apache Harmony) created free software partial Java implementations, the large size of the Sun libraries combined with the use of clean room methods meant that their implementations of the Java libraries (the compiler and VM are comparatively small and well defined) were incomplete and not fully compatible. These implementations also tended to be far less optimized than Sun's.
Free software
Sun announced in JavaOne 2006 that Java would become free and open source software, and on October 25, 2006, at the Oracle OpenWorld conference, Jonathan I. Schwartz said that the company was set to announce the release of the core Java Platform as free and open source software within 30 to 60 days.
Sun released the Java HotSpot virtual machine and compiler as free software under the GNU General Public License on November 13, 2006, with a promise that the rest of the JDK (that includes the JRE) would be placed under the GPL by March 2007 ("except for a few components that Sun does not have the right to publish in distributable source form under the GPL"). According to Richard Stallman, this would mean an end to the "Java trap". Mark Shuttleworth called the initial press announcement, "A real milestone for the free software community".
Sun released the source code of the Class library under GPL on May 8, 2007, except some limited parts that were licensed by Sun from third parties who did not want their code to be released under a free software and open-source license. Some of the encumbered parts turned out to be fairly key parts of the platform such as font rendering and 2D rasterising, but these were released as open-source later by Sun (see OpenJDK Class library).
Sun's goal was to replace the parts that remain proprietary and closed-source with alternative implementations and make the class library completely free and open source. In the meantime, a third-party project called IcedTea created a completely free and highly usable JDK by replacing encumbered code with either stubs or code from GNU Classpath. However OpenJDK has since become buildable without the encumbered parts (from OpenJDK 6 b10) and has become the default runtime environment for most Linux distributions.
In June 2008, it was announced that IcedTea6 (as the packaged version of OpenJDK on Fedora 9) has passed the Technology Compatibility Kit tests and can claim to be a fully compatible Java 6 implementation.
Because OpenJDK is under the GPL, it is possible to redistribute a custom version of the JRE directly with software applications, rather than requiring the end user (or their sysadmin) to download and install the correct version of the proprietary Oracle JRE onto each of their systems themselves.
Criticism
In most cases, Java support is unnecessary in Web browsers, and security experts recommend that it not be run in a browser unless absolutely necessary. It was suggested that, if Java is required by a few Web sites, users should have a separate browser installation specifically for those sites.
Generics
When generics were added to Java 5.0, there was already a large framework of classes (many of which were already deprecated), so generics were chosen to be implemented using erasure to allow for migration compatibility and re-use of these existing classes. This limited the features that could be provided by this addition as compared to some other languages. The addition of type wildcards made Java unsound.
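Erasure's practical consequence, that type arguments vanish at run time, can be demonstrated directly (the class name is illustrative, not from the source):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // Because of erasure, both lists share a single runtime class:
        // the type arguments exist only at compile time.
        System.out.println(strings.getClass() == ints.getClass()); // prints true

        // As a consequence, tests like `x instanceof List<String>` and
        // array creation like `new T[10]` are rejected by the compiler,
        // and a cast to List<String> cannot be verified at run time.
    }
}
```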
Unsigned integer types
Java lacks native unsigned integer types. Unsigned data are often generated from programs written in C and the lack of these types prevents direct data interchange between C and Java. Unsigned large numbers are also used in many numeric processing fields, including cryptography, which can make Java less convenient to use for these tasks.
Although it is possible to partially circumvent this problem with conversion code and using larger data types, it makes using Java cumbersome for handling the unsigned data. While a 32-bit signed integer may be used to hold a 16-bit unsigned value with relative ease, a 32-bit unsigned value would require a 64-bit signed integer. Additionally, a 64-bit unsigned value cannot be stored using any integer type in Java because no type larger than 64 bits exists in the Java language. If abstracted using functions, function calls become necessary for many operations which are native to some other languages. Alternatively, it is possible to use Java's signed integers to emulate unsigned integers of the same size, but this requires detailed knowledge of complex bitwise operations.
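Since Java 8, the integer wrapper classes have offered static helper methods that reinterpret signed values as unsigned, which eases but does not remove the inconvenience described above. A minimal sketch (the class name is illustrative):

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        // The bit pattern 0xFFFFFFFF read as a signed int is -1;
        // read as unsigned it is 4294967295.
        int bits = 0xFFFFFFFF;
        System.out.println(bits);                         // prints -1
        System.out.println(Integer.toUnsignedLong(bits)); // prints 4294967295

        // Unsigned arithmetic and comparison helpers (Java 8+).
        System.out.println(Integer.divideUnsigned(bits, 2));      // prints 2147483647
        System.out.println(Integer.compareUnsigned(bits, 1) > 0); // prints true
    }
}
```

Each unsigned operation is still a method call rather than an operator, so the bookkeeping the paragraph describes remains visible in the code.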
Floating point arithmetic
While Java's floating point arithmetic is largely based on IEEE 754 (Standard for Binary Floating-Point Arithmetic), certain features are not supported even when using the strictfp modifier, such as Exception Flags and Directed Roundings capabilities mandated by IEEE Standard 754. Additionally, the extended precision floating-point types permitted in 754 and present in many processors are not permitted in Java.
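A small sketch of this behavior: IEEE 754 special values appear silently, with no trappable exception flags, and directed rounding is available only indirectly, for example through java.math.BigDecimal (the class name below is illustrative):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FloatDemo {
    public static void main(String[] args) {
        // Division by zero raises no exception and sets no accessible flag;
        // it silently yields an IEEE 754 special value.
        System.out.println(1.0 / 0.0); // prints Infinity
        System.out.println(0.0 / 0.0); // prints NaN

        // double is always the 64-bit IEEE format; no extended-precision
        // hardware type (e.g. x87 80-bit) is exposed by the language.
        System.out.println(Math.ulp(1.0)); // 2^-52

        // Directed rounding must be requested through library types such as
        // BigDecimal rather than by changing a hardware rounding mode.
        System.out.println(new BigDecimal("2.5")
                .setScale(0, RoundingMode.HALF_EVEN)); // prints 2
    }
}
```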
Performance
In the early days of Java (before the HotSpot VM was implemented in Java 1.3 in 2000) there were some criticisms of performance. Benchmarks typically reported Java as being about 50% slower than C (a language which compiles to native code).
Java's performance has improved substantially since the early versions. Performance of JIT compilers relative to native compilers has in some optimized tests been shown to be quite similar.
Java bytecode can either be interpreted at run time by a virtual machine, or it can be compiled at load time or runtime into native code which runs directly on the computer's hardware. Interpretation is slower than native execution, and compilation at load time or runtime has an initial performance penalty for the compilation. Modern performant JVM implementations all use the compilation approach, so after the initial startup time the performance is equivalent to native code.
Security
The Java platform provides a security architecture which is designed to allow the user to run untrusted bytecode in a "sandboxed" manner to protect against malicious or poorly written software. This "sandboxing" feature is intended to protect the user by restricting access to certain platform features and APIs which could be exploited by malware, such as accessing the local filesystem, running arbitrary commands, or accessing communication networks.
In recent years, researchers have discovered numerous security flaws in some widely used Java implementations, including Oracle's, which allow untrusted code to bypass the sandboxing mechanism, exposing users to malicious attacks. These flaws affect only Java applications which execute arbitrary untrusted bytecode, such as web browser plug-ins that run Java applets downloaded from public websites. Applications where the user trusts, and has full control over, all code that is being executed are unaffected.
On August 31, 2012, Java 6 and 7 (both supported back then) on Microsoft Windows, OS X, and Linux were found to have a serious security flaw that allowed a remote exploit to take place by simply loading a malicious web page. A subsequent update was later found to be flawed as well.
On January 10, 2013, three computer specialists spoke out against Java, telling Reuters that it was not secure and that people should disable Java. Jaime Blasco, Labs Manager with AlienVault Labs, stated that "Java is a mess. It's not secure. You have to disable it."
This vulnerability affected some versions of Java, and it was unclear whether others were affected, so it was suggested that consumers disable it. Security alerts from Oracle announce schedules of critical security-related patches to Java.
On January 14, 2013, security experts said that the update still failed to protect PCs from attack. This exploit hole prompted a response from the United States Department of Homeland Security encouraging users to disable or uninstall Java. Apple blacklisted Java for all computers running its OS X operating system through a virus-protection program.
In 2014, responding to then-recent Java security and vulnerability issues, security blogger Brian Krebs called for users to remove at least the Java browser plug-in, and also the entire software. "I look forward to a world without the Java plugin (and to not having to remind readers about quarterly patch updates) but it will probably be years before various versions of this plugin are mostly removed from end-user systems worldwide." "Once promising, it has outlived its usefulness in the browser, and has become a nightmare that delights cyber-criminals at the expense of computer users." "I think everyone should uninstall Java from all their PCs and Macs, and then think carefully about whether they need to add it back. If you are a typical home user, you can probably do without it. If you are a business user, you may not have a choice."
Adware
The Java runtime environment has a history of bundling sponsored software to be installed by default during installation and during the updates, which roll out every month or so. This has included the "Ask.com toolbar", which redirects browser searches to ads, and "McAfee Security Scan Plus". These offers can be blocked through a setting in the Java Control Panel, although this is not obvious. The setting is located under the "Advanced" tab of the Java Control Panel, under the "Miscellaneous" heading, as an option to suppress "sponsor offers".
Update system
Unlike Google Chrome and Flash Player, Java has yet to release an automatic updater that does not require user intervention and administrative rights.
See also
List of Java APIs
Java logging frameworks
Java performance
JavaFX
Jazelle
Java ConcurrentMap
Comparison of the Java and .NET platforms
List of JVM languages
List of computing mascots
References
External links
sun.com – Official developer site
Presentation by James Gosling about the origins of Java, from the JVM Languages Summit 2008
Java forums organization
Java Introduction, May 14, 2014, Java77 Blog
Computing platforms
Cross-platform software |
418030 | https://en.wikipedia.org/wiki/PowerBook%20Duo | PowerBook Duo | The PowerBook Duo is a line of subnotebooks manufactured and sold by Apple Computer from 1992 until 1997 as a more compact companion to the PowerBook line. Improving upon the PowerBook 100's portability (its immediate predecessor and Apple's third-smallest laptop), the Duo came in seven different models. They were the Duo 210, 230, 250, 270c, 280, 280c, and 2300c, with the 210 and 230 being the earliest, and the 2300c being the final incarnation before the entire line was dropped in early 1997.
Weighing in at a mere and slightly smaller than a sheet of paper at , and only thick, it was the lightest and smallest of all of Apple's PowerBooks at the time, and remains one of Apple's smallest notebooks ever produced. Only the MacBook Air, the Retina MacBook Pro and the Retina MacBook weigh less, though they are wider and deeper (but considerably thinner). The Duo had the most in common with the original MacBook Air which only included one USB 2.0 port, one video port (requiring an adapter) and one speaker port, but no ability for expansion.
The PowerBook Duo line was replaced by the PowerBook 2400, which was slightly larger in size than the Duos, but still only the fifth-smallest behind the 12-inch PowerBook G4 which succeeded it as fourth-smallest. Although both featured much more onboard functionality, they lacked docking ability.
Features
The Duo line offered an ultraportable design that was light and functional for travel and expandable via its unique docking connector. However, certain compromises were made to achieve this level of portability. The Duo series used a keyboard 88% of standard desktop size, which was criticized for being difficult to type on. Likewise, the trackball was reduced in size from even that used on the PowerBook 100. The only usable port which came standard on the Duo was a dual printer/modem EIA-422 serial port.
There was a slot for an expensive, optional, internal 14.4 Express Modem and no provision for built-in Ethernet. This somewhat limited configuration meant the only way to move data in or out of the laptop in a stock configuration, without purchasing additional accessories, was via a relatively slow AppleTalk connection, which was not practical in the event of hard drive problems. Compensating for these limitations, the initial Duo offering provided for a considerably higher RAM limit of 24 MB (as compared to the 100 series' 14 MB), and a standard 80 MB hard drive (versus the 100's 40 MB drive). The debut year for the Duo only offered a passive matrix display on both the mid-level and high-end models. In contrast to the high end of the 100-series line with which the Duos shared the same processors, the PowerBook 170 and 180, with their crisp active matrix displays, were both already in great demand over the lower-powered models with passive matrix displays. The following year, Apple replaced the earlier models with both an active matrix display and a color active matrix display, the latter becoming the de facto standard of the PowerBook line. The respective Duo models are easily differentiated by their display method and processor. All other features are identical.
Specifications
The 200-series Duos were powered by either Motorola 68030 or 68LC040 processors, ranging from 25 to 33 MHz. When Apple debuted its next-generation PowerPC processors in 1994, it took over a year for the first PowerPC Duo (the 2300c) to debut. The original PowerPC 601, like the original 68040 before it, produced too much heat and consumed too much power for Apple to use in any laptop but, by the end of 1995, the more efficient PowerPC 603e had been developed, which was featured in the Duo 2300c and its full-size companion, the PowerBook 5300 series. The PowerPC 603e was designed for a 64-bit bus, but was engineered by Apple to run on an older 32-bit bus to maintain compatibility with the Duo Docks. This led to poor system and video performance.
Docking stations
PowerBook Duos lacked most common ports (featuring only one internal printer/modem serial port and an optional fax/modem card port). In their place was docking ability, accomplished via a unique 156-pin Processor Direct Slot (PDS) giving the docks full access to the Duo's central processing unit (CPU) and data buses. Several dock options were offered by Apple and third parties.
Duo Dock (M7779) (1992), Duo Dock II (1994), Duo Dock Plus (1995)
This was the largest and most expensive dock for the PowerBook Duo, in a form factor common for that period: the Duo Dock (M7779) was first offered by Apple on October 19, 1992, and the similar docks presented by Compaq (as the LTE Lite Desktop Expansion Base) and IBM (as the 3550 Expansion Unit) were introduced in the same year. Unlike the smaller docks, or "port replicators" that plugged into the back of laptops, the listed docks pulled the laptop inside the dock's metal and plastic case via an internal sliding mechanism (similar to that of a VHS player). The Duo Dock turned the PowerBook Duo into a full-size, AC-powered, fully functional desktop computer with all the standard ports. Like a desktop computer, the dock could physically support a heavy, high-resolution CRT display on top. The Duo Dock included a floppy drive on the side, two NuBus expansion slots, an optional floating-point unit (FPU), level 2 cache, a slot for more VRAM to enable more colors at higher resolutions, and space for a second hard drive.
The original Duo Dock was replaced by the Duo Dock II on May 16, 1994, which added AAUI networking and compatibility with the newer color-screen PowerBook Duos. A replacement lid was offered to allow use of the thicker color Duos with the original Duo Dock.
The Dock II was followed by the Duo Dock Plus on May 15, 1995, which was identical to the Duo Dock II, but lacked the FPU and level 2 cache—which were not compatible with the 68LC040-processor Duo 280 and PowerPC-processor Duo 2300c. While the laptop's LCD display obviously could not be opened when inside the dock, additional NuBus video cards could be installed to drive up to three monitors.
Aging Duo Docks are known to have problems with the failing of the capacitors which drive the docking mechanism. This is colloquially known as 'The Duo Dock Tick of Death'.
MiniDock (M7780) (1992)
The Mini Dock was a port expander for the PowerBook Duo and was popularly offered by many third-party manufacturers and Apple. When attached, the PowerBook Duo could be plugged into various standard desktop devices including SCSI, Apple Desktop Bus (ADB), serial, floppy disk, external speakers, and an external display. This type of dock also allowed the Duo's internal LCD and battery to be used. Third-party contributions to the Mini Dock added a variety of specialized custom options including Ethernet connectivity, NTSC and PAL video ports. The only significant difference between these docks and a full desktop configuration was the lack of custom PDS or NuBus expansion slots, which were included on all standard desktop Macs, a shortfall made up in task-specific third-party dock offerings.
MicroDock
This type of dock was manufactured by both Apple and many third parties, and gave the PowerBook Duo up to three extra ports in a minimal configuration. Examples include floppy, SCSI, video and Ethernet docks, each typically included one ADB port as well. This was the least expensive, and most basic of the docks. This type of dock allowed the Duo's internal LCD to be used as well, and could run on the Duo's internal battery for a reduced amount of time. Popular due to the minimal impact in accessories that must be carried with the Duo, they offered a practical alternative to emergency hard disk and software situations and task-specific needs.
Design
The 2300 was the last Apple product to carry any vestige of the Snow White design language, which Apple had been phasing out since 1990. Drawing heavily upon improvements made to the original PowerBook 140 design, the Duo series continues many of the styling traits of the PowerBook 100, which is approximately equivalent in size and weight. In addition to the Snow White features, the Duo takes the 100's radius curves a step further along the display top, front, and sides, a treatment also heavily mirrored in the various docks.
PenLite
The PenLite was an early tablet computer prototyped by Apple Computer in 1993 around the same time as the Apple Newton. It was not a PDA but rather a complete computer. The project was canceled in 1994 due to its similarity to the Newton.
The PenLite was based on the PowerBook Duo and was meant to be a tablet-style addition, with a stylus as the input device. It was designed to be compatible with PowerBook Duo docks and accessories and ran the standard classic Mac OS.
In popular culture
One of the most stylish and iconic of the laptops available at the time, the Duo was widely used in advertising, film and television.
Sandra Bullock, starring as systems analyst Angela Bennett, prominently uses a PowerBook Duo 280c (and also a PowerBook 540c) in the movie The Net (1995).
In the TV sitcom Home Improvement, a PowerBook Duo was used in the fifth episode of season 6 (Episode #131: "Al's Video"; original air date October 15, 1996) as Jill's new computer. Just as she gets some work done on it, Tim comes along and destroys it by playing a game.
Anne Heche, playing Amy Barnes, a geologist and seismologist, uses a PowerBook Duo 280c in the movie Volcano (1997) to calculate the speed of lava flowing beneath the city streets.
In the film Wag The Dog (1997), the President's team members, including Anne Heche and Dustin Hoffman among others, use several PowerBook Duo 280c units.
In the critically acclaimed TV sitcom NewsRadio (1994-1999), Dave Nelson used a PowerBook Duo almost exclusively for the first four seasons, the only exceptions being the first few episodes in which he used a PowerBook 100 series. In the fourth season (beginning in 1997), PowerBook Duos were also used prominently by Lisa Miller and occasionally by other characters. In the fifth season (beginning in 1998), all computers on the show were replaced with PowerBook G3s and a first-generation iMac.
A complete PowerBook Duo system, including Dock, is featured prominently throughout seasons five through seven of Seinfeld.
In the movie Hackers (1995), Kate Libby owns a PowerBook Duo 280c (with a 2300c logic board, famously said to have a "28.8 bps modem"; a 28.8 modem for the 2300 was never produced and remained a prototype only). She claims it has a "P6 chip". While viewers assumed she was referring to the Intel Pentium Pro, it was actually the PowerPC 603e on the 2300 motherboard, which Apple referred to as the "P6" chip for marketing purposes. Dade Murphy is sent a clear-cased prototype 280c by Eugene "The Plague" Belford.
See also
List of Macintosh models grouped by CPU type
Docking station
References
External links
collection of powerbooks and duos
PowerBook Duo Resources
German PowerBook Duo Site
Lowendmac.com - PowerBooks
Prototypes of Apple hardware at The Apple Museum
Prototypes of Apple hardware at The Apple Collection
Apple Powerbook Duo 2300c Pictures
20 Macs for 2020 podcast #19
Duo
68k Macintosh computers
PowerPC Macintosh computers
Computer-related introductions in 1992
LIRC

LIRC (Linux Infrared Remote Control) is an open source package that allows users to receive and send infrared signals with a Linux-based computer system.
There is a Microsoft Windows equivalent of LIRC called WinLIRC.
With LIRC and an IR receiver the user can control their computer with almost any infrared remote control (e.g. a TV remote control). The user may for instance control DVD or music playback with their remote control.
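Button-to-action mappings are declared in a client configuration file, conventionally ~/.lircrc. The following minimal, hypothetical entry illustrates the format; the button name and command are assumptions and depend on how the remote is defined in lircd.conf:

```
# ~/.lircrc — tells an LIRC client what to do when a button is pressed
begin
    prog = irexec        # handle this entry with the irexec client
    button = KEY_PLAY    # button name as declared in lircd.conf (assumed)
    config = mpc toggle  # shell command to run; here, toggle music playback
end
```

With such an entry in place, running the irexec client in the background executes the configured command each time the lircd daemon reports the mapped button press.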
One GUI frontend is KDELirc, built on the KDE libraries.
See also
RC-5
External links
LIRC - Linux Infrared Remote Control
SourceForge.net: Linux Infrared Remote Control
WinLIRC Homepage
KDELirc Homepage
Free software programmed in C
Software related to embedded Linux
Infrared technology
Info (Unix)

Info is a software utility which forms a hypertextual, multipage documentation and help viewer working on a command-line interface.
Info reads info files generated by the texinfo program and presents the documentation as a tree, with simple commands to traverse the tree and to follow cross references. For instance, pressing the space bar scrolls down within the current tree node, or goes to the next node in the current document if already at the bottom of the current node, allowing the user to read the contents of an info file sequentially. Pressing the backspace key moves in the opposite direction. Furthermore:
] goes to the next node in the current document.
[ goes to the previous node in the current document.
n goes to the next node on the same level as the current node.
p goes to the previous node on the same level as the current node.
u ("up") goes to the parent of the current node.
l goes to the last visited node.
Moving the cursor over a link (a word preceded by an asterisk) and pressing the enter key follows the link.
Pressing the tab key will move the cursor to the next nearest link.
The C implementation of info was designed as the main documentation system of GNU-based operating systems and was then ported to other Unix-like operating systems. However, info files had already been in use by Emacs on ITS. On the TOPS-20 operating system, INFO was called XINFO.
List of Info readers
GNU info, distributed with Texinfo
pinfo
tkman
tkinfo
khelpcenter (click "Browse Info Pages")
emacs
info.vim (Vim plugin)
vinfo (reads Info files in the style of Vim help files)
GNOME Yelp
See also
Manual page (Unix)
List of Unix commands
References
Unix software
Online help
Command-line software
Systrace

Systrace is a computer security utility which limits an application's access to the system by enforcing access policies for system calls. This can mitigate the effects of buffer overflows and other security vulnerabilities. It was developed by Niels Provos and runs on various Unix-like operating systems.
Systrace is particularly useful when running untrusted or binary-only applications and provides facilities for privilege elevation on a system call basis, helping to eliminate the need for potentially dangerous setuid programs. It also includes interactive and automatic policy generation features, to assist in the creation of a base policy for an application.
Systrace used to be integrated into OpenBSD, but was removed in April 2016 (in favour of pledge post OpenBSD 5.9). It is available for Linux and Mac OS X, although the OS X port is currently unmaintained. It was removed from NetBSD at the end of 2007 due to several unfixed implementation issues. As of version 1.6f Systrace supports 64-bit Linux 2.6.1 via kernel patch.
Features
Systrace supports the following features:
Confines untrusted binary applications: An application is allowed to make only those system calls specified as permitted in the policy. If the application attempts to execute a system call that is not explicitly permitted, an alarm is raised.
Interactive policy generation with graphical user interface: Policies can be generated interactively via a graphical frontend to Systrace. The frontend shows system calls and their parameters not currently covered by policy and allows the user to refine the policy until it works as expected.
Supports different emulations: Linux, BSDI, etc.
Non-interactive policy enforcement: Once a policy has been trained, automatic policy enforcement can be used to deny all system calls not covered by the current policy. All violations are logged to Syslog. This mode is useful when protecting system services like a web server.
Remote monitoring and intrusion detection: Systrace supports multiple frontends; with a frontend that communicates over the network, remote monitoring and other advanced features are possible.
Privilege elevation: Using Systrace's privilege elevation mode, it's possible to get rid of setuid binaries. A special policy statement allows selected system calls to run with higher privileges, for example, creating a raw socket.
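The policy language expresses one rule per system call: each line names an emulation and a system call, an optional predicate on its arguments, and an action. The following hand-written sketch for /bin/ls is illustrative only, not a complete working policy; the specific paths and the rule set are assumptions:

```
Policy: /bin/ls, Emulation: native
    native-fsread: filename eq "/etc/malloc.conf" then permit
    native-fsread: filename match "/usr/lib/*" then permit
    native-fstat: permit
    native-mmap: permit
    native-fswrite: filename eq "/dev/tty" then permit
    native-exit: permit
```

Any system call not matched by a rule is denied and can be logged, which is what confines the application to its expected behavior.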
Vulnerability history
Systrace has had some vulnerabilities in the past, including:
Exploiting Concurrency Vulnerabilities in System Call Wrappers, a paper by Robert Watson from the First USENIX Workshop On Offensive Technologies (WOOT07) analyzing concurrency vulnerabilities across several system call wrapper platforms, including Systrace
Google Security discovers local privilege escalation in Systrace
Local root exploit on NetBSD
Vulnerabilities in systrace
See also
Seccomp
AppArmor
SELinux
Mandatory access control
References
External links
BSD software
OpenBSD
Unix security-related software
October Horse

In ancient Roman religion, the October Horse (Latin Equus October) was an animal sacrifice to Mars carried out on October 15, coinciding with the end of the agricultural and military campaigning season. The rite took place during one of three horse-racing festivals held in honor of Mars, the others being the two Equirria on February 27 and March 14.
Two-horse chariot races (bigae) were held in the Campus Martius, the area of Rome named for Mars, after which the right-hand horse of the winning team was transfixed by a spear, then sacrificed. The horse's head (caput) and tail (cauda) were cut off and used separately in the two subsequent parts of the ceremonies: two neighborhoods staged a fight for the right to display the head, and the freshly bloodied cauda was carried to the Regia for sprinkling the sacred hearth of Rome.
Ancient references to the Equus October are scattered over more than six centuries: the earliest is that of Timaeus (3rd century BC), who linked the sacrifice to the Trojan Horse and the Romans' claim to Trojan descent, with the latest in the Calendar of Philocalus (354 AD), where it is noted as still occurring, even as Christianity was becoming the dominant religion of the Empire. Most scholars see an Etruscan influence on the early formation of the ceremonies.
The October Horse is the only instance of horse sacrifice in Roman religion; the Romans typically sacrificed animals that were a normal part of their diet. The unusual ritual of the October Horse has thus been analyzed at times in light of other Indo-European forms of horse sacrifice, such as the Vedic ashvamedha and the Irish ritual described by Giraldus Cambrensis, both of which have to do with kingship. Although the ritual battle for possession of the head may preserve an element from the early period when Rome was ruled by kings, the October Horse's collocation of agriculture and war is characteristic of the Republic. The sacred topography of the rite and the role of Mars in other equestrian festivals also suggest aspects of initiation and rebirth ritual. The complex or even contradictory aspects of the October Horse probably result from overlays of traditions accumulated over time.
Description
The rite of the October Horse took place on the Ides of October, but no name is recorded for a festival on that date. The grammarian Festus describes it as follows:
The October Horse is named from the annual sacrifice to Mars in the Campus Martius during the month of October. It is the right-hand horse of the winning team in the two-horse chariot races. The customary competition for its head between the residents of the Suburra and those of the Sacra Via was no trivial affair; the latter would get to attach it to the wall of the Regia, or the former to the Mamilian Tower. Its tail was transported to the Regia with sufficient speed that the blood from it could be dripped onto the hearth for the sake of becoming part of the sacred rite (res divina).
In a separate passage, the Augustan antiquarian Verrius Flaccus adds the detail that the horse's head is adorned with bread. The Calendar of Philocalus notes that on October 15 "the Horse takes place at the Nixae," either an altar to birth deities (di nixi) or less likely an obscure landmark called the Ciconiae Nixae. According to Roman tradition, the Campus Martius had been consecrated to Mars by their ancestors as horse pasturage and an equestrian training ground for youths.
The "sacred rite" that the horse's blood became part of is usually taken to be the Parilia, a festival of rural character on April 21, which became the date on which the founding of Rome was celebrated.
War and agriculture
Verrius Flaccus notes that the horse ritual was carried out ob frugum eventum, usually taken to mean "in thanks for the completed harvest" or "for the sake of the next harvest", since winter wheat was sown in the fall. The phrase has been connected to the divine personification Bonus Eventus, "Good Outcome," who had a temple of unknown date in the Campus Martius and whom Varro lists as one of the twelve agricultural deities. But like other ceremonies in October, the sacrifice occurred during the time of the army's return and reintegration into society, for which Verrius also accounted by explaining that a horse is suited for war, an ox for tilling. The Romans did not use horses as draft animals for farm work, nor chariots in warfare, but Polybius specifies that the victim is a war horse.
The ritual was held outside the pomerium, Rome's sacred boundary, presumably because of its martial character, but agriculture was also an extra-urban activity, as Vitruvius indicates when he notes that the correct sacred place for Ceres was outside the city (extra urbem loco). In Rome's early history, the roles of soldier and farmer were complementary:
In early Rome agriculture and military activity were closely bound up, in the sense that the Roman farmer was also a soldier. … In the case of the October Horse, for example, we should not be trying to decide whether it is a military, or an agricultural festival; but see it rather as one of the ways in which the convergence of farming and warfare (or more accurately of farmers and fighters) might be expressed.
This polyvalence was characteristic of the god for whom the sacrifice was conducted, since among the Romans Mars brought war and bloodshed, agriculture and virility, and thus both death and fertility within his sphere of influence.
The Parilia and suffimen
The Augustan poets Propertius and Ovid both mention horse as an ingredient in the ritual preparation suffimen or suffimentum, which the Vestals compounded for use in the lustration of shepherds and their sheep at the Parilia. Propertius may imply that this horse was not an original part of the preparation: "the purification rites (lustra) are now renewed by means of the dismembered horse". Ovid specifies that the horse's blood was used for the suffimen. While the blood from the tail was dripped or smeared on the sacred hearth of Rome in October, blood or ashes from the rest of the animal could have been processed and preserved for the suffimen as well. Although no other horse sacrifice in Rome is recorded, Georges Dumézil and others have attempted to exclude the Equus October as the source of equine blood for the Parilia.
Another important ingredient for the suffimen was the ash produced from the holocaust of an unborn calf at the Fordicidia on April 15, along with the stalks from which beans had been harvested. One source, from late antiquity and not always reliable, notes that beans were sacred to Mars.
Suffimentum is a general word for a preparation used for healing, purification, or warding off ill influence. In his treatise on veterinary medicine, Vegetius recommends a suffimentum as an effective cure for draft animals and for humans prone to emotional outbursts, as well as for driving off hailstorms, demons and ghosts (daemones and umbras).
The victim
Sacrificial victims were most often domestic animals normally part of the Roman diet, and the meat was eaten at a banquet shared by those celebrating the rite. Horse meat was distasteful to the Romans, and Tacitus classes horses among "profane" animals. Inedible victims such as the October Horse and dogs were typically offered to chthonic deities in the form of a holocaust, resulting in no shared meal. In Greece dog sacrifices were made to Mars' counterpart Ares and the related war god Enyalios. At Rome, dogs were sacrificed at the Robigalia, a festival for protecting the crops at which chariot races were held for Mars along with the namesake deity, and at a very few other public rites. Birth deities, however, also received offerings of puppies or bitches, and infant cemeteries show a high concentration of puppies, sometimes ritually dismembered. Inedible victims were offered to a restricted group of deities mainly involved with the cycle of birth and death, but the reasoning is obscure.
The importance of the horse to the war god is likewise not self-evident, since the Roman military was based on infantry. Mars' youthful armed priests the Salii, attired as "typical representatives of the archaic infantry," performed their rituals emphatically on foot, with dance steps. The equestrian order was of lesser social standing than the senatorial patres, "fathers", who were originally the patricians only. The Magister equitum, "Master of the Horse," was subordinate to the Dictator, who was forbidden the use of the horse except through special legislation. By the late Republic, the Roman cavalry was formed primarily from allies (auxilia), and Arrian emphasizes the foreign origin of cavalry training techniques, particularly among the Celts of Gaul and Spain. Roman technical terms pertaining to horsemanship and horse-drawn vehicles are mostly not Latin in origin, and often from Gaulish.
Under some circumstances, Roman religion placed the horse under an explicit ban. Horses were forbidden in the grove of Diana Nemorensis, and the patrician Flamen Dialis was religiously prohibited from riding a horse. Mars, however, was associated with horses at his Equirria festivals and the equestrian "Troy Game", which was one of the events Augustus staged for the dedication of the Temple of Mars Ultor in 2 BC.
Horse sacrifice was regularly offered by peoples the Romans classified as "barbarians," such as Scythians, but also at times by Greeks. In Macedonia, "horses in armor" were sacrificed as a lustration for the army. Immediately after describing the October Horse, Festus gives three other examples: the Spartans sacrifice a horse "to the winds" on Mount Taygetus; among the Sallentini, horses were burnt alive for an obscure Jove Menzana; and every year the Rhodians dedicated a four-horse chariot (quadriga) to the Sun and cast it into the sea. The quadriga traditionally represented the sun, as the biga did the moon. A Persian horse-sacrifice to "Hyperion clothed in rays of light" was noted by Ovid and Greek sources.
In contrast to cultures that offered a horse to the war god in advance to ask for success, the Roman horse sacrifice marked the close of the military campaigning season. Among the Romans, horse- and chariot-races were characteristic of "old and obscure" religious observances such as the Consualia that at times propitiated chthonic deities. The horse races at the shadowy Taurian Games in honor of the underworld gods (di inferi) were held in the Campus Martius as were Mars' Equirria. The horse had been established as a funerary animal among the Greeks and Etruscans by the Archaic period. Hendrik Wagenvoort even speculated about an archaic form of Mars who "had been imagined as the god of death and the underworld in the shape of a horse."
The chariot
The two-horse chariot races (bigae) that preceded the October Horse sacrifice determined the selection of the optimal victim. In a dual yoke, the right-hand horse was the lead or strongest animal, and thus the one from the winning chariot was chosen as the most potent offering for Mars.
Chariots have a rich symbolism in Roman culture, but the Romans never used chariots in war, though they faced enemies who did. The chariot was part of Roman military culture primarily as the vehicle of the triumphing general, who rode in an ornamented four-horse car markedly impractical for actual war. Most Roman racing practices were of Etruscan origin, part of the Etruscan tradition of public games (ludi) and equestrian processions. Chariot racing was imported from Magna Graecia no earlier than the 6th century BC.
Images of chariot races were considered good luck, but the races themselves were magnets for magic in attempts to influence the outcome. One law from the Theodosian Code prohibits charioteers from using magic to win, on pain of death. Some of the ornaments placed on horses were good-luck charms or devices to ward off malevolence, including bells, wolves' teeth, crescents, and brands. This counter-magic was directed at actual practices; binding spells (defixiones) have been found at race tracks. The defixio sometimes employed the spirits of the prematurely dead to work harm. On Greek racetracks, the turning posts were heroes' tombs or altars for propitiating malevolent spirits who might cause harm to the men or horses. The design of the turning posts (metae) on a Roman race course was derived from Etruscan funerary monuments.
Pliny attributes the invention of the two-horse chariot to the "Phrygians", an ethnic designation that the Romans came to regard as synonymous with "Trojan." In the Greek narrative tradition, chariots played a role in Homeric warfare, reflecting their importance among the historical Mycenaeans. By the time the Homeric epics were composed, however, fighting from chariot was no longer a part of Greek warfare, and the Iliad has warriors taking chariots as transportation to the battlefield, then fighting on foot. Chariot racing was a part of funeral games quite early, as the first reference to a chariot race in Western literature is as an event in the funeral games held for Patroclus in the Iliad. Perhaps the most famous scene from the Iliad involving a chariot is Achilles dragging the body of Hector, the Trojan heir to the throne, three times around the tomb of Patroclus; in the version of the Aeneid, it is the city walls that are circled. Variations of the scene occur throughout Roman funerary art.
Gregory Nagy sees horses and chariots, and particularly the chariot of Achilles, as embodying the concept of ménos, which he defines as "conscious life, power, consciousness, awareness," associated in the Homeric epics with thūmós, "spiritedness," and psychē, "soul," all of which depart the body in death. The gods endow both heroes and horses with ménos through breathing into them, so that "warriors eager for battle are literally 'snorting with ménos.'" A metaphor at Iliad 5.296 compares a man falling in battle to horses collapsing when they are unharnessed after exertions. Cremation frees the psychē from both thūmós and ménos so that it may pass into the afterlife; the horse, which embodies ménos, races off and leaves the chariot behind, as in the philosophical allegory of the chariot from Plato. The anthropological term mana has sometimes been borrowed to conceptualize the October Horse's potency, also expressed in modern scholarship as numen. The physical exertions of the hard-breathing horse in its contest are thought to intensify or concentrate this mana or numen.
In honoring the god who presided over the Roman census, which among other functions registered the eligibility of young men for military service, the festivals of Mars have a strongly lustral character. A lustration was performed in the Campus Martius following the census. Although lustral ceremonies are not recorded as occurring before the chariot races of the Equirria or the October Horse, it is plausible that they were, and that they were seen as a test or assurance of the lustration's efficacy.
The head
The significance of the October Horse's head as a powerful trophy may be illuminated by the caput acris equi, "head of a spirited ('sharp') horse," which Vergil says was uncovered by Dido and her colonists when they began the dig to found Carthage: "by this sign it was shown that the race (gens) would be distinguished in war and abound with the means of life." The 4th-century agricultural writer Palladius advised farmers to place the skull of a horse or ass on their land; the animals were not to be "virgin," because the purpose was to promote fertility. The practice may be related to the effigies known as oscilla, figures or faces that Vergil says were hung from pine trees by mask-wearing Ausonian farmers of Trojan descent when they were sowing seed.
The location of sexual vitality or fertility in the horse's head suggests its talismanic potency. The substance hippomanes, which was thought to induce sexual passion, was supposedly exuded from the forehead of a foal; Aelian (ca. 175–235 AD) says either the forehead or "loins." Called amor by Vergil, it is an ingredient in Dido's ritual preparations before her suicide in the Aeneid.
On Roman funerary reliefs, the deceased is often depicted riding on a horse for his journey to the afterlife, sometimes pointing to his head. This gesture signifies the Genius, the divine embodiment of the vital principle found in each individual conceived of as residing in the head, in some ways comparable to the Homeric thumos or the Latin numen.
Bread pendants
Pendants of bread were attached to the head of the Equus October: a portion of the inedible sacrifice was retained for humans and garnished with an everyday food associated with Ceres and Vesta. The shape of the "breads" is not recorded. Equines decorated with bread are found also on the Feast of Vesta on June 9, when the asses who normally worked in the milling and baking industry were dressed with garlands from which decorative loaves dangled. According to Ovid, the ass was honored at the Vestalia as a reward for its service to the Virgin Mother, who is portrayed in Augustan ideology as simultaneously native and Trojan. When the ithyphallic god Priapus, an imported deity who was never the recipient of public cult, was about to rape Vesta as she slept, the braying ass woke her. In revenge, Priapus thereafter demanded the ass as a customary sacrifice to him. The early Christian writer Lactantius says that the garland of bread pendants commemorates the preservation of Vesta's sexual integrity (pudicitia). Aelian recounts a myth in which the ass misplaces a pharmakon entrusted to him by the king of the gods, thereby causing humanity to lose its eternal youth.
Horse head pendant in amber (Italic, 5th century BC)
The symbolism of bread for the October Horse is unstated in the ancient sources. Robert Turcan has seen the garland of loaves as a way to thank Mars for protecting the harvest. Mars was linked to Vesta, the Regia, and the production of grain through several religious observances. In his poem on the calendar, Ovid thematically connects bread and war throughout the month of June (Iunius, a name for which Ovid offers multiple derivations including Juno and "youths", iuniores). Immediately following the story of Vesta, Priapus, and the ass, Ovid associates Vesta, Mars, and bread in recounting the Gallic siege of Rome. The Gauls were camped in the Field of Mars, and the Romans had taken to their last retreat, the Capitoline citadel. At an emergency council of the gods, Mars objects to the removal of the sacred talismans of Trojan Vesta which guarantee the safety of the state, and is indignant that the Romans, destined to rule the world, are starving. Vesta causes flour to materialize, and the process of breadmaking occurs miraculously during the night, resulting in an abundance (ops) of the gifts of Ceres. Jupiter wakes the sleeping generals and delivers an oracular message: they are to throw that which they least want to surrender from the citadel onto the enemy. Puzzled at first, as is conventional in receiving an oracle, the Romans then throw down the loaves of bread as weapons against the shields and helmets of the Gauls, causing the enemy to despair of starving Rome into submission.
J.G. Frazer pointed to a similar throwing away of food abundance as a background to the October Horse, which he saw as the embodiment of the "corn spirit". According to tradition, the fields consecrated to Mars had been appropriated by the Etruscan king Tarquinius Superbus for his private use. Accumulated acts of arrogance among the royal family led to the expulsion of the king. The overthrow of the monarchy occurred at harvest time, and the grain from the Campus Martius had already been gathered for threshing. Even though the tyrant's other property had been seized and redistributed among the people, the consuls declared that the harvest was under religious prohibition. In recognition of the new political liberty, a vote was taken on the matter, after which the grain and chaff were willingly thrown into the Tiber river. Frazer saw the October Horse as a harvest festival in origin, because it took place on the king's farmland in the autumn. Since no source accounts for what happens to the horse apart from the head and tail, it is possible that it was reduced to ash and disposed of in the same manner as Tarquin's grain.
The tail
George Devereux and others have argued that cauda, or οὐρά (oura) in Greek sources, is a euphemism for the penis of the October Horse, which could be expected to contain more blood for the suffimen. The tail itself, however, was a magico-religious symbol of fertility or power. The practice of attaching a horse's tail to a helmet may originate in a desire to appropriate the animal's power in battle; in the Iliad, Hector's horse-crested helmet is a terrifying sight. In the iconography of the Mithraic mysteries, the tail of the sacrificial bull is often grasped, as is the horse's tail in depictions of the Thracian Rider god, as if to possess its power. A pinax from Corinth depicts a dwarf holding his phallus with both hands while standing on the tail of a stallion carrying a rider; although the dwarf has sometimes been interpreted as the horse-threatening Taraxippus, the phallus is more typically an apotropaic talisman (fascinum) to ward off malevolence.
Satyrs and sileni, though later characterized as goat-like, in the Archaic period were regularly depicted with equine features, including a prominent horsetail; they were known for uncontrolled sexuality, and are often ithyphallic in art. Satyrs are first recorded in Roman culture as part of ludi, appearing in the preliminary parade (pompa circensis) of the first Roman Games. The tail of the wolf, an animal regularly associated with Mars, was said by Pliny to contain amatorium virus, aphrodisiac power. Therefore, a phallic-like potency may be attributed to the October Horse's tail without requiring cauda to mean "penis," since the ubiquity of phallic symbols in Roman culture would make euphemism or substitution unnecessary.
The Trojan Horse
Timaeus (3rd century BC) attempted to explain the ritual of the October Horse in connection with the Trojan Horse—an attempt mostly regarded by ancient and modern scholars as "hardly convincing." As recorded by Polybius (2nd century BC),
he tells us that the Romans still commemorate the disaster at Troy by shooting (κατακοντίζειν, "to spear down") on a certain day a war-horse before the city in the Campus Martius, because the capture of Troy was due to the wooden horse — a most childish statement. For at that rate we should have to say that all barbarian tribes were descendants of the Trojans, since nearly all of them, or at least the majority, when they are entering on a war or on the eve of a decisive battle sacrifice a horse, divining the issue from the manner in which it falls. Timaeus in dealing with the foolish practice seems to me to exhibit not only ignorance but pedantry in supposing that in sacrificing a horse they do so because Troy was said to have been taken by means of a horse.
Plutarch (d. 120 AD) also offers a Trojan origin as a possibility, noting that the Romans claimed to have descended from the Trojans and would want to punish the horse that betrayed the city. Festus said that this was a common belief, but rejects it on the same grounds as Polybius.
Mars and a horse's head appear on opposite sides of the earliest Roman didrachm, introduced during the Pyrrhic War, which was the subject of Timaeus's book. Michael Crawford attributes Timaeus's interest in the October Horse to the appearance of this coinage in conjunction with the war.
Walter Burkert has suggested that while the October Horse cannot be taken as a sacrificial reenactment against the Trojan Horse, there may be some shared ritualistic origin. The Trojan Horse succeeded as a stratagem because the Trojans accepted its validity as a votive offering or dedication to a deity, and they wanted to transfer that power within their own walls. The spear that the Trojan priest Laocoön drives into the side of the wooden horse is paralleled by the spear used by the officiating priest at the October sacrifice.
Spear and officiant
Timaeus, who interpreted the October Horse in light of Rome's claim to Trojan origins, is both the earliest source and the only one that specifies a spear as the sacrificial implement. The spear was an attribute of Mars in the way that Jupiter wielded the thunderbolt or Neptune the trident. The spear of Mars was kept in the Regia, the destination of the October Horse's tail. Sacrificial victims were normally felled with a mallet and securis (sacrificial axe), and other implements would have been necessary for dismembering the horse. A spear was used against the bull in a taurobolium, perhaps as a remnant of the ritual's origin as a hunt, but otherwise it is a sacrificial oddity.
Because the sacrifice took place in the Campus Martius, during a religious festival celebrated for Mars, it is often assumed that the Flamen Martialis presided. This priest of Mars may have wielded a spear ritually on other occasions, but no source names the officiant over the October Horse rite.
On the calendar
The Equus October occurred on the Ides of October. All Ides were sacred to Jupiter. Here as at a few other points in the calendar, a day sacred to Mars doubles up with that of another god. The Equus preceded the Armilustrium ("Purification of Arms") on October 19. Although most of Mars' festivals cluster in his namesake month of March (Martius), ceremonies pertaining to Mars in October are seen as concluding the season in which he was most active.
André Dacier, an early editor of Festus, noted in regard to the October Horse the tradition that Troy had fallen in October. The October Horse figured in the elaborate efforts of the 19th-century chronologist Edward Greswell to ascertain the date of that event. Greswell assumed that the Equus October commemorated the date Troy fell, and after accounting for adjustments to the original Roman calendar as a result of the Julian reform, arrived at October 19, 1181 BC.
The festival diametrically opposed to the October Horse on the calendar was the Fordicidia on the Ides of April. The two festivals were divided by six lunations, with a near-perfect symmetry of days (177 and 178) between them in the two halves of the year. The peculiar sacrifice of unborn calves on the Fordicidia provided the other animal ingredient for the suffimen of the Parilia on April 21.
Plutarch places the horse sacrifice on the Ides of December, presumably because it occurred in the tenth month, which in the original Roman calendar was December instead of October, as indicated by the month's name (from decem, "ten").
Topography
Most religious events at Rome were set in a single place, or held simultaneously in multiple locations, such as neighborhoods or private households. But like the ritual of the Argei, the October Horse links several sites within Roman religious topography. The mapping of sites may be part of the ritual's meaning, accumulated in layers over time.
The chariot races and sacrifice take place in the Campus Martius, formerly ager Tarquiniorum, Tarquin land, an alluvial plain along the Tiber that was outside the pomerium, Rome's sacred boundary. Religious rituals involving war, agriculture, and death are regularly held outside the pomerium. The race seems to have been staged with temporary facilities on the Trigarium, near the Tarentum, the precinct within which the Altar of Dis and Proserpina was located. Father Dis was the Roman equivalent of the Greek Plouton (Pluto), and his consort Proserpina (Persephone) embodied the vegetative cycle of growth symbolizing the course of the human soul through birth, death, and rebirth into the afterlife, over which the couple presided in the mysteries. The cult may have been imported to Rome when the Saecular Games were instituted in 249 BC.
Ad Nixas
The sacrifice itself took place within the Tarentum ad Nixas, probably an altar to the deities of birth (di nixi), invoked as Ilithyis and given a nocturnal sacrifice in 17 BC at the Saecular Games, which originated at the site as the ludi tarentini. According to Festus, the ludi tarentini were instituted in honor of Mars under Tarquinius Superbus. Birth deities appear both in the epigraphic record of the 17 BC games and prominently in Horace's Carmen Saeculare, composed for the occasion and performed by a children's choir: "In accordance with rite, open up full-term births, Ilithyia: watch over mothers and keep them calm, whether you are best called Lucina or Genitalis".
The Campus Martius continued in the Imperial era to be a place for equestrian and military training for youth. The Temple of Mars Ultor dedicated in 2 BC by Augustus in the Campus became the site at which young men sacrificed to conclude their rite of passage into adulthood when assuming the toga virilis ("man's toga") around age 14. The October Horse sacrifice for Mars at an altar for birth deities suggests his role as a patron to young warriors who undergo the symbolic rebirth of initiation ritual, a theme also of the equestrian Troy Game. The emperor Julian mentions the sacrifice of a horse in Roman initiation rites, without specifying further. To prove themselves, younger, less experienced drivers usually started out with the two-horse chariots that were used in the October Horse race. Chariot races are the most common scene depicted on the sarcophagi of Roman children, and typically show Cupids driving bigae. Roman rituals of birth and death were closely related, given the high rate of infant mortality and death in childbirth. The Taurian Games, horse races held in the Campus Martius to propitiate gods of the underworld (di inferi), were instituted in response to an epidemic of infant mortality.
Some scholars think Roman conceptions of Mars were influenced by the Etruscan child-god Maris and the centaur Mares, ancestor of the Ausones. Maris is depicted with a cauldron symbolizing rebirth, and the half-man, half-horse Mares three times underwent death and rebirth. In association with Etruscan-influenced horse-racing festivals, John F. Hall saw Mars as a god having "power over death."

Ad Nixas may, however, refer to a landmark called the Ciconiae Nixae ("Travailing Storks"), which did not exist during the Republican period. In that case, the original site for the sacrifice was likely to have been the Altar of Mars (Ara Martis) in the Campus Martius, the oldest center in Rome for the cultivation of Mars as a deity.
Ritual bifurcation
The dismemberment of the horse led to a ritual bifurcation into ceremonies involving the head and tail separately. The tail was speedily transported by foot to the Regia. The route would have crossed east of the center of the Campus Martius, and along the outside of the Servian Wall to the Porta Fontinalis (in present-day Rome, to the northeast of the Altare della Patria). A monumental portico built in 193 BC connected the Porta Fontinalis to the Altar of Mars in the Campus. Once within the walls, the route would have followed the Clivus Lautumiarum up to the Comitium, then along the Via Sacra to the Regia, for about a mile. The blood from the tail was then dripped or smeared onto the sacred hearth. This collocation of divine functions recalls the annual renewal of the fire of Vesta on March 1, the "birthday" of Mars, when laurel was hung on the Regia and New Year's Day originally was celebrated on the archaic Roman calendar.
The head became the object of contention between two factions, residents of the Via Sacra and of the Subura. The battle decided where the head would be displayed for the coming year. If the Suburan faction won, it would be mounted in their neighborhood on the Tower of the Mamilii (Turris Mamilia). If the residents of the Via Sacra won, the head would go to the Regia, formerly the residence of the king, as well as the destination of the tail.
The claim of the Mamilii to the head may be based on their family history, which connected them by marriage to the ruling dynasty of the Tarquins. A Mamilius who was the son-in-law of Tarquinius Superbus, the last Etruscan king, had given him refuge after he was expelled from Rome and the monarchy abolished. Despite this questionable beginning, the Mamilii were later known for loyalty and outstanding service to the Republic.
The Subura had equine associations in the Imperial era. Martial mentions mule teams on its steep slope, though normally traffic from draft animals was not permitted within Rome during daylight hours. An inscription found there indicates that the muleteers sought the divine protection of Hercules, Silvanus, and Epona. Silvanus had an association with Mars dating back to the archaic agricultural prayer preserved by Cato's farming treatise, in which the two are invoked either as one or jointly to protect the health of livestock. Epona was the Celtic horse goddess, the sole deity with a Gaulish name whose cult can be documented in Rome.
Exactly where the ceremonial struggle took place, or how, is unclear, but it implies a final procession to either site.
Modern interpretations
Corn spirit
During the era of Wilhelm Mannhardt, J.G. Frazer and the Cambridge Ritualists, the October Horse was regarded as the embodiment of the "corn spirit", "conceived in human or animal form" in Frazer's view, so that "the last standing corn is part of its body—its neck, its head, or its tail." ("Corn" here means "grain" in general, not "maize".) In The Golden Bough (1890), Frazer regarded the horse's tail and blood as "the chief parts of the corn-spirit's representative," the transporting of which to the Regia brought the corn-spirit's blessing "to the king's house and hearth" and the community. He conjectured that horses were also sacrificed at the grove of Diana Nemorensis at Aricia, as a mythic retaliation because the resurrected Virbius, the first divine "King of the Wood" (the priest called rex nemorensis), had been killed by horses—an explanation also of why horses were banned from the grove. As early as 1908, William Warde Fowler expressed his doubts that the corn-spirit concept sufficiently accounted for all the ritual aspects of the Equus October.
Dumézil and functionality
Dumézil argued that the October Horse preserved vestiges of a common Indo-European rite of kingship, evidenced also by the Vedic ashvamedha and the Irish inaugural sacrifice described by Giraldus Cambrensis as taking place in Ulster in the early medieval period. Perhaps the most striking similarity between the Vedic ritual and the Roman is that the sacrificial victim was the right-hand horse of a chariot team, though not the winner of a race in the Vedic rite. The head in the ashvamedha, signifying spiritual energy, was reserved as a talisman for the king afterwards; the middle of the horse embodied physical force; and the tail was grasped by the officiant and represented the fertility of livestock. No race was involved in the Celtic ritual, either; the horse, a mare who seems to have been the sexual surrogate of the goddess of sovereignty, was consumed communally by king and people from a cauldron in which he was immersed and inaugurated. (In the ashvamedha, the gender of horse and human is reversed.) Both the chariot race and an implied cauldron of initiation (to the extent that the latter might be relevant to the October Horse through the comparanda of the Troy Game and Mars' assimilation to the child-god Maris) are generally regarded as the elements of the Roman festival most likely to be Etruscan, and thus of uncertain value as to an Indo-European origin.
Some fundamental differences between the Roman rite and the Vedic and Celtic forms pose obstacles to situating the Equus October within the trifunctional schema. The equus is sacrificed to the Roman god of war, not kingship. Dumézil's follower Jaan Puhvel deals with the Roman rite only glancingly in his essay "Aspects of Equine Functionality," exploring mainly the Vedic and Celtic evidence for an "Indo-European equine myth" that "involves the mating of a kingship-class representative with the hippomorphous transfunctional goddess, and the creation of twin offspring belonging to the level of the third estate."
Puhvel finds few linkages between the October Horse and the ashvamedha, primarily because the method of killing the horse differs so dramatically, and the crucial element of ritual mating is absent. He observes, however, that "the absence of the sexual element in Roman horse sacrifice is no surprise, for early Roman ritual is exceedingly nonerotic"—an avoidance he attributes to the Romans' desire to differentiate their sexual probity from the supposed license of the Etruscans.
Homo Necans
In Homo Necans, Walter Burkert saw the October Horse as a "sacrifice of dissolution" (hence his willingness to entertain the ancient tradition that associated it with the Fall of Troy), and the struggle for the head as an agon, a competitive contest that vents violence and rage, as do funeral games.
Julius Caesar and human sacrifice
In 46 BCE, discontent arose among the troops supporting Julius Caesar in the civil wars. His lavish public expenditures, they complained, came at their expense: Instead of raising the army's pay, Caesar was using his newly confiscated wealth for such displays as a silk canopy to shelter spectators at the games he staged. The disgruntled soldiers rioted. Caesar came upon them, and shocked them back into discipline by killing one on sight. According to Cassius Dio, the sole source for the episode:
Two others were slain as a sort of ritual observance (hierourgia, ἱερουργία). The true cause I am unable to state, inasmuch as the Sibyl made no utterance and there was no other similar oracle, but at any rate they were sacrificed in the Campus Martius by the pontifices and the priest of Mars, and their heads were set up near the Regia.
Both Wissowa and Dumézil read Dio's sardonic take on these events to mean that an actual sacrifice occurred with human victims replacing the October Horse. The two killings have no common elements other than the site and the display of the heads at the Regia, but the passage has been used as evidence that the flamen of Mars presided over the October Horse as well, even though the officiant is never mentioned in sources that deal explicitly with the Equus. Human sacrifice had always been rare at Rome, and had been formally abolished as a part of public religion about fifty years earlier. Some executions took on a sacral aura, but Dio seems to regard the soldiers' deaths as a grotesque parody of a sacrifice, whatever Caesar's intent may have been. Jörg Rüpke thought that Dio's account, while "muddled", might indicate that Caesar as pontifex maximus took up the Trojan interpretation of the October Horse, in light of the Julian family's claim to have descended directly from Iulus, the son of the Trojan refugee Aeneas.

In Colleen McCullough's novel The October Horse, Caesar himself becomes the sacrificial victim.
Notes
References
Further reading
Jens Henrik Vanggaard, "The October Horse," Temenos 15 (1979) 81–95.
Animal festival or ritual
Roman animal sacrifice
Campus Martius
Horses in religion
Ancient chariot racing
Processions in ancient Rome
October observances
Equestrian festivals
Festivals of Mars

Software as a service

Software as a service (SaaS) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. SaaS is also known as "on-demand software" and Web-based/Web-hosted software.
SaaS is considered to be part of cloud computing, along with infrastructure as a service (IaaS), platform as a service (PaaS), desktop as a service (DaaS), managed software as a service (MSaaS), mobile backend as a service (MBaaS), datacenter as a service (DCaaS), integration platform as a service (iPaaS), and information technology management as a service (ITMaaS).
SaaS apps are typically accessed by users using a thin client, e.g. via a web browser. SaaS became a common delivery model for many business applications, including office software, messaging software, payroll processing software, DBMS software, management software, CAD software, development software, gamification, virtualization, accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, field service management, human resource management (HRM), talent acquisition, learning management systems, content management (CM), geographic information systems (GIS), and service desk management.
For example, a SaaS product can be deployed on AWS infrastructure, with buyers granted access to the software in the seller's AWS environment; the seller remains responsible for managing customer access, account creation, resource provisioning, and account management within the software.
SaaS has been incorporated into the strategy of nearly all enterprise software companies.
History
Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe computer providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centers.
The expansion of the Internet during the 1990s brought about a new class of centralized computing, called application service providers (ASP). ASPs provided businesses with the service of hosting and managing specialized business applications, to reduce costs through central administration and the provider's specialization in a particular business application. Two of the largest ASPs were USI, which was headquartered in the Washington, DC area, and Futurelink Corporation, headquartered in Irvine, California.
Software as a service essentially extends the idea of the ASP model. The term software as a service (SaaS), however, is commonly used in more specific settings:
While most initial ASPs focused on managing and hosting third-party independent software vendors' software, SaaS vendors typically develop and manage their own software.
Whereas many initial ASPs offered more traditional client–server applications, which require installation of software on users' personal computers, later implementations can be Web applications which only require a web browser to use.
Whereas the software architecture used by most initial ASPs mandated maintaining a separate instance of the application for each business, SaaS services can utilize a multi-tenant architecture, in which the application serves multiple businesses and users, and partitions its data accordingly.
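The tenant-partitioned data access described above can be sketched as follows. This is a minimal illustration of the idea, not any vendor's actual implementation; the `TenantStore` class and its method names are invented for the example.

```python
# Sketch of multi-tenant data partitioning: one application instance
# serves many tenants, and every read or write is scoped by tenant ID.

class TenantStore:
    def __init__(self):
        self._rows = []  # shared physical storage for all tenants

    def insert(self, tenant_id, record):
        # Every row is tagged with the tenant that owns it.
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Reads are always filtered by tenant, so one customer's data
        # is never visible to another even though storage is shared.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = TenantStore()
store.insert("acme", {"invoice": 101})
store.insert("globex", {"invoice": 555})

print(store.query("acme"))  # only Acme's rows come back
```

In a real multi-tenant database the same effect is achieved with a tenant-ID column (or schema-per-tenant) and mandatory filtering at the data-access layer.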
The acronym first appeared in the goods and services description of a USPTO trademark, filed on September 23, 1985. DbaaS (database as a service) has emerged as a sub-variety of SaaS and is a type of cloud database.
Microsoft referred to SaaS as "software plus services" for a few years.
Distribution and pricing
The cloud (or SaaS) model has no physical need for indirect distribution because it is not distributed physically and is deployed almost instantaneously, thereby negating the need for traditional partners and middlemen. Unlike traditional software, which is conventionally sold as a perpetual license with an up-front cost (and an optional ongoing support fee), SaaS providers generally price applications using a subscription fee, most commonly a monthly fee or an annual fee. Consequently, the initial setup cost for SaaS is typically lower than the equivalent enterprise software. SaaS vendors typically price their applications based on some usage parameters, such as the number of users using the application. However, because in a SaaS environment customers' data reside with the SaaS vendor, opportunities also exist to charge per transaction, event, or other units of value, such as the number of processors required.
The relatively low cost for user provisioning (i.e., setting up a new customer) in a multi-tenant environment enables some SaaS vendors to offer applications using the freemium model. In this model, a free service is made available with limited functionality or scope, and fees are charged for enhanced functionality or larger scope.
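The per-user subscription pricing and freemium model described above can be sketched in a few lines. The tier boundary and price below are invented purely for illustration; real vendors use many different usage parameters and price points.

```python
# Sketch of per-user subscription pricing with a freemium tier:
# a small number of users is free (limited scope), and fees apply
# only beyond that allowance. All numbers are hypothetical.

FREE_USER_LIMIT = 3          # up to 3 users: free tier
PRICE_PER_USER_MONTH = 12.0  # monthly fee per billable user

def monthly_fee(num_users: int) -> float:
    """Charge only for users beyond the free allowance."""
    billable = max(0, num_users - FREE_USER_LIMIT)
    return billable * PRICE_PER_USER_MONTH

print(monthly_fee(2))   # 0.0  -> inside the free tier
print(monthly_fee(10))  # 84.0 -> 7 billable users
```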
A key driver of SaaS growth is SaaS vendors' ability to provide a price that is competitive with on-premises software. This is consistent with the traditional rationale for outsourcing IT systems, which involves applying economies of scale to application operation, i.e., an outside service provider may be able to offer better, cheaper, more reliable applications.
Architecture
Most SaaS providers offer a multi-tenant architecture. With this model, a single version of the application, with a single configuration (hardware, network, operating system), is used for all customers ("tenants"). To support scalability, the application can be installed on multiple machines (called horizontal scaling). In some cases, a second version of the application is set up to offer a select group of customers access to pre-release versions of the applications (e.g., a beta version) for testing purposes. This is contrasted with traditional software, where multiple physical copies of the software — each potentially of a different version, with a potentially different configuration, and often customized — are installed across various customer sites.
Although an exception rather than the norm, some SaaS providers do not use multitenancy or use other mechanisms—such as virtualization—to cost-effectively manage a large number of customers in place of multitenancy. Whether multitenancy is a necessary component for software as a service is debatable.
Vertical vs horizontal SaaS
Horizontal SaaS and vertical SaaS are different models of cloud computing services. Horizontal SaaS targets a broad variety of customers, generally without regard to their industry; popular examples of horizontal SaaS vendors include Salesforce and HubSpot. Vertical SaaS, by contrast, targets a niche market, serving a narrower set of customers with industry-specific requirements.
Characteristics
Although not all software-as-a-service applications share all the following traits, the characteristics below are common among many of them:
Configuration and customization
SaaS applications similarly support what is traditionally known as application configuration: like traditional enterprise software, a single customer can alter the set of configuration options (parameters) that affect its functionality and look-and-feel. Each customer may have its own settings (parameter values) for the configuration options. The application can be customized only to the degree for which it was designed, based on a set of predefined configuration options.
For example, to support customers' common need to change an application's look-and-feel so that the application appears to carry the customer's brand (or—if so desired—to be co-branded), many SaaS applications let customers provide (through a self-service interface or by working with application provider staff) a custom logo and sometimes a set of custom colors. The customer cannot, however, change the page layout unless such an option was designed in.
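The "customizable only to the degree designed for" principle can be sketched as a whitelist of predefined options: a tenant may override the logo or color, but any option the vendor did not design in (such as page layout) is rejected. The option names below are hypothetical.

```python
# Sketch of SaaS-style configuration: tenants may override only the
# options the vendor designed in; anything else is refused.

DESIGNED_OPTIONS = {"logo_url": None, "primary_color": "#333333"}

def apply_customization(overrides: dict) -> dict:
    unknown = set(overrides) - set(DESIGNED_OPTIONS)
    if unknown:
        # e.g. "page_layout": not configurable unless designed for
        raise ValueError(f"options not supported: {sorted(unknown)}")
    return {**DESIGNED_OPTIONS, **overrides}

cfg = apply_customization({"primary_color": "#ff6600"})
print(cfg)  # tenant's own color, vendor defaults elsewhere
```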
Accelerated feature delivery
SaaS applications are often updated more frequently than traditional software, in many cases on a weekly or monthly basis. This is enabled by several factors:
The application is hosted centrally, so an update is decided and executed by the provider, not by customers.
The application only has a single configuration, making development testing faster.
The application vendor does not have to expend resources updating and maintaining backdated versions of the software, because there is only a single version.
The application vendor has access to all customer data, expediting design and regression testing.
The service provider has access to user behavior within the application (usually via web analytics), making it easier to identify areas worthy of improvement.
Accelerated feature delivery is further enabled by agile software development methodologies. Such methodologies, which evolved in the mid-1990s, provide a set of software development tools and practices to support frequent software releases.
Open integration protocols
Because SaaS applications cannot access a company's internal systems (databases or internal services), they predominantly offer integration protocols and application programming interfaces (APIs) that operate over a wide area network.
The ubiquity of SaaS applications and other Internet services and the standardization of their API technology has spawned the development of mashups, which are lightweight applications that combine data, presentation, and functionality from multiple services, creating a compound service. Mashups further differentiate SaaS applications from on-premises software as the latter cannot be easily integrated outside a company's firewall.
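A mashup of the kind described above can be sketched by joining data from two services into one compound view. A real mashup would fetch these payloads over the services' HTTP APIs; here the JSON responses are inlined (with invented field names) so the example is self-contained.

```python
# Sketch of a mashup: combine a CRM service's customer list with a
# billing service's balances into one compound view.

import json

# Stand-ins for two API responses (normally fetched over HTTPS):
crm_response = json.loads('[{"id": 1, "name": "Acme Corp"}]')
billing_response = json.loads('[{"customer_id": 1, "balance": 250.0}]')

# Index balances by customer, then join onto the customer records.
balances = {b["customer_id"]: b["balance"] for b in billing_response}
combined = [
    {"name": c["name"], "balance": balances.get(c["id"], 0.0)}
    for c in crm_response
]
print(combined)  # [{'name': 'Acme Corp', 'balance': 250.0}]
```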
Collaborative (and "social") functionality
Inspired by the development of the different internet networking services and the so-called web 2.0 functionality, many SaaS applications offer features that let their users collaborate and share information.
For example, many project management applications delivered in the SaaS model offer—in addition to traditional project planning functionality—collaboration features letting users comment on tasks and plans and share documents within and outside an organization. Several other SaaS applications let users vote on and offer new feature ideas.
Although some collaboration-related functionality is also integrated into on-premises software, (implicit or explicit) collaboration between users or different customers is only possible with centrally hosted software.
OpenSaaS
OpenSaaS refers to software as a service (SaaS) based on open-source code. Like other SaaS applications, an OpenSaaS application is web-based and is hosted, supported, and maintained by a service provider. While the roadmap for OpenSaaS applications is defined by its community of users, upgrades and product enhancements are managed by a central provider. The term was coined in 2011 by Dries Buytaert, creator of the Drupal content management framework.
Andrew Hoppin, a former Chief Information Officer for the New York State Senate, has been a vocal advocate of OpenSaaS for government, calling it "the future of government innovation." He points to WordPress as a successful example of an OpenSaaS software delivery model that gives customers "the best of both worlds, and more options. The fact that it is open source means that they can start building their websites by self-hosting WordPress and customizing their website to their heart’s content. Concurrently, the fact that WordPress is SaaS means that they don’t have to manage the website at all -- they can simply pay WordPress.com to host it."
Adoption drivers
Several important changes to the software market and technology landscape have facilitated the acceptance and growth of SaaS:
The growing use of web-based user interfaces by applications, along with the proliferation of associated practices (e.g., web design), continuously decreased the need for traditional client-server applications. Consequently, traditional software vendors' investments in software based on fat clients became a disadvantage (mandating ongoing support), opening the door for new software vendors offering a user experience perceived as more "modern".
The standardization of web page technologies (HTML, JavaScript, CSS), the increasing popularity of web development as a practice, and the introduction and ubiquity of web application frameworks like Ruby on Rails or Laravel (PHP) gradually reduced the cost of developing new software services, and enabled new providers to challenge traditional vendors.
The increasing penetration of broadband Internet access enabled remote centrally hosted applications to offer speed comparable to on-premises software.
The standardization of the HTTPS protocol as part of the web stack provided universally available lightweight security that is sufficient for most everyday applications.
The introduction and wide acceptance of lightweight integration protocols such as Representational State Transfer (REST) and SOAP enabled affordable integration between SaaS applications (residing in the cloud) with internal applications over wide area networks and with other SaaS applications.
Adoption challenges
Some limitations slow down the acceptance of SaaS and prohibit it from being used in some cases:
Because data is stored on the vendor's servers, data security becomes an issue.
SaaS applications are hosted in the cloud, far away from the application users. This introduces latency into the environment; for example, the SaaS model is not suitable for applications that demand response times in the milliseconds (OLTP).
Multi-tenant architectures, which drive cost efficiency for service providers, limit customization of applications for large clients, inhibiting such applications from being used in scenarios (applicable mostly to large enterprises) for which such customization is necessary.
Some business applications require access to or integration with customer's current data. When such data are large in volume or sensitive (e.g. end-users' personal information), integrating them with remotely hosted software can be costly or risky, or can conflict with data governance regulations.
Constitutional search/seizure warrant laws do not protect all forms of SaaS dynamically stored data. The end result is that a link is added to the chain of security where access to the data, and, by extension, misuse of these data, are limited only by the assumed honesty of third parties or government agencies able to access the data on their own recognizance.
Switching SaaS vendors may involve the slow and difficult task of transferring very large data files over the Internet.
Organizations that adopt SaaS may find they are forced into adopting new versions, which might result in unforeseen training costs, an increase in the probability that a user might make an error or instability from bugs in the newer software.
Should the vendor of the software go out of business or suddenly EOL the software, the user may lose access to their software unexpectedly, which could destabilize their organization's current and future projects, as well as leave the user with older data they can no longer access or modify.
Relying on an Internet connection means that data is transferred to and from a SaaS firm at Internet speeds, rather than the potentially higher speeds of a firm's internal network.
The ability of the SaaS hosting company to guarantee the uptime level agreed in the service-level agreement (SLA).
The standard model also has limitations:
Compatibility with hardware, other software, and operating systems.
Licensing and compliance problems (unauthorized copies of the software program putting the organization at risk of fines or litigation).
Maintenance, support, and patch revision processes.
Healthcare applications
According to a survey by the Healthcare Information and Management Systems Society, 83% of US IT healthcare organizations are now using cloud services, with 9.3% planning to, whereas 67% of IT healthcare organizations are currently running SaaS-based applications.
Data escrow
Software as a service data escrow is the process of keeping a copy of critical software-as-a-service application data with an independent third party. Similar to source code escrow, where critical software source code is stored with an independent third party, SaaS data escrow applies the same logic to the data within a SaaS application. It allows companies to protect and insure all the data that resides within SaaS applications, protecting against data loss.
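The escrow deposit described above can be sketched as an export of application data accompanied by an integrity checksum, so the independent third party (or the customer, after a vendor failure) can verify the copy. This is a simplified illustration; real escrow services add encryption, scheduling, and contractual release conditions, and the function names here are invented.

```python
# Sketch of SaaS data escrow: serialize critical application data and
# pair it with a SHA-256 digest before handing it to a third party.

import hashlib
import json

def make_escrow_deposit(records: list) -> dict:
    payload = json.dumps(records, sort_keys=True)
    return {
        "payload": payload,
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),
    }

def verify_deposit(deposit: dict) -> bool:
    # Recompute the digest to confirm the copy was not corrupted
    # in storage or transit.
    digest = hashlib.sha256(deposit["payload"].encode()).hexdigest()
    return digest == deposit["sha256"]

deposit = make_escrow_deposit([{"invoice": 101, "total": 42.5}])
print(verify_deposit(deposit))  # True
```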
There are many and varied reasons for considering SaaS data escrow including concerns about vendor bankruptcy, unplanned service outages, and potential data loss or corruption.
Many businesses either ensure that they are complying with their data governance standards or try to enhance their reporting and business analytics against their SaaS data.
Criticism
One notable criticism of SaaS comes from Richard Stallman of the Free Software Foundation, who refers to it as Service as a Software Substitute (SaaSS). He considers the use of SaaSS to be a violation of the principles of free software. According to Stallman:
See also
Application security
Application service provider
Cloud-based integration
List of 'as a service' service types
Servicizing
Subscription computing
References
As a service
Cloud applications
Software delivery methods
Software distribution
Software industry
Revenue models |
27631646 | https://en.wikipedia.org/wiki/EvaluNet | EvaluNet | EvaluNet (Pty) Ltd is a South African-based developer of educational software headquartered in Cape Town. The company was founded in 1995 as Interactive Learning Solutions and changed its name to EvaluNet in 1999. In September 1999, Naspers, a multinational JSE-listed media company, purchased a 49% shareholding in EvaluNet. In 2002, the company founder and MD, Dereck Marnewick, purchased this shareholding back from Naspers.
In 2003, the South African National Department of Education commissioned EvaluNet and four other local companies to form a national body representing educational vendors in their dealings with government and educational institutions. The organization is called the Associated Distributors of Educational Supplies in Southern Africa and currently represents 40 members.
Products
In 1996, EvaluNet became the first company to introduce the concept of assessment and testing software to the South African school and business market. By September 2000, 300 South African schools were using EvaluNet's flagship software product e-SCHOOL to conduct paperless tests and examinations.
In 2005, e-SCHOOL was criticized for being too open-ended for most South African teachers, who require more content-rich resources. The company has since revised its product offering and has released new titles under the Biblion, GetAhead, XT (the replacement for e-SCHOOL) and MathPRO brands.
EvaluNet has also played a role in developing online resources for teachers in South Africa, namely with its online publication Teacher's Monthly and its involvement in developing an online publication entitled Teaching English Today for the English Academy of Southern Africa, the only academy for the English language in the world.
Partnerships
In 1997, EvaluNet was contracted by MWEB, one of South Africa's biggest Internet service providers, to develop MWEB School, which became South Africa's first online learning portal. This partnership remained until late 1999. In that same year, EvaluNet became the first South African educational software developer to engage in e-commerce to market products online.
In 2006, EvaluNet became a partner in the NEPAD E-School Programme initiative by way of the Oracle Consortium. This consortium comprised a total of fourteen companies, including the Oracle Corporation.
In September 2010, EvaluNet announced a partnership with Woolworths (South Africa) MySchool MyPlanet MyVillage corporate social investment initiative. This partnership saw EvaluNet software marketed via the MySchool initiative with a resulting contribution of income used to uplift schools in disadvantaged communities.
In 2011, EvaluNet announced a partnership with EPA Technologies, a Jamaican-based educational software developer serving the Caribbean region. This partnership took EvaluNet products into the overseas market and formed a new direction for business development. This new venture is headed up by Adrian Marnewick who is the EvaluNet Business Development Manager and son of EvaluNet MD and founder Dereck Marnewick.
References
External links
EvaluNet, homepage
Educational software companies |
3266317 | https://en.wikipedia.org/wiki/PS/2%20port | PS/2 port | The PS/2 port is a 6-pin mini-DIN connector used for connecting keyboards and mice to a PC compatible computer system. Its name comes from the IBM Personal System/2 series of personal computers, with which it was introduced in 1987. The PS/2 mouse connector generally replaced the older DE-9 RS-232 "serial mouse" connector, while the PS/2 keyboard connector replaced the larger 5-pin/180° DIN connector used in the IBM PC/AT design. The PS/2 keyboard port is electrically and logically identical to the IBM AT keyboard port, differing only in the type of electrical connector used. The PS/2 platform introduced a second port with the same design as the keyboard port for connecting a mouse; thus the PS/2-style keyboard and mouse interfaces are electrically similar and employ the same communication protocol. However, unlike the otherwise similar Apple Desktop Bus connector used by Apple, a given system's keyboard and mouse port may not be interchangeable since the two devices use different sets of commands and the device drivers generally are hard-coded to communicate with each device at the address of the port that is conventionally assigned to that device. (That is, keyboard drivers are written to use the first port, and mouse drivers are written to use the second port.)
Communication protocol
Each port implements a bidirectional synchronous serial channel. The channel is slightly asymmetrical: it favors transmission from the input device to the computer, which is the majority case. The bidirectional IBM AT and PS/2 keyboard interface is a development of the unidirectional IBM PC keyboard interface, using the same signal lines but adding capability to send data back to the keyboard from the computer; this explains the asymmetry.
The interface has two main signal lines, Data and Clock. These are single-ended signals driven by open-collector drivers at each end. Normally, the transmission is from the device to the host. To transmit a byte, the device simply outputs a serial frame of data (including 8 bits of data and a parity bit) on the Data line serially as it toggles the Clock line once for each bit. The host controls the direction of communication using the Clock line; when the host pulls it low, communication from the attached device is inhibited. The host can interrupt the device by pulling Clock low while the device is transmitting; the device can detect this by Clock staying low when the device releases it to go high as the device-generated clock signal toggles. When the host pulls Clock low, the device must immediately stop transmitting and release Clock and Data to both float high. (So far, all of this is the same as the unidirectional communication protocol of the IBM PC keyboard port, though the serial frame formats differ.) The host can use this state of the interface simply to inhibit the device from transmitting when the host is not ready to receive. (For the IBM PC keyboard port, this was the only normal use of signalling from the computer to the keyboard. The keyboard could not be commanded to retransmit a keyboard scan code after it had been sent, since there was no reverse data channel to carry commands to the keyboard, so the only way to avoid losing scan codes when the computer was too busy to receive them was to inhibit the keyboard from sending them until the computer was ready. This mode of operation is still an option on the IBM AT and PS/2 keyboard port.)
To send a byte of data back to the device, the host pulls Clock low, waits briefly, pulls Data low and releases the Clock line again. The device then generates a Clock signal while the host outputs a frame of bits on the Data line, one bit per Clock pulse, similar to what the attached device would do to transmit in the other direction. However, while device-to-host transmission reads bits on falling Clock edges, transmission in the other direction reads bits on rising edges. After the data byte, the host releases the Data line, and the device will pull the Data line low for one clock period to indicate successful reception. A keyboard normally interprets the received byte as a command or a parameter for a preceding command. The device will not attempt to transmit to the host until both Clock and Data have been high for a minimum period of time.
Transmission from the device to the host is favored because from the normal idle state, the device does not have to seize the channel before it can transmit—the device just begins transmitting immediately. In contrast, the host must seize the channel by pulling first the Clock line and then the Data line low and waiting for the device to have time to release the channel and prepare to receive; only then can the host begin to transmit data.
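The serial frame mentioned above can be made concrete with a short sketch. The Python code below (illustrative only; the function names are my own) builds and parses the 11-bit device-to-host frame used by the AT/PS/2 keyboard interface: a start bit (0), 8 data bits sent least-significant-bit first, an odd parity bit, and a stop bit (1).

```python
def ps2_frame(byte):
    """Build the 11-bit device-to-host PS/2 frame for one data byte:
    start bit (0), 8 data bits LSB-first, odd parity bit, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    parity = 1 - (sum(data_bits) % 2)                # odd parity over the data bits
    return [0] + data_bits + [parity, 1]

def ps2_decode(bits):
    """Decode an 11-bit frame back to a byte; raise on framing or parity error."""
    if len(bits) != 11 or bits[0] != 0 or bits[10] != 1:
        raise ValueError("framing error")
    data_bits = bits[1:9]
    byte = sum(b << i for i, b in enumerate(data_bits))
    if (sum(data_bits) + bits[9]) % 2 != 1:          # data + parity must have odd weight
        raise ValueError("parity error")
    return byte
```

Odd parity means the nine bits (data plus parity) always contain an odd number of ones, so the receiver can detect any single-bit transmission error.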
Port availability
Older laptops and most contemporary motherboards have a single port that supports either a keyboard or a mouse. Sometimes the port also allows one of the devices to be connected to the two normally unused pins in the connector to allow both to be connected at once through a special splitter cable. This configuration is common on IBM/Lenovo ThinkPad notebooks, among many others.
The PS/2 keyboard interface is electrically the same as the 5-pin DIN connector on earlier AT keyboards, and keyboards designed for one can be connected to the other with a simple wiring adapter. Such wiring adapters and adapter cables were once commonly available for sale. Note that IBM PC and PC XT keyboards use a different unidirectional protocol with the same DIN connector as AT keyboards, so though a PC or XT keyboard can be connected to PS/2 port using a wiring adapter intended for an AT keyboard, the earlier keyboard will not work with the PS/2 port. (At least, it cannot work with normal PS/2 keyboard driver software, including the system BIOS keyboard driver.)
In contrast to this, the PS/2 mouse interface is substantially different from RS-232 (which was generally used for mice on PCs without PS/2 ports), but nonetheless many mice were made that could operate on both with a simple passive wiring adapter, where the mice would detect the presence of the adapter based on its wiring and then switch protocols accordingly.
PS/2 mouse and keyboard connectors have also been used in non-IBM PC-compatible computer systems, such as the DEC AlphaStation line, early IBM RS/6000 CHRP machines and SGI Indy, Indigo 2, and newer (Octane, etc.) computers. Macintosh clone computers based on the "LPX-40" logic board design featured PS/2 mouse and keyboard ports, including the Motorola StarMax and the Power Computing PowerBase.
Legacy port status and USB
PS/2 is now considered a legacy port, with USB ports now normally preferred for connecting keyboards and mice. This dates back at least as far as the Intel/Microsoft PC 2001 specification of 2000.
However, PS/2 ports continue to be included on many computer motherboards, and are favored by some users, for various reasons including the following:
PS/2 ports may be favored for security reasons in a corporate environment as they allow USB ports to be totally disabled, preventing the connection of any USB removable disks and malicious USB devices.
The PS/2 interface places no restriction on key rollover; USB keyboards have no such restriction either, unless operated in BOOT mode, which is the exception.
To free USB ports for other uses like removable USB devices.
Some USB keyboards may not be able to operate the BIOS on certain motherboards due to driver issues or lack of support. The PS/2 interface has near-universal compatibility with BIOS.
Latency of mice
USB mice send data more quickly than PS/2 mice because standard USB mice are polled at a default rate of 125 hertz, while standard PS/2 mice send interrupts at a default rate of 100 Hz when they have data to send to the computer. However, PS/2 mice and keyboards are favored by many gamers because they have essentially zero latency through the port: no polling by the OS is needed, since the device notifies the OS when it is time to receive a packet of data from it.
Also, USB mice do not cause the USB controller to interrupt the system when they have no status change to report according to the USB HID specification's default profile for mice. Both PS/2 and USB allow the sample rate to be overridden, with PS/2 supporting a sampling rate of up to 200 Hz and USB supporting a polling rate up to 1 kHz as long as the mouse runs at full-speed USB speeds or higher.
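The default and maximum rates quoted above translate directly into worst-case intervals between reports. A trivial sketch of the arithmetic (the function name is illustrative):

```python
def report_interval_ms(rate_hz):
    """Worst-case time in milliseconds between consecutive reports
    at a given sample or polling rate in hertz."""
    return 1000.0 / rate_hz

print(report_interval_ms(100))   # PS/2 default sample rate -> 10.0 ms
print(report_interval_ms(125))   # USB default polling rate -> 8.0 ms
print(report_interval_ms(200))   # PS/2 maximum sample rate -> 5.0 ms
print(report_interval_ms(1000))  # USB overridden to 1 kHz  -> 1.0 ms
```

So at the defaults a PS/2 mouse can be up to 10 ms behind and a USB mouse up to 8 ms, while a 1 kHz USB poll rate brings the worst case down to 1 ms.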
USB key rollover limitations
The USB HID keyboard interface requires that it explicitly handle key rollover, with the full HID keyboard class supporting n-key rollover. However, the USB boot keyboard class (designed to allow the BIOS to easily provide a keyboard in the absence of OS USB HID support) only allows 6-key rollover. Some keyboard peripherals support only the latter class, and some OSes may fail to switch to using the full HID keyboard class with a device after boot.
Conversion between PS/2 and USB
Many keyboards and mice were specifically designed to support both the USB and the PS/2 interfaces and protocols, selecting the appropriate connection type at power-on. Such devices are generally equipped with a USB connector and ship with a passive wiring adapter to allow connection to a PS/2 port. Such passive adapters are not standardized and may therefore be specific to the device they came with. Connecting a USB-only device to a PS/2 port, by contrast, would require a protocol converter, actively translating between the protocols. Such converters only support certain classes of USB devices such as keyboards and mice, but are not model- or vendor-specific.
Older PS/2-only peripherals can be connected to a USB port via an active converter, which generally provides a pair of PS/2 ports (which may be designated as one keyboard and one mouse, even though both ports may support both protocols) at the cost of one USB port on the host computer.
Color code
Original PS/2 connectors were black or had the same color as the connecting cable (mainly white). Later the PC 97 standard introduced a color code: the keyboard port, and the plugs on compliant keyboards, were purple; mouse ports and plugs were green. (Some vendors initially used a different color code; Logitech used the color orange for the keyboard connector for a short period, but soon switched to purple.) Today this code is still used on most PCs. The pinouts of the connectors are the same, but most computers will not recognize devices connected to the wrong port.
Hardware issues
Hotplugging
PS/2 ports are designed to connect the digital I/O lines of the microcontroller in the external device directly to the digital lines of the microcontroller on the motherboard. They are not designed to be hot swappable. Hot swapping PS/2 devices usually does not cause damage because more modern microcontrollers tend to have more robust I/O lines built into them which are harder to damage than those of older controllers; however, hot swapping can still potentially cause damage on older machines, or machines with less robust port implementations.
If they are hot swapped, the devices must be similar enough that the driver running on the host system recognizes and can be used with the new device; otherwise, the new device will not function properly. While this is seldom an issue with standard keyboard devices, the host system rarely recognizes a new device attached to the PS/2 mouse port. In practice, most keyboards can be hot swapped, but this should be avoided.
Durability
PS/2 connectors are not designed to be plugged in and out very often, which can lead to bent or broken pins. Additionally, PS/2 connectors only insert in one direction and must be rotated correctly before attempting connection. (If a user attempts to insert the connector in the wrong orientation and then tries to rotate it to the correct orientation without first pulling it out, then bent pins could result.)
Most but not all connectors include an arrow or flat section which is usually aligned to the right or top of the jack before being plugged in. The exact direction may vary on older or non-ATX computers and care should be taken to avoid damaged or bent pins when connecting devices. This issue is slightly alleviated in modern times with the advent of the PS/2-to-USB adapter: users can just leave a PS/2 connector plugged into the PS/2-to-USB adapter at all times and not risk damaging the pins this way. A USB-to-PS/2 adapter does not have this problem.
Fault isolation
In a standard implementation both PS/2 ports are usually controlled by a single microcontroller on the motherboard. This makes design and manufacturing extremely simple and cheap. However, a rare side effect of this design is that a malfunctioning device can cause the controller to become confused, resulting in both devices acting erratically. (A well designed and programmed controller will not behave in this way.) The resulting problems can be difficult to troubleshoot (e.g., a bad mouse can cause problems that appear to be the fault of the keyboard and vice versa).
See also
BIOS interrupt call
DIN connector on IBM PC keyboards
Bus mouse
Connections on mice
DE-9 connector
USB
References
External links
.
.
.
Computer connectors
Deutsches Institut für Normung
Computer keyboards
Pointing devices
Computer-related introductions in 1987
Computer hardware standards |
598134 | https://en.wikipedia.org/wiki/Join%20%28Unix%29 | Join (Unix) | join is a command in Unix and Unix-like operating systems that merges the lines of two sorted text files based on the presence of a common field. It is similar to the join operator used in relational databases but operating on text files.
Overview
The join command takes as input two text files and a number of options. If no command-line argument is given, this command looks for a pair of lines, one from each file, having the same first field (a sequence of non-space characters), and outputs a line composed of the first field followed by the rest of the two lines.
The program arguments specify which character to use in place of the space to separate the fields of the line, which field to use when looking for matching lines, and whether to output lines that do not match. Using redirection, the output can be stored in another file rather than printed.
As an example, the two following files list the known fathers and the mothers of some people. Both files have been sorted on the join field — this is a requirement of the program.
george jim
kumar gunaware
albert martha
george sophie
The join of these two files (with no argument) would produce:
george jim sophie
Indeed, only "george" is common as a first word of both files.
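The default behavior can be emulated with a short Python sketch. The nested-loop matching below is a simplification for clarity; the real utility performs a single merge pass over the two sorted inputs. The variable names `fathers` and `mothers` are my own labels for the two example files above.

```python
fathers = ["george jim", "kumar gunaware"]    # person, father
mothers = ["albert martha", "george sophie"]  # person, mother

def join_lines(a_lines, b_lines):
    """Emulate `join` with no options: match lines on the first
    space-separated field, emitting key + remainders of both lines."""
    out = []
    for a in a_lines:
        a_key, _, a_rest = a.partition(" ")
        for b in b_lines:
            b_key, _, b_rest = b.partition(" ")
            if a_key == b_key:
                out.append(" ".join([a_key, a_rest, b_rest]))
    return out

print(join_lines(fathers, mothers))  # ['george jim sophie']
```

Only the line keyed "george" appears in both inputs, so the output is the single joined line, matching the example above.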
History
join is intended to be a relational database operator. It has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification.
The version of join bundled in GNU coreutils was written by Mike Haertel. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
See also
Textutils
Join (SQL)
Relational algebra
List of Unix commands
References
External links
join command
Unix text processing utilities
Unix SUS2008 utilities
Plan 9 commands |
28290748 | https://en.wikipedia.org/wiki/TOOLS%20conference%20series | TOOLS conference series | The TOOLS conference series is a long-running series of conferences on object technology, component-based development, model-based development and other advanced software technologies. The name originally stood for "Technology of Object-Oriented Languages and Systems", although later it was usually no longer expanded, the conference being known simply as "the TOOLS conference". The conferences ran from 1988 to 2012, with a hiatus in 2003–2007, and the series was revived in 2019.
The first conference was held in Paris in 1988, organized by Eiffel Software. The TOOLS conference series proper started the following year, in 1989. Over the following 15 years it held 45 sessions: TOOLS EUROPE (usually in Paris), TOOLS USA (usually in Santa Barbara, California), TOOLS PACIFIC (usually Sydney), TOOLS ASIA (Peking) and TOOLS EASTERN EUROPE.
TOOLS has played a major role in the development of object technology; many seminal software concepts now taken for granted were first discussed at TOOLS. Invited speakers have included countless luminaries of science and industry such as Kent Beck, Robert Binder, Peter Coad, Alistair Cockburn, Steve Cook, James Coplien, Brad Cox, Miguel de Icaza, John Dvorak, Martin Fowler, Erich Gamma, Adele Goldberg, Richard Helms, Tony Hoare, Ivar Jacobson, Philippe Kahn, Alan Kay, Bertrand Meyer, Jim Miller, Robin Milner, David Parnas, Trygve Reenskaug, Michael Stal, Dave Thomas, David Taylor, Tony Wasserman and many others.
In 2007, after an interruption of four years, TOOLS started again as an annual conference with an extended scope, encompassing not only objects and components but all modern approaches to software technology. TOOLS turned at that time from a single conference into a federated event hosting the other conferences listed below. For these later editions of the conference, see the TOOLS page. In 2012, at its 50th edition, "The Triumph of Objects" was declared and the series was provisionally closed.
In 2013, after the temporary end of TOOLS, the Software Technologies: Applications and Foundations (STAF) conference was established to host some of the conferences previously hosted by TOOLS.
In 2019, the TOOLS conference series restarted with TOOLS 50+1 organized by Innopolis University near Kazan, Russia.
Goals and scope
TOOLS is devoted to the study and application of advanced software development methods, techniques, tools and languages, with a special emphasis on object-oriented techniques, component-based development, model-based development and design patterns. The conference distinguishes itself by combining selective peer review process, as in academic conferences, with a strong practical slant and concern about relevance to industry.
Hosted conferences
The TOOLS federated conference has hosted the following associated conferences in recent years:
Software Composition (SC): since 2007.
Test and Proofs (TAP): since 2007.
Software Engineering Advances For Outsourced and Offshore Development (SEAFOOD): 2008 - 2009.
International Conference on Model Transformation (ICMT): since 2008.
List of past conferences
This list includes a reference to the proceedings when known.
TOOLS 50: TOOLS EUROPE 2012 The Triumph of Objects: Prague (Czech Republic), May 29–31, 2012, http://toolseurope2012.fit.cvut.cz/. Conference chair: Bertrand Meyer. Proceedings: Springer Verlag LNCS no. 7304, ed. Carlo A. Furia, Sebastian Nanz.
TOOLS 49: TOOLS EUROPE 2011: Zurich (Switzerland), http://toolseurope2011.lcc.uma.es/
TOOLS 48: TOOLS EUROPE 2010: Málaga (Spain), June 28-July 2, 2010. Program chair: Jan Vitek; Organizing chair: Antonio Valecillo. Proceedings: Springer Verlag LNCS no. 6141, ed. Jan Vitek.
TOOLS 47: TOOLS EUROPE 2009: Zurich (Switzerland), June 29-July 3, 2009. Program chair: Manuel Oriol; Conference chair: Bertrand Meyer. Proceedings: Springer Verlag LNBIP no. 33, ed. Manuel Oriol, Bertrand Meyer.
TOOLS 46: TOOLS EUROPE 2008: Zurich (Switzerland), June 30-July 4, 2008. Program chair: Richard Paige; Conference chair: Bertrand Meyer. Proceedings: Springer Verlag LNBIP no. 11, ed. Richard Paige.
TOOLS 45: TOOLS EUROPE 2007: Zurich (Switzerland), June 25–27, 2007. Program chair: Jean Bézivin. Proceedings published in the Journal of Object Technology, Vol 6, No. 9, July 2007. http://www.jot.fm/contents/issue_2007_10.html
TOOLS 19: TOOLS EUROPE 1996: Paris (France); Program chair: Richard Mitchell. Proceedings edited by Prentice Hall.
TOOLS 17: TOOLS USA 1995: Santa Barbara (USA); Program chair: Raimund Ege. Proceedings edited by Prentice Hall,
TOOLS 16: TOOLS EUROPE 1995: Versailles (France); Program chairs: Ian Graham & Boris Magnusson. Proceedings edited by Prentice Hall,
TOOLS 15: TOOLS PACIFIC 1994: Melbourne (Australia); Program chair: Christine Mingins. Proceedings edited by Prentice Hall,
TOOLS 13: TOOLS EUROPE 1994: Versailles (France); Program chairs: Boris Magnusson & Jean-François Perrot. Proceedings edited by Prentice Hall,
TOOLS 12: TOOLS PACIFIC 1993: Melbourne (Australia); Program chair: William Haebich. Proceedings edited by Prentice Hall,
TOOLS 11: TOOLS USA 1993: Santa Barbara (USA); Program chair: Raimund Ege. Proceedings edited by Prentice Hall,
TOOLS 10: TOOLS EUROPE 1993: Versailles (France); Program chairs: Boris Magnusson & Jean-François Perrot. Proceedings edited by Prentice Hall,
TOOLS 9: TOOLS PACIFIC 1992: Sydney (Australia); Program chair: John Potter. Proceedings edited by Prentice Hall,
TOOLS 4: TOOLS EUROPE 1991: Paris (France); Program chair: Jean Bézivin. Proceedings edited by Prentice Hall,
TOOLS 3: TOOLS PACIFIC 1990: Sydney (Australia); Program chair: John Potter.
TOOLS 2: TOOLS EUROPE 1990: Paris (France); Program chair: Jean Bézivin. Proceedings edited by Angkor,
TOOLS 1: TOOLS EUROPE 1988: Paris (France); Program chair: Jean Bézivin.
References
External links
TOOLS series page
2010 conference (Málaga)
Past TOOLS Conferences
Computer science conferences
Programming languages conferences |
46872333 | https://en.wikipedia.org/wiki/Dhar%20Polytechnic%20Mahavidhyalaya | Dhar Polytechnic Mahavidhyalaya | Government Polytechnic Dhar, (शासकीय पॉलिटेक्निक कॉलेज) is a polytechnic college in Dhar. It was established in the city of Dhar, near Indore, in 1998 and is affiliated with Rajiv Gandhi Proudyogiki Vishwavidyalaya.
Dhar Polytechnic College is one of the oldest technical colleges in the region. The institute runs special hi-tech four-year advanced diploma (post-diploma) courses in Industrial Electronics Engineering, Manufacturing Engineering and Mechatronics Engineering, and three-year diploma courses in Information Technology and Computer Science, with an intake capacity of 60 in each discipline. The courses are based on a sandwich pattern designed to meet the requirements of growing and upcoming industries in advanced disciplines.
Geography / Campus
Dhar Polytechnic College is located at Indore Naka, Dhar, Madhya Pradesh 454001, India
The institute is spread over 7.985 hectares of land. It is situated about 1 km from Indore Naka in Dhar. The building and campus were constructed under a World Bank project in 1998. The following infrastructure facilities are available on the campus:
Well-equipped laboratories
Council hall
Conference room
LRUC (Learning and Utilization Cell)
Playgrounds
Modern computer centre
Workshop
Boys' hostel
Staff quarters
Computerised library
A garden
Two sheds for parking of vehicles
Department & Laboratory
Industrial Electronics Engineering
The Department of Industrial Electronics Engineering was established in 1995. The department offers a 4-year advanced diploma course in Industrial Electronics Engineering. Industrial electronics engineering focuses on various aspects of electronics engineering and its modern approach towards industrial automation.
The department has the following labs:
Basic Electronics Lab
Electronics Workshop
Instrumentation Lab
Control Lab
Microprocessor Lab
Digital Electronics Lab
Linear Integrated circuit Lab
Mechatronics Engineering
The Department of Mechatronics Engineering was established in 1995. The department offers a 4-year advanced diploma course in Mechatronics Engineering based on a sandwich pattern. Mechatronics engineering focuses on designing mechanical systems with complex electronic control.
The department has the following labs:
Applied Mechanics Lab
Electronics Workshop
CAD Lab
Material Testing Lab
Robotics Lab
CNC Lab
FMS Lab
Machine Tool Lab
Manufacturing Engineering
The Department of Manufacturing Engineering was established in November 1994. The department offers a 4-year advanced diploma course in Manufacturing Engineering. Manufacturing engineering focuses on engineering design, manufacturing processes and materials, and the management and control of man-made systems.
The department has the following labs:
Engineering Metrology
Maintenance Lab
Material Testing Lab
Control Lab
Industrial Control Lab
Fluid Mechanics and Hydraulic Machines Lab
Computer Science and Engineering
The Department of Computer Science was established in 2000. The department offers a 3-year diploma in Computer Science and Engineering. Computer Labs No. 1 and 2 contain 25 computers each, all interconnected via LAN. Courses such as Linux, RDBMS, Software Engineering and Computer Networks are taught in this programme.
Information Technology
The Department of Information Technology was established in 2000. The department offers a 3-year diploma in Information Technology. Computer Labs No. 1 and 2 contain 25 computers each, all interconnected via LAN. Courses such as Linux, RDBMS, Software Engineering and Computer Networks are taught in this programme.
Library
The library occupies a three-storey building housing more than 25,000 volumes along with national and international journals and periodicals. The library provides a photocopying facility and is in the process of computerisation. Dhar Polytechnic Mahavidhyalaya Library is a member.
Special features of new Library complex :
Air-conditioned reading hall, open for 12 hours a day throughout the year, including Sundays and holidays.
Fully computerised, with its own database and bibliographical search facilities.
More than 30,000 text and reference books along with periodicals and national and international journals.
Admission
Admission to the institute is based on the Pre-Polytechnic Test conducted by Vyavshaik Parikchmandal Bhopal (VYAPAM); details are published by the Directorate of Technical Education Madhya Pradesh, Bhopal in May–June every year.
Training and Placement
The Training and Placement Cell is the springboard for the career development of students, the ultimate objective of their pursuing technical education. Pre-placement training has become mandatory for students. The training part is taken care of by a dedicated wing of the cell:
Continuing Education Cell, which offers need-based training programmes and courses on modern technologies.
Hostel
The college provides separate hostel accommodation for both girls and boys of Polytechnic College Dhar.
There are 24 rooms on 3 floors, with 8 rooms on each floor; 3 rooms are two-seaters and another 3 are three-seaters, and one is for official use. There is one hall for communication or meetings.
References
Universities and colleges in Madhya Pradesh
Dhar district
1946 establishments in India |
8643895 | https://en.wikipedia.org/wiki/List%20of%20University%20of%20Wisconsin%E2%80%93Milwaukee%20people | List of University of Wisconsin–Milwaukee people | This is a list of people who attended, or taught at, the University of Wisconsin–Milwaukee, including those who attended Milwaukee State Normal School, Wisconsin State Teacher’s College, Wisconsin State College–Milwaukee and the University of Wisconsin-Extension Center in Milwaukee:
Notable alumni
Academics
George R. Blumenthal (B.S. Physics), astrophysicist, the 10th chancellor of University of California, Santa Cruz
Christopher Bratton (1994 MFA Film), President of School of the Museum of Fine Arts, Boston, former president of the San Francisco Art Institute
Robert R. Caldwell (1992 Ph.D. Physics), Professor of Physics and Astronomy at Dartmouth College, Fellow of the American Physical Society
Juan Carlos Campuzano (1978 Ph.D. Physics), fellow of American Physical Society; 2011 Buckley Prize winner; Argonne distinguished fellow; distinguished professor of physics at University of Illinois, Chicago
Carlos Castillo-Chavez (1977 MS Mathematics), Regents and Joaquin Bustoz Jr. Professor at Arizona State University; fellow of American Mathematical Society
Alok R. Chaturvedi (1989 Ph.D. MIS), professor of MIS at Purdue University; founder and director of Krannert School of Management SEAS Laboratory
James Elsner (1988 Ph.D.), Earl and Sophia Shaw Professor of Geography at Florida State University
Keith Hamm (1977 PhD Political Science), Edwards Professor of Political Science at Rice University
William D. Haseman (MBA in MIS), Wisconsin Distinguished Professor of MIS at University of Wisconsin–Milwaukee
James G. Henderson (M.A. Education), professor of education at Kent State University, creator of 3S Understanding curriculum structure
Gary Hoover (1993 B.A. economics), Chair of Economics Department at University of Oklahoma
George L. Kelling (M.S.W.), professor of criminal justice at Rutgers University
Jack Kilby (1950 M.S. Electrical Engineering), Nobel Laureate in Physics (2000), inventor of the integrated circuit
Oded Lowengart (PhD Marketing), professor of marketing at the Ben-Gurion University of the Negev (BGU) in Israel
Justin Marlowe (2004 PhD Political Science), Endowed Professor of Public Finance and associate dean at University of Washington
Laura Mersini-Houghton (2000 PhD Physics), theoretical physicist-cosmologist and professor at the University of North Carolina at Chapel Hill
James Otteson (1992 MA Philosophy), philosopher
Prakash Panangaden (PhD), fellow of the Royal Society of Canada, founding chair of the Association for Computing Machinery Special Interest Group on Logic and Computation
Jack Nusan Porter, sociologist, rabbi, and pioneer in genocide studies
Havidan Rodriguez (1986 MA Sociology), sociologist, 20th president of the University at Albany, SUNY
Jeanne W. Ross (PhD MIS), organizational theorist and Principal Research Scientist at MIT Sloan School of Management; Director of MIT Sloan School’s Center for Information Systems Research
Eileen Schwalbach (PhD Urban Education), 11th President of Mount Mary College
Jean Schwarzbauer (BS Chemistry), Eugene Higgins Professor of Molecular Biology at Princeton University
Eugenie Scott (BS, MS), physical anthropologist, executive director of the National Center for Science Education
Robert M. Stein (1977 PhD), Lena Gohlman Fox Professor of Political Science at Rice University, former Dean of Rice University School of Social Sciences
Jerry Straka (1986 MS Geophysical Sciences), tornado expert
Ron Tanner (1989 PhD American Literature), professor of writing at Loyola University Maryland, two-term president of the Association of Writers & Writing Programs
Aaron Twerski (BS Philosophy; born 1939), the Irwin and Jill Cohen Professor of Law at Brooklyn Law School, as well as a former Dean and professor of tort law at Hofstra University School of Law
Larry N. Vanderhoef (M.S. Biology), 5th chancellor of University of California, Davis.
Wayne A. Wiegand (1970 MA History), library historian, author, and academic.
Ahmed I. Zayed (1979 PhD), mathematician, chair and professor, Department of Mathematical Sciences, DePaul University
Architecture and urban planning
Will Bruder (BA Fine Art), architect
Andres Mignucci (1979 BS Arch), FAIA, architect and urbanist, fellow of the American Institute of Architects
Thomas Vonier (1974, M.Arch.), FAIA, RIBA, Paris-based architect, founding president of the Continental Europe chapter of American Institute of Architects, president of the International Union of Architects
Business
Steven Burd (1973 MA Economics), retired president and CEO of Safeway Inc.
Steven Davis (1980 BA Business), CEO of Bob Evans Restaurants; former president of Long John Silver's and A&W Restaurants
Roger Fitzsimonds (1960 BA Business, '71 MBA Finance), retired chairman and CEO, Firstar Corp (now U.S. Bank)
Dennis R. Glass (1971 BA Business, '73 MBA), president and CEO of Lincoln National Corporation
David Herro (1985 MA Economics), Morningstar International Stock Fund Manager of the Decade, Republican donor
Albert Beckford Jones (MA), chief advisor to the U.S. Civilian Research & Development Foundation
Gale E. Klappa (1972 BBA Communication), chairman, president and CEO of Wisconsin Energy Corporation
Dennis J. Kuester (1966 BBA), former chairman and CEO of Marshall & Ilsley Corporation; member of the Federal Reserve Advisory Council
William H. Lacy Jr. (1968, BBA), former president and chief executive officer, MGIC Investment Corp.
Satya Nadella (1990 MS Computer Science), CEO of Microsoft
Keith Nosbusch (MBA), president and CEO of Rockwell Automation, formerly Rockwell International
Richard Notebaert (1983, MBA), former chairman and CEO, Qwest Communications International, Inc., and Ameritech
Jack F. Reichert (1957), former president of Brunswick Corp.
Deven Sharma (1980 MS, Industrial Engineering), former executive and president of Standard & Poor's
James L. Ziemer (1975 BBA, 1986 EMBA), former president and CEO of Harley-Davidson, Inc.
Edward J. Zore (1968 BS Economics, 1970 MS Economics), president and CEO of Northwestern Mutual
Fine arts and pop culture
Film, television and performing arts
Pamela Britton, Broadway, film and television actress (D.O.A.; My Favorite Martian)
Frank Caliendo (1996 BA, Mass Communication-Broadcast Journalism), comedian
Willem Dafoe (1974), actor
Angna Enters, dancer, mime, painter, writer, novelist and playwright
Jed Allen Harris (1974 BFA), stage director
Tom Hewitt (1981 Professional Theatre Training Program), Broadway performer; 2001 Tony Award nominee for best actor in Rocky Horror Show
Scott Jaeck (1977 BS, Architecture), television and stage actor
Trixie Mattel (2012 BFA, Music, Inter-Arts), drag queen, singer-songwriter, comedian and television personality
Jim Rygiel (1977 BFA, Painting and Drawing), Oscar winner of digital effects for Lord of the Rings
Tina Salaks, author, former ASPCA officer, star of Animal Precinct on Animal Planet
Bryan W. Simon (1979 BA, Political Science), film and stage director
Chris M. Smith (1999 MFA, Film), filmmaker and founder of Bluemark Production and ZeroTV.com
Ray Szmanda, television personality
Music
Naima Adedapo (2007 BFA, Dance), American Idol finalist
Victor DeLorenzo, drummer for Violent Femmes
Herschel Burke Gilbert, (1939) composer of film and television theme songs
Frederick Hemke (1961 BS, Music Education), saxophonist
Guy Hoffman (1978 BFA, Art), drummer and vocalist, former Violent Femmes and BoDeans member
Andy Hurley (2014 BA, Committee Interdisciplinary), drummer for Fall Out Boy
Jerome Kitzke (1978 BFA, Music), composer
Willie Pickens (1958 BS Music Education), jazz pianist, composer, arranger, and educator
Jessica Suchy-Pilalis, harpist, Byzantine singer and composer
Warren Wiegratz (1968 BBA), saxophonist, leader of the band Streetlife
Visual arts
Ruth Asawa (1998 BFA, Art Education), Japanese American sculptor, a driving force behind the creation of Ruth Asawa San Francisco School of the Arts
Thomasita Fessler (1935 BA), painter
Michelle Grabner (1984 BFA Art, 1987 MA Art History), painter
Hanna Jubran (1980 BFA Art, 1983 MFA Sculpture), sculptor
Denis Kitchen (1968 BS, Mass Communications-Journalism), underground comics artist, publisher, author, founder of the Comic Book Legal Defense Fund
Dennis Kois (1995, BA), museum designer (Metropolitan Museum of Art, Smithsonian); Director, DeCordova Museum and Sculpture Park, Boston, MA
David Lenz (1963 BA, Spanish), painter
Jan Serr (1968 MFA, Art), visual artist
Roy Staab (1968 BFA, Art), artist
Donald George Vogl (1958 MS, Art Education), artist, retired professor of art at University of Notre Dame
Journalism and public media
Maureen Bunyan, television journalist, lead co-anchor at WJLA-TV
Milton Coleman (1968 BFA), deputy managing editor of The Washington Post, president of Inter American Press Association; former president of American Society of News Editors
Dorothy Fuldheim, journalist and anchor, "First Lady of Television News"
John K. Iglehart (1961 Journalism), founding editor of Health Affairs
Derrick Zane Jackson, opinion columnist/associate editor for the Boston Globe
Marc Jampole, public relations executive; former television news reporter
Ross A. Lewis (1923), Pulitzer Prize-winning editorial cartoonist
Jim Ott, WTMJ-TV meteorologist; Wisconsin state representative
Scott Shuster (B.A. in Mass Communication), broadcast journalist
Peter James Spielmann (BA, Journalism), international desk editor for Associated Press; professor of journalism at Columbia
Raquel Rutledge (1990 Journalism), Pulitzer Prize-winning journalist
Terry Zahn, television reporter and anchorman
Literature
Antler (1970 BA, Anthropology), poet
Emily Ballou, Australian-American poet, novelist and screenwriter
Sandra Tabatha Cicero, author
José Dalisay, Jr. (1988 Ph.D English), writer, poet, playwright
John Gurda (1978 MA Cultural Geography), writer and narrator of The Making of Milwaukee; eight-time winner of the Wisconsin Historical Society’s Award of Merit
Ellen Hunnicutt, writer, Drue Heinz Literature Prize winner
Adrienne L. Kaeppler, anthropologist, curator of Oceanic Ethnology at the National Museum of Natural History at the Smithsonian Institution
Caroline Knox, poet
Marie Kohler (1979 MA English Literature), writer and playwright; member of the Kohler family of Wisconsin
James Lowder (1999, MA Literary Studies), author and editor
Mary Rose O'Reilley, poet, Walt Whitman Award recipient
Lynne Rae Perkins (1991 MA), Newbery Award-winning writer
Virginia Satir (1936 BA Education), author and psychotherapist
Gordon Weaver, novelist and short-story writer, O. Henry Award recipient
Politics and government
Luis E. Arreaga, U.S. Ambassador to Iceland, U.S. Ambassador to Guatemala
Peter W. Barca, U.S. Congressman and Wisconsin State Representative from Kenosha
Jeannette Bell, Wisconsin State Representative, mayor of West Allis, Wisconsin
Tim Carpenter, Wisconsin State Senator from Milwaukee
Spencer Coggs, (1976 BS Community Education), Wisconsin State Senator, 2003–2013; Treasurer of the City of Milwaukee
Dennis Conta, politician and consultant, Secretary of the Wisconsin Department of Revenue
Alberta Darling, Wisconsin State Representative, 1990–92; Wisconsin State Senator, 1992–present, from River Hills
John E. Douglas, former special agent of U.S. Federal Bureau of Investigation (FBI), one of the first criminal profilers, and criminal psychology author
Alberto Fujimori, (1972 MS Mathematics), President of Peru, 1990–2000
Eric Genrich, Mayor of Green Bay, Wisconsin State Representative
Randall Gnant, (1967 BS History and Political Science), Arizona State Senate, 1995–2003; Senate President 2001–2003
Richard Grobschmidt, Wisconsin State Senate (1995–2003); Wisconsin State Assembly (1985–1995)
Jeff Halper (Ph.D. in Cultural and Applied Anthropology), anthropologist, political activist, co-founder and director of the Israeli Committee Against House Demolitions
Mildred Harnack, German resistance fighter during World War II, executed under order from Adolf Hitler
Nikiya Harris Dodd, Wisconsin State Senator from Milwaukee
Zuhdi Jasser, President and founder of American Islamic Forum for Democracy
Jerry Kleczka, U.S. Congressman 1984–2005
James A. Krueck, U.S. National Guard general
Chris Larson, Wisconsin State Senator from Milwaukee
Mary Lazich, President of the Wisconsin State Senate
Henry Maier (1964 MA Political Science), Milwaukee mayor 1960–1988
Golda Meir (1917, Education), fourth Prime Minister of Israel; one of the signers of the Declaration of Independence of the State of Israel
Robert J. Modrzejewski (1957 BS Education), U.S. Marine Colonel (retired), Medal of Honor from President Lyndon B. Johnson in 1968
Paa Kwesi Nduom, former Minister for Economic Planning & Regional Cooperation, Energy, and Public Sector Reform of Republic of Ghana
Jim Ott, Wisconsin State Representative from Mequon
Rudolph T. Randa, Article III federal judge in the United States District Court for the Eastern District of Wisconsin.
Jim Risch, U.S. Senator from Idaho
Margaret A. Rykowski, U.S. Navy admiral
Karen Sasahara, U.S. Ambassador to Jordan, Consul General in Jerusalem
Dawn Marie Sass, Treasurer of Wisconsin 2007–2011
Brad Schimel (1987 BA Political Science), the 44th Wisconsin Attorney General
Martin E. Schreiber, assemblyman and Milwaukee alderman, father of Martin J. Schreiber
Martin J. Schreiber (1960, 3+3 Program), 38th Lieutenant Governor of Wisconsin and 39th Governor of Wisconsin
Steve Sisolak (1974, BS Business), 30th Governor of Nevada
Lawrence H. Smith, congressman from Racine
Jim Steineke, Wisconsin State Representative from Kaukauna
Lena Taylor (1990, English), Wisconsin state senator; elected to Assembly in April 2003 special election; elected to Senate 2004
Wayne F. Whittow, Wisconsin senator, City Treasurer of Milwaukee
Brian E. Winski, United States Army major general, commander of the 101st Airborne Division
Science and technology
Michael Dhuey, electrical and computer engineer, co-inventor of the Macintosh II and the iPod
Luther Graef (1961 MS Structural Engineering), Founder of Graef Anhalt Schloemer & Associates Inc. and former president of American Society of Civil Engineers
Justin Jacobs (2005 MS Mathematics), recipient of Presidential Early Career Award for Scientists and Engineers
Phil Katz (1984, BS Computer Science), computer programmer known as the author of PKZIP
Jack Kilby (1950 MS Electrical Engineering), engineer
Satya Nadella (MS Computer Science), CEO of Microsoft
Gustavo R. Paz-Pujalt (1985 PhD Physical Chemistry), scientist and inventor
Cheng Xu (1997 Ph.D Turbomachinery), aerodynamic design engineer, American Society of Mechanical Engineers fellow
Scott Yanoff (1993 BS Computer Science), Internet pioneer
Sports
Athletes
Christine Boskoff, world-class mountaineer, reached more record summits than any other female in history
Tighe Dombrowski, MLS soccer player
Ricky Franklin, American basketball player
Don Gramenz, Minnesota Thunder defender
Sarah Hagen, American footballer of FC Kansas City United States women's national soccer team
Demetrius Harris, NFL football player, tight end of Kansas City Chiefs
Chris Hill, Spirou Basket Charleroi basketball player
P. J. Johns, soccer goalkeeper, member of the United States national futsal team
Ken Kranz, NFL football player
Alan Kulwicki (1977 BS Mechanical Engineering), 1992 NASCAR Winston Cup champion, was named one of NASCAR's 50 Greatest Drivers and was inducted into the International Motorsports Hall of Fame
Manny Lagos, MLS soccer player; U.S. Olympian
Greg Mahlberg, MLB baseball player
Paul Meyers, professional football player
Clem Neacy, NFL football player
Dylan Page, Chorale Roanne Basket basketball player in France
Allison Pottinger (MBA Marketing), curler; 2003 gold medalist and 2006 silver medalist at the World Curling Championships
Mike Reinfeldt (1975 BA Business), NFL All-Pro defensive back, General Manager of Tennessee Titans, former Seattle Seahawks Chief Financial Officer
Tony Sanneh, MLS soccer player; U.S. National team and U.S. World Cup team member
George H. Sutton, professional billiard player, the "handless billiard player"
Clay Tucker, Joventut Badalona basketball player
Mitchell Whitmore, speed skater
Whitey Wolter, NFL football player
Coaches and referees
Jimmy Banks (1987 Education), Milwaukee School of Engineering men's soccer team head coach
Bill Carollo (1974 BBA Industrial Relations), NFL referee
Sasho Cirovski (1985 BBA, 1989 MBA), University of Maryland men's soccer team head coach
Warren Giese, South Carolina Gamecocks football head coach
Jeff Rohrman, UW–Madison men's soccer team head coach
Bruce Weber (1978 BA Education), Kansas State University men's basketball head coach
Others
Lynde Bradley Uihlein (MS Social Welfare), philanthropist
Clara Stanton Jones, the first African-American president of the American Library Association and the first African-American director of a major city public library in the United States
Patricia Wells (BA Journalism), cookbook author
Notable faculty
Bruce Allen, physicist and professor, fellow of American Physical Society and fellow of Institute of Physics (UK)
David Backes, author; professor in journalism, advertising, and media studies
Anne Basting, professor of theater and expert on aging, dementia and the arts; 2016 MacArthur Fellowship winner
Robert J. Beck, scholar of international law
Sandra Braman, former professor of communication (no longer at UW-M)
Y. Austin Chang, former professor and department chair of material engineering; elected member of the National Academy of Engineering; elected foreign member of the Chinese Academy of Sciences; fellow of The Minerals, Metals & Materials Society; fellow of ASM International
Francis D.K. Ching, professor of architecture, known for architectural and design graphics
Cecelia Condit, video artist, professor in film, video, animation and new genres
Melvyn Dubofsky, professor of history and sociology
Rebecca Dunham, poet, professor of English
Hugo O. Engelmann, sociologist, anthropologist and general systems theorist
Millicent (Penny) Ficken, ornithologist who specialized in birds' vocalizations and their social behaviors
Louis Fortis, economist; state legislator; newspaper editor and publisher
Jane Gallop, writer, University distinguished professor
Al Ghorbanpoor, civil engineer and professor, fellow of the American Society of Civil Engineers
Robert G. Greenler, physicist, president of the Optical Society of America in 1987
Martin Haberman, educator, University distinguished professor
John Brian Harley, professor of geography
William D. Haseman, Wisconsin Distinguished Professor of Business
Ihab Hassan, Vilas Research Professor of English and comparative literature
Thomas Hubka, professor of architecture
Richard Klein, paleoanthropologist
Mark L. Knapp, professor of communication
John Koethe, professor of philosophy, poet and essayist
Markos Mamalakis, economist and professor
Christina Maranci, former associate professor, expert on the history and development of Armenian architecture
Kenneth J. Meier, political scientist
Jim Moody, federal government economist 1967-1969, US Congressman 1979-1982, former associate professor of economics
Marjorie "Mo" Mowlam, associate professor of political science; later Labour Member of the British Parliament and Secretary of State for Northern Ireland
Satish Nambisan, professor of entrepreneurship & technology management, Sheldon B. Lubar School of Business; professor of industrial & manufacturing engineering, College of Engineering & Applied Science; author, The Global Brain.
Harold L. Nieburg, political scientist
Leonard Parker, physicist and professor, fellow of American Physical Society
Brett Peters, industrial engineer, fellow of the Institute of Industrial Engineers, dean of the College of Engineering and Applied Science
Stephen Pevnick, inventor of Graphical Waterfall, professor of art
Amos Rapoport, University Distinguished Professor of Architecture
Pradeep Rohatgi, Wisconsin Distinguished Professor of Engineering
Herbert H. Rowen, historian of Early Modern Europe
Richard P. Smiraglia, knowledge organization
Leonard Sorkin, violinist
William H. Starbuck, organizational scientist
Anastasios Tsonis, distinguished professor of mathematical science
Hiroomi Umezawa, physicist and former Distinguished Professor at the Department of Physics
Harriet Werley, professor of nursing; charter fellow and a Living Legend of the American Academy of Nursing; fellow of the American College of Medical Informatics; founding editor of Research in Nursing and Health; co-creator of the Nursing Minimum Data Set
University chancellors
Mark Mone (2014–present)
Michael Lovell (2010–2014)
Carlos E. Santiago (2004–2010)
Nancy L. Zimpher (1998–2003)
John H. Schroeder (1991–1998)
Clifford V. Smith, Jr. (1985–1990)
Frank E. Horton (1980–1985)
Werner A. Baum (1973–1979)
J. Martin Klotsche (1956–1973)
References
University of Wisconsin-Milwaukee people
University of Wisconsin-Milwaukee
University of Wisconsin–Milwaukee

Phthia

In Greek mythology, Phthia (Φθία Phthía or Φθίη Phthíē) was a city or district in ancient Thessaly. It is frequently mentioned in Homer's Iliad as the home of the Myrmidones, the contingent led by Achilles in the Trojan War. It was founded by Aeacus, grandfather of Achilles, and was the home of Achilles' father Peleus, mother Thetis (a sea nymph), and son Neoptolemus (who reigned as king after the Trojan War).
Phthia is referenced in Plato's Crito, where Socrates, in jail and awaiting his execution, relates a dream he has had (43d–44b): "I thought that a beautiful and comely woman dressed in white approached me. She called me and said: 'Socrates, may you arrive at fertile Phthia on the third day.'" The reference is to Homer's Iliad (ix.363), when Achilles, upset at having his war-prize, Briseis, taken by Agamemnon, rejects Agamemnon's conciliatory presents and threatens to set sail in the morning; he says that with good weather he might arrive on the third day "in fertile Phthia", his home.
Phthia is the setting of Euripides' play Andromache, a play set after the Trojan War, when Achilles' son Neoptolemus (in some translations named Pyrrhus) has taken Andromache, the widow of the Trojan hero Hector as a slave.
Mackie (2002) notes the linguistic association of Phthia with the Greek word phthisis, meaning "consumption, decline; wasting away" (In English, the word has been used as a synonym for tuberculosis) and the connection of the place name with a withering death, suggesting a wordplay in Homer, associating Achilles' home with such a withering death.
Location of Phthia
The Homeric Catalogue of Ships speaks of Achilles' kingdom as follows (Hom. Il. 2.680-5):
Now again all those who dwelt in Pelasgic Argos:
those who dwelt in Alos and Alope and Trachis
and those who held Phthia and Hellas with its fair women,
and who were called Myrmidons and Hellenes and Achaians;
of those fifty ships the leader was Achilles.
These names are generally believed to have referred to places in the Spercheios valley in what is now Phthiotis in central Greece. The river Spercheios was associated with Achilles, and at Iliad 23.144 Achilles states that his father Peleus had vowed that Achilles would dedicate a lock of his hair to the river when he returned home safely.
However, a number of ancient sources, such as Euripides' Andromache, also located Phthia further north in the area of Pharsalus. Strabo also notes that near the cities of Palaepharsalus and Pharsalus there was a shrine dedicated to Achilles' mother Thetis, the Thetideion. Mycenean remains have been found in Pharsalus, and also in other sites nearby, but according to Denys Page, whether the Homeric Phthia is to be identified with Pharsalus "remains as doubtful as ever".
It has been suggested that "Pelasgic Argos" is a general name for the whole of northern Greece, and that line 2.681 of the Iliad is meant to serve as a general introduction to the remaining nine contingents of the Catalogue.
See also
Phthiotis (modern Greece)
References
Achaea Phthiotis
Locations in the Iliad
Populated places in ancient Thessaly

ROX Desktop

The ROX Desktop is a graphical desktop environment for the X Window System. It is based on ROX-Filer, a drag-and-drop spatial file manager. It is free software released under the GNU General Public License. The environment was inspired by the user interface of RISC OS (not to be confused with RISC/os); the name "ROX" comes from "RISC OS on X". Programs can be installed or removed easily using Zero Install.
The project was started by Thomas Leonard as a student at University of Southampton in 1999 and was still led by him in 2012.
Software components
The ROX Desktop is a desktop environment based on the ROX-Filer file manager. Files are loaded by applications by using drag and drop from the filer to the application, and saved by dragging back to the filer. Applications are executable directories, and are thus also installed (copied), uninstalled (deleted), and run through the filer interface. ROX has a strong link with Zero Install, a method of identifying and executing programs via a URL, to make software installation completely automatic.
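The "applications are executable directories" model can be sketched with a tiny hypothetical app. By ROX convention, opening an application directory causes the filer to execute the AppRun file inside it; the app name and message below are invented for illustration.

```shell
# Create a minimal ROX-style application directory named "Hello"
# (hypothetical example; a real app would also bundle icons, help files, etc.).
mkdir -p Hello
cat > Hello/AppRun <<'EOF'
#!/bin/sh
# A real AppRun would start the application; this one just reports
# the files dropped onto the app, which arrive as arguments.
echo "Hello app invoked with: $@"
EOF
chmod +x Hello/AppRun

# Opening the directory from ROX-Filer amounts to running:
./Hello/AppRun demo.txt
```

Because installing is just copying the directory and uninstalling is deleting it, the filer itself doubles as the package manager in this model.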
The desktop uses the GTK toolkit, like the GNOME and Xfce desktops. The design focuses on small, simple programs using drag-and-drop to move data between them. For example, a user might load a compressed file into a spreadsheet from the web by dragging the data from the web browser to the archiver, and from there into the spreadsheet. A program would be installed in the same way, by dragging the archive from the web to the archiver, and from there to the applications directory in the filer.
Drag-and-drop saving allows the user to save the text file to any directory they please, or directly to another application, such as the archiver on the panel.
ROX Filer
ROX-Filer is a graphical spatial file manager for the X Window System. It can be used on its own as a file manager, or can be used as part of ROX Desktop. It is the file manager provided by default in certain Linux distributions such as Puppy Linux and Dyne:bolic, and was used in Xubuntu until Thunar became stable.
ROX-Filer is built using the GTK+ toolkit. Available under the terms of the GPL-2.0-or-later license, ROX-Filer is free software.
See also
Comparison of X Window System desktop environments
Package manager
References
Notes
Bruce Byfield (7 February 2007) ROX Desktop provides light, quirky alternative to GNOME and KDE, Linux.com
Jo Moskalewski (July 2002) ' RISC rocks. Jo´s alternativer Desktop: ROX LinuxUser
External links
Desktop environments based on GTK
Discontinued software
Free desktop environments
Free file managers
Free software programmed in C
Free software programmed in Python
Linux windowing system-related software
Science and technology in Hampshire
University of Southampton

Sonic Studio

Sonic Studio is an American company manufacturing digital audio production tools for engineering professionals. The company was created when Sonic Solutions divested itself of its audio product lines in order to concentrate on DVD and multimedia–oriented products.
History
Overview
Under the auspices of Sonic Solutions, the Sonic Studio audio workstation drove the professional production and delivery of commercial Compact Discs. The original “Sonic System” pioneered the desktop delivery of Red Book masters on recordable CD, in much the same way that the original Macintosh and LaserWriter spawned the desktop publishing revolution. Prior to the introduction of the Sonic System, Compact Discs were assembled and premastered using bulky, expensive and unreliable U-matic videotape–based systems.
Early years
The Sonic System began life as research into real–time, computer–based audio production. The Audio Signal Processor (ASP), a hardware–based signal processor designed by James A. Moorer after his work on the Hydra audio project at Stanford University’s CCRMA, was a proof of concept for what is now considered a digital audio workstation. Its design began in 1980, aimed primarily at real–time, multichannel EQ and mixing. SoundDroid, an in–house project of Lucasfilm Ltd.’s Sprocket Systems that was later spun off as part of The Droid Works, was a hard disk–based, non–linear, second–generation digital audio workstation that leveraged the research done on the ASP. Though the SoundDroid project was never commercialized and The Droid Works was later sold to Avid, the audio development team went on to create the NoNOISE restoration system in 1987, hosted on a Motorola–powered SUN 1, the first true, general–purpose computer “workstation,” which had been developed in cooperation with Lucasfilm. The SUN ran UNIX, developed by Bell Labs and refined at UC Berkeley. Sonic Studio’s current flagship products run on Mac OS, a modern descendant of the same UNIX variant, BSD Unix, that powered the original SUN workstation.
After evaluating the cost and complexity of their SUNs, the Sonic team decided to tap a new platform, Apple Computer’s Macintosh II, also powered by a Motorola 68000–series processor, to create the first production version of the Sonic Station later that year. By 1988, the Sonic Station was in service at EMI Abbey Road and Finesplice in London, and MCA in California, performing “miraculous” feats of restoration and starting a trend of mining back catalog that continues to this day. That first system employed a dedicated NuBus hardware co–processor with 4 Motorola 56000–series Digital Signal Processors (DSPs), beginning a trend that continued through seven generations of hardware.
Demand grew for a turnkey Compact Disc preparation system and, in 1990, with the addition of the world’s first CD-R product (Sony’s $30,000, two-piece E-1/W-1 Compact Disc-Recordable system) in conjunction with START Lab’s new media, the complete Sonic System was born. After a few years of development, the product was renamed “SonicStudio” and development continues to this day.
Present day
In 2002, Sonic Solutions decided to divest themselves of their original audio product line. To concentrate solely on the DVD content creation market, they formed a joint venture and, in 2004, that business was transferred to Big Endian, LLC to carry on the development, sales, and support of Sonic Solutions’ audio workstation products.
Based in San Anselmo, California, Sonic Studio, LLC continues to manufacture products that address the needs of the world’s most discriminating audio professionals, with powerful PCM and DSD origination, editing and processing capabilities, and integrated premastering for CD, SACD and rich-media distribution. Sonic Studio’s NoNOISE noise and distortion reduction tools and streamlined workflow have kept the product lines at the forefront of restoration for DVD post–production and archival re–release.
Pioneering work
Over the years, development of the product lines has resulted in many breakthroughs now considered commonplace in the professional audio community. Some of the features and technologies brought to the pro audio market by Sonic Studio’s forebears include:
graphical digital waveform displays
24 bit AES digital I/O
SDIF-2 digital I/O
4–point editing model, borrowed from videotape editing paradigm
integrated, 9 pin machine control
integrated digital restoration tools
multitasking DSP
integrated “desktop” CD preparation
the PreMaster CD delivery format
ultra–high speed data network with multiuser, file–level read/write (Anderson 1993)
96 kHz & 192 kHz, single and double wire AES I/O support
double precision internal signal processing (Moorer 1999)
integrated DVD-Audio & SACD production (Moorer 1998)
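The value of double-precision internal signal processing (one of the items above) can be shown with a short sketch. This is illustrative Python, not Sonic Studio code: rounding every intermediate result to 32-bit single precision can silently discard small contributions that a 64-bit accumulator retains.

```python
import struct

def f32(x):
    """Round a Python float (64-bit) down to 32-bit single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 100,000 tiny offsets onto a full-scale signal value,
# as a mixing bus might when summing many quiet sources.
sample, n = 1e-8, 100_000

acc32 = 1.0
for _ in range(n):
    acc32 = f32(acc32 + sample)   # single-precision bus: updates vanish

acc64 = 1.0
for _ in range(n):
    acc64 += sample               # double-precision bus retains them

print(acc32)  # 1.0 -- every tiny contribution was rounded away
print(acc64)  # ~1.001
```

Each 1e-8 increment is below half the spacing between adjacent 32-bit floats near 1.0, so the single-precision accumulator never moves; the double-precision path keeps the full sum.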
References
Notations
Moorer, James A; (1982). The Audio Signal Processor: The Next Step in Digital Audio. New York: Audio Engineering Society. Preprint Number: Rye-020
Moorer, James A.; Borish, Jeffrey; Snell, John; (1985). A Gate-Array Multiplier for Digital Audio Processing. New York: Audio Engineering Society. Preprint Number: 2243
Moorer, James A.; Borish, Jeffrey; (1986). An Optical Disk Recording, Archiving, and Editing Device for Digital Audio Signal Processing. New York: Audio Engineering Society. Preprint Number: 2376
Lawrence M. Fisher; (1988). Removing the Static From Old Recordings. The New York Times
Cumming, David P.; Moorer, James A.; Ogawa, H.; Ishiguro, T.; Nakajima, Hisashi; (1990). CD Mastering Using a Recordable -Red Book Standard- CD and Graphical PQ Subcode Editing. New York: Audio Engineering Society. Preprint Number: 3006
Reichbach, Jonathan D.; Kemmerer, Richard A.; (1992). SoundWorks: An Object-Oriented Distributed System for Digital Sound. New York: IEEE. 0018-9162/92/0300-0025
Moorer, James A; (1996). Breaking the Sound Barrier: Mastering at 96 kHz and Beyond. New York: Audio Engineering Society. Preprint Number: 4357
TECnology Hall of Fame; (1997). 1997 TEC Awards. Penton Media Inc. Mix Magazine
Jacobson, Linda; (2004). Silicon Audio. Penton Media Inc. Mix Magazine
TECnology Hall of Fame; (2006). 1987 Sonic Solutions NoNoise. Penton Media Inc. Mix Magazine
External links
Web site
Resume of Dr. J.A. Moorer
Electronics companies of the United States
Houdini (software)

Houdini is a 3D animation software application developed by Toronto-based SideFX, which adapted it from the PRISMS suite of procedural generation software tools. Its procedural tools are used to produce effects such as complex reflections, animations, and particle systems. Some of its procedural features have been in existence since 1987.
Houdini is most commonly used for the creation of visual effects in film and games. It is used by major VFX companies such as Walt Disney Animation Studios, Pixar, DreamWorks Animation, Double Negative, ILM, MPC, Framestore, Sony Pictures Imageworks, Method Studios and The Mill.
It has been used in many feature animation productions, including Disney's feature films Fantasia 2000, Frozen, and Zootopia; the Blue Sky Studios film Rio; and DNA Productions' The Ant Bully.
SideFX also publishes Houdini Apprentice, a limited version of the software that is free of charge for non-commercial use.
Release history
Features
Houdini covers all the major areas of 3D production, including these:
Modeling – All standard geometry entities including Polygons, (Hierarchical) NURBS/Bézier Curves/Patches & Trims, Metaballs
Animation – Keyframed animation and raw channel manipulation (CHOPs), motion capture support
Particles
Dynamics – Rigid Body Dynamics, Fluid Dynamics, Wire Dynamics, Cloth Simulation, Crowd simulation.
Lighting – node-based shader authoring, lighting and re-lighting in an IPR viewer
Rendering – Houdini ships with SideFX's rendering engines Mantra and Karma; the Houdini Indie licence and above support third-party rendering engines such as RenderMan, Octane, Arnold, Redshift and V-Ray, with Maxwell support announced as forthcoming.
Volumetrics – With its native CloudFx and PyroFx toolsets, Houdini can create clouds, smoke and fire simulations.
Compositing – full compositor of floating-point deep (layered) images.
Plugin Development – development libraries for user extensibility.
Houdini is an open environment and supports a variety of scripting APIs. Python is increasingly the scripting language of choice for the package and is intended to replace its original C Shell-like scripting language, HScript. However, any major scripting language that supports socket communication can interface with Houdini.
Tools
Operators
Houdini's procedural nature is found in its operators. Digital assets are generally constructed by connecting sequences of operators (or OPs). This proceduralism has several advantages: it allows users to construct highly detailed geometric or organic objects in comparatively very few steps; it enables and encourages non-linear development; and new operators can be created in terms of existing operators, a flexible alternative to non-procedural scripting often relied on in other packages for customisation. Houdini uses this procedural generation in production of textures, shaders, particles, "channel data" (data used to drive animation), rendering and compositing.
Houdini's operator-based structure is divided into several main groups:
OBJs – nodes that pass transform information (traditionally these contain SOPs).
SOPs – Surface Operators – for procedural modelling.
POPs – Particle Operators – used to manipulate particle systems.
CHOPs – Channel Operators – for procedural animation and audio manipulation.
COPs – Composite Operators – used to perform compositing on footage.
DOPs – Dynamic Operators – for dynamic simulations for fluids, cloth, rigid body interaction etc.
SHOPs – Shading Operators – for representing a dozen or more different shading types for several different renderers.
ROPs – render operators – for building networks to represent different render passes and render dependencies.
VOPs – VEX operators – for building nodes of any of the above types using a highly optimized SIMD architecture.
TOPs – Task Operators – for scheduling and executing networks of dependent tasks.
LOPs – Lighting Operators – for generating USD describing characters, props, lighting, and rendering.
Operators are connected together in networks. Data flows through, manipulated by each operator in turn. This data could represent 3D geometry, bitmap images, particles, dynamics, shader algorithms, animation, audio, or a combination of these. This node graph architecture is similar to that employed in node-based compositors such as Shake or Nuke.
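The data flow described above can be sketched in a few lines of Python. This is an illustrative model only: the Node class and the tiny "operators" are invented for the sketch and are not Houdini's actual API.

```python
# Illustrative sketch of operator-network evaluation: data flows through
# a graph of nodes, each transforming the results of its inputs.
# The class and operator names are invented, not Houdini's API.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function: list of upstream results -> result
        self.inputs = inputs  # upstream nodes

    def cook(self):
        # Evaluate upstream nodes first, then apply this node's operator.
        return self.op([n.cook() for n in self.inputs])

# Three tiny "operators": a generator, a duplicator, and a merge.
box = Node(lambda ins: ["box"])
copies = Node(lambda ins: ins[0] * 3, box)            # duplicate geometry
merge = Node(lambda ins: sum(ins, []), copies, box)   # combine streams

print(merge.cook())  # -> ['box', 'box', 'box', 'box']
```

Changing an upstream node (say, the generator) and re-cooking the network regenerates everything downstream, which is the essence of the non-destructive, procedural workflow.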
Complex networks can be grouped into a single meta-operator node which behaves like a class definition, and can be instantiated in other networks like any compiled node. In this way users can create their own sophisticated tools without the need for programming; Houdini can thus be regarded as a highly interactive visual programming toolkit which makes programming more accessible to artists.
Houdini's set of tools is mostly implemented as operators. This has led to a steeper learning curve than that of comparable tools: it is one thing to know what all the nodes do, but the key to success with Houdini is understanding how to represent a desired creative outcome as a network of nodes. Successful users are generally familiar with a large repertoire of networks (algorithms) that achieve standard creative outcomes. The overhead involved in acquiring this repertoire is offset by the artistic and algorithmic flexibility afforded by access to lower-level building blocks with which to configure shot-element creation routines. In large productions, the development of a procedural network to solve a specific element-creation challenge makes automation trivial. Many studios that use Houdini on large feature-effects and feature-animation projects develop libraries of procedures that can automate generation of many of the elements for that film with almost no artist interaction.
Also unique to Houdini is the range of I/O OPs available to animators, including MIDI devices, raw files or TCP connections, audio devices (including built-in phoneme and pitch detection), mouse cursor position, and so on. Of particular note is Houdini's ability to work with audio, including sound and music synthesis and spatial 3D sound processing tools. These operators exist in the context called "CHOPs" for which Side Effects won a Technical Achievement Academy Award in 2002.
VEX (Vector Expression) is one of Houdini's internal languages. It is similar to the RenderMan Shading Language. Using VEX, a user can develop custom SOPs, POPs, shaders, etc. The current implementation of VEX utilizes SIMD-style processing.
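The SIMD-style model means the same small program is run for every element (point, particle, pixel) of the data. The following is a rough Python stand-in for that per-element pattern; real VEX is a compiled C-like language, and the function names here are illustrative:

```python
# Rough sketch of VEX's per-element execution model: one small function
# is applied uniformly to every point of the geometry, the way a Point
# Wrangle runs a VEX snippet. Names are illustrative, not Houdini's API.

def wrangle(points, snippet):
    """Apply `snippet` to a copy of each point and collect the results."""
    return [snippet(dict(p)) for p in points]

def raise_by_x(p):
    # Comparable in spirit to the VEX statement:  @P.y += @P.x * 0.5;
    p["y"] += p["x"] * 0.5
    return p

pts = [{"x": 0.0, "y": 1.0}, {"x": 2.0, "y": 1.0}]
print(wrangle(pts, raise_by_x))  # second point's y becomes 2.0
```

Because every element is processed by the same independent program, the work parallelizes naturally, which is what the SIMD implementation exploits.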
Rendering
Houdini is bundled with a production-class renderer, Mantra, which in its initial incarnation had many similarities to RenderMan in scope and application. Micropolygon rendering is supported, allowing high-quality displacement operations as well as traditional scan-line and ray-tracing modes. Shaders are scriptable and composed in the VEX language, or by using VOPs, the node-based interface for programming VEX. Mantra (as does Houdini itself) also supports point clouds, which can be similar in application to brickmaps in RenderMan. This allows more complicated light interactions, such as sub-surface scattering and ambient occlusion, to be produced with lower computational overhead. Mantra can perform extremely fast volume rendering, and also physically based path tracing, a technique which attempts to model the physical interactions of light and materials more accurately.
TouchDesigner
Derivative Inc. is a spin-off of Side Effects Software that markets a derivative of Houdini called TouchDesigner. Tailored toward real-time OpenGL-generated animation, it was used on rock group Rush's 30th-anniversary tour to produce dynamic graphics driven directly by the musicians. TouchDesigner was also used by Xite Labs (formerly V Squared Labs) to create live visuals for Amon Tobin's ISAM installation tour.
Production
Notable works in which Houdini was used include the 1997 film Contact and, more recently, the 2016 film Zootopia.
See also
References
External links
SideFX Software, Makers of Houdini
Orbolt Asset store (official)
Derivative Inc., spin-off of Side Effects Software and maker of TouchDesigner.
odforce – a Houdini artist community site
3Daet – a project-based Houdini site built by its users; it appears to no longer be active.
CG Wiki / Joy of Vex – a resource for learning VEX, one of the programming languages used in Houdini.
Houdini community page on Facebook
PRISMS description and screenshots
30 minute interview with Kim Davidson about the history of Houdini
Houdini 17.5 Released (13 March 2019)
Houdini 18.0 Released (27 November 2019)
3D graphics software
IRIX software
1996 software
3D animation software
3D computer graphics software for Linux
Proprietary commercial software for Linux
Motion graphics software for Linux
Chenango Canal

The Chenango Canal was a towpath canal in central New York in the United States which linked the Susquehanna River to the Erie Canal. Built and operated in the mid-19th century, it was 97 miles long and for much of its course followed the Chenango River, along Route 12 north–south from Binghamton on the south end to Utica on the north. It operated from 1834 to 1878 and provided a significant link in the water transportation system of the northeastern U.S. until supplanted by the region's developing railroad network.
Construction
The canal was first proposed in the New York Legislature in 1824 during the construction of the Erie Canal, prompted by lobbying from local leaders in the Chenango Valley. It was authorized by the legislature in 1833 and completed in October 1836 at a total cost of $2,500,000 – approximately twice the original appropriation. In 1833 a grand ball was held in Oxford, NY, which feted the canal's approval. The great American civil engineer John B. Jervis was appointed Chief Engineer of the project and helped in its design. This was an era of extensive canal building in the United States, following the English model, in order to provide a major transportation network for the eastern United States.
The excavation began in 1834 and was largely done by Irish and Scottish immigrant laborers, digging by hand, using pick and shovel, chipping through rock and wading through marsh. They were paid $11 per month, which was three times a common laborer's wages at the time. Skilled workers came from the completed Erie Canal project and brought new inventions, such as an ingenious stump-puller, which used oxen or mules for animal power. Work camps for the laborers were established along the route and as many as 500 men stayed in each camp. The canal opened in October, 1836, and was billed as the "best-built canal in the state".
The Chenango Canal was 42 feet wide at the top and 26 feet wide at the bottom and averaged 4 feet deep. It had 116 locks, 11 lock houses, 12 dams, 19 aqueducts, 52 culverts, 56 road bridges, 106 farm bridges, 53 feeder bridges, and 21 waste weirs. The Chenango was unique in that it was the first reservoir-fed canal in the U.S. In this design, reservoirs were created and feeder canals were dug to bring water to the summit level of the canal. This had been done previously in Europe, but had not been tried in the US. This project had to succeed by getting almost 23 miles of waterway up an incline with a 706' elevation, to the summit level in Bouckville, and back down a descent of 303' to the Susquehanna river in Binghamton. At a time when there were no engineering schools in the country, and hydrology was not yet a scientific discipline, Jervis and his team were able to design a complex waterway that was considered the best of its day.
System
The main artery Erie Canal was built between 1817 and 1825 and provided the key link in a water highway to what would become the Midwestern United States, connecting to the Great Lakes at Buffalo. It connected the Great Lakes to the Hudson River and then to the port of New York City. The Erie made use of the favorable conditions of New York's unique topography which provided that area with the only break in the Appalachian range—allowing for east-west navigation from the Atlantic coast to the Great Lakes. The canal system gave New York State a competitive advantage, helped New York City develop as an international trade center, and allowed Buffalo to grow from just 2,000 settlers in 1820 to over 18,000 people by 1840. The port of New York became essentially the Atlantic home port for all of the Great Lakes states. It was because of this vital and critical connection that New York State became known as the Empire State.
The Erie Canal represented the first major water-works project in the United States. It proved the practicality of large-scale water diversions without disrupting the local environment. The canal connected the waters of Lake Erie to the tidewater of New York harbor in a multi-level route that followed the local terrain and was fed by local water sources. All of the New York branch canals would follow this model. With the Erie, as with all of the branch canals, water flow was required for several purposes: filling the canal at the beginning of each spring season; water for lockage, i.e., water loss from higher to lower levels; water loss by seepage through the berm and towpath banks and water diverted for industrial power usage. Consequently, flooding and droughts were perennial problems for all of the New York canals.
The Chenango Canal operated from 1834 to 1878, from April to November each year. The opening of the canal cut the shipping time from Binghamton to Albany from 9 to 4 days, and reduced the cost of shipping goods dramatically. It was intended to connect Binghamton and surrounding communities, by water route, to the port of New York City and to the Great Lakes States.
The northern terminus of the Chenango Canal was in Utica at an entry lock near present-day N. Genesee Street and the Erie Canal; the southern terminus was in Binghamton at a turning basin near present-day State and Susquehanna Streets. State Street in Binghamton was built on the path of the canal in 1872. The village of Port Dickinson and the hamlets of Port Crane and Pecksport owe their origins and names to being stops on the route. With the coming of the Chenango Canal, Port Crane, being located on the line of this waterway, developed rapidly, with stores, hotels, boat yards and repair and dry docks being built in that village. For some time beginning in 1837, the canal afforded shipping facilities to these and other such isolated areas, which were gradually improved. Between 1840 and 1865, for example, the village of Port Crane reached the height of its prosperity. Overall, the construction of the canal led to a widespread manufacturing and building boom in the Chenango Valley.
A western extension, commonly known as the Extension Canal, was begun in 1840. The Extension continued west along the south side of the Susquehanna River, as far as Vestal. Officially named the Chenango Canal Extension, it was for the purpose of "making connection with the Pennsylvania canal system, and thus to complete a route to the vast coal fields in that state, the New York Legislature, on April 18, 1838, passed an act (chapter 292) directing the canal commissioners to cause a survey to be made from the termination of the Chenango canal at Binghamton, along the valley of the Susquehanna, to the State line near Tioga Point, at the termination of the North Branch canal of Pennsylvania, and to cause an estimate of the cost of this continuation to be made." Also it would connect with western New York through the Junction and Chemung canals, Seneca lake, and the Cayuga and Seneca and the Erie canals.
In the original plan, the Chenango Canal was supposed to connect the Erie Canal with the Pennsylvania Canal. But the connecting canals in the southern part of the state and in northern Pennsylvania were never fully completed nor totally operational. One section would fall into disrepair before the rest of the line was completed. Also, this canal was begun at the close of the canal era, and the era ended before the line could be completed.
The extension was not the only segment with operational problems; the Chenango Canal had its own. Most significantly, the canal company had difficulty performing regular maintenance due to the often prohibitively high repair costs. As with most canal lines in the East, it began operation carrying too much debt. If the company did not show a wide enough margin from user fees during the season, it would end the year at a deficit. Also, as with many similar canals, the Chenango's greatest error was that it initially installed a system using wooden locks. With New York's frigid winters and freeze-thaw cycles, the wood never held up for very long. As soon as one section had been repaired, another would give out, and it usually took more time and money than was available before they were all in operation. The original structures were all incrementally rebuilt; ultimately and at great cost, the process culminated in the construction of stone locks.
Operation
Competition kept freight rates low. Before the canal was built, it took between 9 and 13 days to ship goods by wagon from Binghamton to Albany, at a cost of $1.25 per 100 pounds. A canal boat made the trip in less than four days and cost $0.25 per 100 pounds. Records also show an example of a fare on a packet line that ran between Norwich and Binghamton: the fare was $1.50 per person, departing at 6 am and arriving sometime between 6 and 8 pm.
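The quoted figures work out to a substantial saving; a quick check of the arithmetic in Python, using only the numbers cited above:

```python
# Check of the freight figures quoted above: wagon vs. canal boat
# from Binghamton to Albany, cost per 100 pounds of goods.
wagon_cost, boat_cost = 1.25, 0.25          # dollars per 100 lb
savings = (wagon_cost - boat_cost) / wagon_cost
print(f"cost reduction: {savings:.0%}")      # -> cost reduction: 80%

wagon_days, boat_days = 9, 4                 # fastest quoted times
print(f"at least {wagon_days - boat_days} days saved per trip")
```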
Many classes of boats frequented the Chenango Canal, including packet boats, scows, lakers and bullheads – the name for the freight barges, which were the most common boats seen. The bullhead was so named because of its blunt and rounded bow. They were about 14 feet wide and 75 feet long and were sometimes loaded so heavily that they would periodically drag along the bottom of the canal. The packet boats and barges were drawn at an average speed of four miles an hour by horse or mule teams on the towpath. The passenger boats were usually pulled by horses, which were changed every ten miles. The freight barges were pulled by mules, which were changed every four to six hours. Each barge had two cabins: one at the bow to stable the animals, usually horses or mules, that pulled the boat, and one at the stern which served as living quarters for the captain and crew, and sometimes a whole family. The packet boats bore names like The Madison of Solsville, with Captain Bishop, or Fair Play, with Captain Van Slyck, and were manned by a minimum crew of three. Each boat needed a driver walking on the towpath controlling the animals, a bowsman controlling the movement and direction of the bow and a steersman on the aft deck. The passengers were seated in chairs on the top deck. When the boat neared a town, the crewmen shouted "Low bridge!" and everyone would retreat to the lower deck to avoid being swept overboard by a bridge.
Impact
The first packet boat, The Dove of Solsville, arrived in Binghamton from Utica on May 6, 1837, officially opening the canal, and was quickly followed by new development along the canal's route. The area benefited from the arrival of new settlers, new and needed merchandise and the provision of a means of shipping finished goods and product in and out of the local areas. Mills and factories sprang up along the southern end of the canal, while stores and hotels arose all along the retail corridor. Numerous and varied supporting businesses also flourished, including taverns, inns and boat yards for building and repair. Farmers now had an efficient, affordable and dependable means of transportation enabling them to sell their perishable milk production to butter and cheese factories; the factories could readily ship their product to market. Apple cider and cider vinegar were shipped from Mott's, at what is now the Bouckville Mill. Lumber mills had affordable availability of their resources and access to their markets. In a few instances, however, mills were forced to cease operation. Such was the case with Madison's Solsville Mills, whose water supply from Oriskany Creek was diverted away to be used for the supply of the canal.
The Chenango also allowed for efficient, comfortable and relatively fast passenger transportation. New residents arrived from Utica, most having come in through New York's port. The canal's construction laborers themselves were largely immigrants who stayed and settled in the area after the construction was completed. In 1861, the canal transported 1000 soldiers of the 114th Regiment from Norwich to Utica, in a flotilla of 10 packet boats. This was the first leg of their journey southward to serve in the Union Army during the war. Each town through which they passed met them with flags, fanfare and patriotic fervor. A watercolor painting celebrating that event hangs today in the Chenango Museum in Norwich.
The canal itself was also utilized for recreation. In the summer months it supported swimming, boating and fishing. In the winter months, after the surface froze over, ice skating and even horse racing became favorite pastimes.
Before the Chenango Canal was built, much of the Southern Tier and Central New York was still considered to be frontier. The people there lived as pioneers everywhere lived, a rugged and rustic existence, without the prosperity and possessions enjoyed by much of the rest of the state. The people petitioned for a canal corridor so that they could benefit from such things as efficient clean-burning coal, which had to be shipped from Pennsylvania. Previously people had heated only with wood. After the Chenango, trade increased between New York City, Albany and the Southern Tier. Merchants could market heavier items such as manufactured furniture and the coveted coal-burning stoves. With the canal's opening, living standards would generally improve.
The canal was also a source of local employment. It is believed that Philip Armour, the millionaire meat packer from Madison County, first worked as a mule driver walking the Chenango Canal. The countless miles must have built his legs and tenacity, for he eventually quit and walked across the United States to work the gold fields of California. He gradually earned a fortune and ultimately became a shipping magnate – a trade he probably learned from his experience on the Chenango. Another driver was Nuel Stever, a veteran canal boatman who in 1927, at the age of 76, spoke of his colorful memories of living on a Chenango Canal packet boat. His interview was published in The Norwich Sun:
When I was five, I began driving canal boat teams on the towpath pulling the boats. Such work was common to boys of that age. I can remember driving a team hour after hour up the towpath for 20 miles when I was five. When I was tired, I'd rest part of my weight on the towrope; it seemed to rest me. My father was at the helm. But when I became 10, I took my turn at the helm and a younger brother drove the teams. Whole families lived on the canal boats. I was the oldest of 21 children. We'd go to Oswego to load lumber for Bartlett's Mill in Binghamton. Hamilton was the highest point and where the canal froze up first in the fall. Often in the fall as many as 82 boats loaded with lumber would be tied up. When the freeze was just beginning, Bartlett would bring up several teams, hitch them to a bunch of stumps and drag through the canal to break the ice so boats could get lumber to his mill. Canalling was a varied business. For instance, we'd take a lot of firkins and get them filled along the way with butter for the merchants. We'd boat grain up to the big stills at Hamilton, Pecksport, Bouckville and Solsville and bring back loads of whiskey which the merchants sold or shipped away. We only did the boating. Whiskey then sold for 25 cents a gallon. It was a busy canal in those days. Three years before the canal closed, about 50 years ago (from 1927), 120 boats carried coal.
Closing
Opened in 1834, the Chenango Canal was a necessary link in the interconnecting transportation system in New York, for which the need was well recognized at that time. Canals had been employed successfully in England to enable the Industrial Revolution. The extraordinary success of the Bridgewater Canal in North West England, completed in 1761, began an era of canal building in that country. The U.S., intent on its own development, would follow suit. However, the advent of the Chenango Canal, after having been tied up in the New York Legislature for 19 years, came late in the canal era. By the time of its construction, this type of canal and its technology were already becoming obsolete. The invention of the steam locomotive had already occurred in England in 1811, and the development of a railway system had begun in England by the mid-1820s. The success of those technologies and systems there would allow them to supplant the artificial water-route systems everywhere. The railroads were more durable, more flexible, more efficient, more cost-effective and, most importantly, faster than the canals.
In 1848, the trains finally arrived in Binghamton in the form of the Erie Railroad. The new technology had caught up with the Chenango and the arrival of the 'iron horse' spelled the eventual end for the canal. Over the next two decades the Binghamton area developed into a transportation hub. Along with the canal, the area was now being served by several railroad lines. After the Civil War, railroad expansion would come to include the Delaware, Lackawanna and Western, the New York, Ontario and Western and the Delaware and Hudson in the Chenango and Susquehanna Valleys. This ultimately rendered the canal obsolete. In a sadly ironic twist, it was the canal that carried the engines for the trains, the tools and the railroad men. It was the canal's own barges that carried the rails for the tracks that would replace them. It is also noteworthy that the D&H began its existence as a canal company.
Despite its success in augmenting the economic development of the Southern Tier and Central New York, the Chenango Canal itself had never been a financial success. After years of competition, decline and continual financial loss, the Chenango Canal was closed by a vote of the state legislature in 1878 – just four decades and four years after it was opened. The land and assets were broken up and gradually sold, mostly at auction. Some of the properties were sold to private interests, some were deeded to municipal areas and others were held by the state. Much of the channel was subsequently filled in, and frequently paved over, particularly within the cities and the more populated areas. But some of the more isolated stretches of the canal were simply closed and abandoned.
While most of the larger towns, and all of the state, benefited from the triumph of the railroads, many of the smaller villages and hamlets did not. For practically every small settlement that was located on the line of the canal, but which was missed by the railroad, when the canal departed prosperity went with it. Two good illustrations of this are the once-prospering villages of Port Crane and Pecksport, where very little of either one is left today.
After 1900, a surviving stretch of the then-closed canal gained notoriety owing to its use to transport contraband through the town of Hamilton. Tobacco, alcohol (during Prohibition), and marijuana were transported along the canal. In order to control this traffic, NY State officials decided to build a checkpoint along its route. Surprisingly, over five million dollars worth of illegal goods were confiscated, from 1900 until about 1930, in what would become one of the most famous water-borne transportation enforcements of that time. Remnants of the stockade which was built can still be seen in the back rooms of the buildings currently housing the "Barge Canal Coffee Co." at 37 Lebanon Street and also the associated stockade on the corner of Pine Street and W. Kendrick Avenue in Hamilton. The Pine Street building was later converted into an asylum.
Today
In many places the canal path became the roadbed for streets, and its path can be traced by the roads which replaced it. These include Binghamton's State Street and Chenango Street, NY Route 5, NY Route 8, NY Route 12 and NY Route 12B. In Utica the canal bed follows next to or underneath NY Route 12B/12, and entered the Erie Canal west of State Street. Portions of the old channel, stone aqueducts, locks, and other structures still remain in place along its route, and are visible in several locations. These include: between Bouckville and Solsville; near Hamilton, NY; north of North Fenton and west of County Road 32; north of Sherburne, west of NY Route 12B, and in the village of Oxford, both on Canal St and the remains of a harbor on North Washington. The only place left with moving water is an area between Woodman's Pond, near the now extinct Pecksport and the aqueduct on Canal Road, just after Bouckville, which was the summit level on the canal. Most of these canal remnants as well as much of the original path are visible or discernible using Google Earth.
The Chenango Canal bed continues to exist in Utica alongside the arterial for a half mile just east of the arterial and south of the Burrstone Road overpass. The waters of Nail Creek flow through this section of the canal. The stonework of a lock remains in good shape and can be seen here. The tow path, however, is currently overgrown with brush.
Chenango Canal Summit Level is a national historic district located in the vicinity of Bouckville in Madison County, New York, United States. The district contains three contributing structures. It is a five-mile segment of the Chenango Canal constructed between 1834 and 1836. The five mile summit portion is watered, owned and operated by the New York State Canal Corporation as part of the feeder system for the Erie (Barge) Canal about 30 miles north. The contributing structures are the canal prism and adjacent tow path, the remaining portions of the aqueduct that carried the Chenango over the Oriskany Creek, and a pair of stone bridge abutments. It was listed on the National Register of Historic Places in 2005.
Currently, the community in the Madison area is working to develop a trail along the original towpath. They hope to collect historical artifacts for public display and establish what is left of the Chenango Canal in that area as a Madison County park. The Chenango Canal Corridor Connections trail project is a major project with the vision to link various trails along the general canal and O&W rail corridors from Utica to Binghamton. Calling it the Chenango Connections Corridor, the area's citizens are now working in conjunction with the NYS Office of Parks, Recreation, and Historic Preservation; the NYS Canal Corporation; and Parks & Trails New York staff, as part of the HTHP program, to shape a vision and implementation plan for the corridor that spans three counties and multiple villages and towns.
The Chenango Canal Prism and Lock 107 at Chenango Forks in Chenango County, New York was added to the National Register of Historic Places in 2010.
See also
List of canals in New York
List of canals in the United States
References
External links
Canals in New York (state)
Historic districts on the National Register of Historic Places in New York (state)
History of Broome County, New York
Canals opened in 1836
Canals on the National Register of Historic Places in New York (state)
National Register of Historic Places in Broome County, New York
Transportation buildings and structures in Broome County, New York
Buildings and structures in Chenango County, New York
National Register of Historic Places in Madison County, New York
Transportation in Chenango County, New York
Transportation in Madison County, New York
Transportation buildings and structures in Oneida County, New York
1836 establishments in New York (state) |
53422013 | https://en.wikipedia.org/wiki/Nvidia%20Jetson | Nvidia Jetson | Nvidia Jetson is a series of embedded computing boards from Nvidia. The Jetson TK1, TX1 and TX2 models all carry a Tegra processor (or SoC) from Nvidia that integrates an ARM architecture central processing unit (CPU). Jetson is a low-power system and is designed for accelerating machine learning applications.
Hardware
The Jetson family includes the following boards:
In late April 2014, Nvidia shipped the Nvidia Jetson TK1 development board containing a Tegra K1 SoC in the T124 variant and running Ubuntu Linux.
The Nvidia Jetson TX1 development board carries a Tegra X1 of model T210.
The Nvidia Jetson TX2 board carries a Tegra X2 of microarchitecture GP10B (SoC type T186 or very similar). The board and its associated development platform were announced in March 2017 as a compact card design for low-power scenarios, such as use in smaller camera drones. Media coverage at the time included a matrix describing a set of performance modes, and also mentioned a TX2i variant, described as ruggedized and suitable for industrial use cases.
The Nvidia Jetson Xavier was announced as a development kit at the end of August 2018. Nvidia indicated that a 20x acceleration for certain application cases compared to predecessor devices should be expected, and that application power efficiency is improved 10x.
The Nvidia Jetson Nano was announced as a development system in mid-March 2019. Its low price point targets the hobbyist robotics market. The final specifications show the board to be a power-optimized, stripped-down version of a full Tegra X1 system: only half of the CPU cores (4x Cortex-A57 @ 1.43 GHz) and half of the GPU cores (128 Maxwell-generation cores @ 921 MHz) are present, and only half of the maximum possible RAM is attached (4 GB LPDDR4 on a 64-bit bus at 1.6 GHz, giving 25.6 GB/s of bandwidth). The available interfacing is determined by the baseboard design and is further subject to implementation decisions in an end user's application-specific design.
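The quoted memory bandwidth follows directly from the bus width and clock: LPDDR4 is double data rate, so a 64-bit bus at 1.6 GHz transfers data on both clock edges. A quick sanity check of the arithmetic (a sketch; the input figures come from the paragraph above):

```python
# Sanity-check the Jetson Nano's published LPDDR4 memory bandwidth.
# LPDDR4 is double data rate: it transfers data on both clock edges.
bus_width_bits = 64
clock_hz = 1.6e9          # 1.6 GHz memory clock
transfers_per_cycle = 2   # DDR

bytes_per_transfer = bus_width_bits // 8                         # 8 bytes
bandwidth_bps = bytes_per_transfer * clock_hz * transfers_per_cycle

print(bandwidth_bps / 1e9)  # 25.6 (GB/s), matching the published figure
```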
The published performance modes of the Nvidia Jetson TX2 are as follows.
The Jetson TX2 also has five power modes, numbered 0 through 4 as published by Nvidia. The default mode is mode 3 (MAX-P).
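The five modes can be sketched as a simple lookup table. The mode names below are those commonly reported for the TX2's power-mode tool and are an illustrative assumption; the article itself only names the default mode (MAX-P):

```python
# Illustrative sketch of the Jetson TX2's five power modes (0-4).
# The mode names are commonly reported values, assumed here for
# illustration; this article only names the default (MAX-P).
TX2_POWER_MODES = {
    0: "MAXN",              # all cores at maximum clocks
    1: "MAXQ",              # best performance per watt
    2: "MAXP_CORE_ALL",
    3: "MAXP_CORE_ARM",     # MAX-P, the default mode
    4: "MAXP_CORE_DENVER",
}

DEFAULT_MODE = 3
print(TX2_POWER_MODES[DEFAULT_MODE])  # MAXP_CORE_ARM
```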
The published operation modes of the Nvidia Jetson Nano are:
Software
A variety of operating systems and software can run on the Jetson board series.
Linux
JetPack is a Software Development Kit (SDK) from Nvidia for their Jetson board series. It includes the Linux for Tegra (L4T) operating system and other tools. The official Nvidia download page bears an entry for JetPack 3.2 (uploaded there on 2018-03-08) that states:
RedHawk Linux is a high-performance RTOS available for the Jetson platform, along with associated NightStar real-time development tools, CUDA/GPU enhancements, and a framework for hardware-in-the-loop and man-in-the-loop simulations.
QNX
The QNX operating system is also available for the Jetson platform, though this is not widely publicized. There are reports of successfully installing and running specific QNX packages on certain Nvidia Jetson board variants. Notably, the package qnx-V3Q-23.16.01, which appears to be based in part on Nvidia's Vibrante Linux distribution, is reported to run on the Jetson TK1 Pro board.
See also
Raspberry Pi
Movidius neural compute stick
References
ARM architecture
Linux-based devices
Nvidia products
Single-board computers
ru:Nvidia#Jetson |
41426709 | https://en.wikipedia.org/wiki/Nafees%20Bin%20Zafar | Nafees Bin Zafar | Nafees Bin Zafar (born 1978) is a visual effects and computer graphics software engineer of Bangladeshi origin based in Los Angeles, USA. Zafar currently works as Principal Engineer at animation studio DreamWorks Animation. In 2008, Zafar received an Academy Scientific and Technical Award thus becoming the first person of Bangladeshi origin to win an Academy Award. In 2015, he won a Technical Achievement Award.
Early life
Nafees Bin Zafar was born in Dhaka, Bangladesh and moved to Charleston, South Carolina with his family when he was 11 years old. He studied at the College of Charleston, graduating in software engineering.
He is the son of Zafar Bin Bashar, a partner at Marcum & Kliegman, and Nafeesa Zafar, who reside in Long Island, New York. He is a great-grandson of the late Bangladeshi poet Golam Mostofa and a grand-nephew of the Bangladeshi artist and puppeteer Mustafa Monwar.
Career
In February 2008, Zafar received an Academy Scientific and Technical Award for the development of the fluid simulation system at Digital Domain, which was used in the film Pirates of the Caribbean: At World's End.
He was awarded the Scientific and Engineering Award, an Academy plaque, along with his colleagues at Digital Domain, thus becoming the first person of Bangladeshi origin to win an Academy Award.
In February 2015, Zafar was recognized by the Academy once more when he and his colleagues at Digital Domain received a Technical Achievement Award, an Academy certificate, for their work on the Drop Destruction Toolkit, used to create visual effects in the film 2012. He now works as Principal Engineer at DreamWorks Animation.
Filmography
Madagascar 3: Europe's Most Wanted (principal engineer)
Puss in Boots (senior software engineer)
Kung Fu Panda 2 (senior software engineer)
Megamind (senior production engineer)
Shrek Forever After (senior production engineer)
Percy Jackson & the Lightning Thief (software engineer)
The Seeker: The Dark Is Rising (visual effects: Digital Domain)
Pirates of the Caribbean: At World's End (technical developer)
Flags of Our Fathers (technical developer)
Stealth (software engineer)
The Croods (research and development principal engineer: DreamWorks Animation)
See also
DreamWorks Animation
References
External links
Nafees Bin Zafar: LinkedIn profile
1978 births
Living people
Academy Award for Technical Achievement winners
College of Charleston alumni
Software engineers
People from Dhaka |
231683 | https://en.wikipedia.org/wiki/Norwegian%20University%20of%20Science%20and%20Technology | Norwegian University of Science and Technology | The Norwegian University of Science and Technology (NTNU) is a public research university in Norway with its main campus in Trondheim and smaller campuses in Gjøvik and Ålesund. The largest university in Norway, NTNU has over 8,000 employees and over 40,000 students. NTNU in its current form was established by the King-in-Council in 1996 through the merger of the former University of Trondheim and other university-level institutions, with roots dating back to 1760, and has since incorporated several former university colleges. NTNU is consistently ranked in the top one percent of the world's universities, usually in the 400–600 range depending on the ranking.
NTNU has the main national responsibility for education and research in engineering and technology, and is the successor of Norway's preeminent engineering university, the Norwegian Institute of Technology (NTH), established by Parliament in 1910 as Norway's national engineering university. In addition to engineering and the natural sciences, the university offers higher education in other academic disciplines including medicine, psychology, the social sciences, the arts, teacher education, architecture and fine art. NTNU is well known for its close collaboration with industry, particularly with its R&D partner SINTEF, which gives it the biggest industrial link of any technical university in the world. The university's academics include three Nobel laureates in physiology or medicine: Edvard Moser, May-Britt Moser and John O'Keefe.
History
NTNU is a young institution with a long history. The university, in its current form, was established in 1996 by the merger of six research and higher education institutions in Trondheim, as follows:
Norwegian Institute of Technology (NTH), established in 1910
Museum of Natural History and Archaeology (VM), established in 1767
Norwegian College of General Sciences (AVH), established in 1922
Faculty of Medicine (DMF), established in 1975
Trondheim Academy of Fine Art (KiT), established in 1987
Trondheim Conservatory of Music, established in 1973
Prior to the merger, NTH, NLHT, DMF, and VM together constituted the University of Trondheim, which was a much looser organization. However, the university's roots go back to 1760, with the foundation of Det Trondhiemske Selskab (Trondheim Academy), which in 1767 became the Royal Norwegian Society of Sciences and Letters.
Engineering education in Trondheim began with Trondhjems Tekniske Læreanstalt (Trondheim Technical College) in 1870, and in 1910 the Norwegian Institute of Technology (NTH) officially opened. In 2010, NTNU celebrated the 250th anniversary of the Trondheim Academy as well as the 100th anniversary of NTH. The centennial was also marked by the publication of several books, among them a history of the university entitled "Turbulens og tankekraft. Historien om NTNU", which translates as "Turbulence and mindpower: The history of NTNU".
1700s
Det Trondhiemske Selskab (Trondheim Academy), Norway's first academic society, was founded in 1760. In 1767, it changed its name to the Royal Norwegian Society of Science and Letters (DKNVS) upon receiving recognition from the Danish-Norwegian king. DKNVS library – today known as NTNU Gunnerus Library – was founded in 1768, and is Norway's oldest library.
1800s
The first proposal for a Norwegian polytechnical institute was made in 1833. Trondhjems Tekniske Læreanstalt (Trondheim Technical College, TTL) was founded in 1870, and the newly formed school educated engineers in various fields. In 1898, TTL moved to a larger building in Munkegata. TTL was disbanded in the early 1900s, giving way to the Norwegian Institute of Technology.
1900–1968
In 1900, the Norwegian Parliament passed a resolution supporting the establishment of the Norwegian Institute of Technology (NTH) in Trondheim. NTH was officially opened on September 15, 1910. The parliament's resolution of 31 May 1900 originally specified five academic departments: Architecture and Urban Planning; Civil Engineering; Mechanical Engineering (a. General and b. Naval, i.e. ship and ship engine construction); Electrical Engineering; and Chemistry (a. General and b. Electro-chemistry).
In 1922, Norwegian College of Teaching in Trondheim (NLHT) opened at Lade gård.
In 1950, Stiftelsen for industriell og teknisk forskning (Industrial and Technical Research Centre) or SINTEF was founded as part of NTH and as its link to Norwegian industry.
1968–1996
The University of Trondheim (UNiT) was established in 1968, and the Department of Medicine (later the Faculty of Medicine) was established as part of UNiT in 1974. The university's Dragvoll campus was designed by the architect Henning Larsen. In 1984, NLHT became the Norwegian College of General Sciences (AVH) within UNiT.
1996–2016
On 1 January 1996, the University of Trondheim became the Norwegian University of Science and Technology (NTNU). As early as 1989, NTH Rector Karsten Jakobsen had broached the idea of a Norwegian University of Science and Technology in Trondheim. On March 21, 1995, the Parliament, by a narrow majority after a long debate, decided to establish NTNU in Trondheim. In 2012 the popular trivia game Kahoot! was founded by Johan Brand, Jamie Brooker and Morten Versvik in a joint project with the Norwegian University of Science and Technology. They teamed up with Professor Alf Inge Wang and were later joined by Norwegian entrepreneur Åsmund Furuseth.
2016–present
In 2014, the Norwegian Ministry of Education and Research asked the country's universities and university colleges to provide suggestions, observations, and ideas for rebuilding Norway's institutions of higher education. The context of the request was that the Norwegian government wanted to cut back on the number of institutions in the sector.
The NTNU board decided on 28 January 2015 to merge NTNU with the University Colleges of Sør-Trøndelag, Ålesund and Gjøvik to form a new university that would retain the university's current name, the Norwegian University of Science and Technology. The merger, which went into effect in January 2016, made NTNU Norway's largest single university.
Campus
NTNU has several campuses in Trondheim; Gløshaugen – for engineering and natural sciences – and Dragvoll – for humanities and social sciences – are the main two campuses. Other campuses include Tyholt for marine technology, Øya for medicine, Kalvskinnet for archaeology, Midtbyen for the music conservatory and Nedre Elvehavn for the art academy.
NTNU Gløshaugen combines historic NTH buildings with modern architecture. Combined, the campuses span a total area of 734,000 m2.
In addition to NTNU, the following research institutes are located at Gløshaugen, and cooperate closely with NTNU in several areas of research and development:
SINTEF, since its establishment in 1950, has its main departments at Gløshaugen. SINTEF was originally founded by NTH, but since 1980 has been an independent research institute.
In 1998, the Paper and Fibre Research Institute (PFI), an independent research institute, moved into a new building at Gløshaugen, relocating from Gaustad in Oslo.
In April 2013, the Norwegian Institute for Nature Research (NINA) moved into a new building south of the Natural Science building. NINA often works closely with SINTEF and NTNU.
NTNU has long considered the possibility of bringing the two largest campuses together at or near NTNU's Gløshaugen campus. In 2013, the Rector initiated a vision project and charged it with defining different perspectives on future development in a 50-year perspective. The same year, 2013, the Norwegian Ministry of Education and Research initiated a choice of concept study for the future co-localization of NTNU's two main campuses in Trondheim. The reports were presented in 2014, and both recommended bringing Dragvoll and Gløshaugen together, and better integrating them with the city. A unanimous NTNU board endorsed the recommendations in the vision report.
Merger
On 1 January 2016, the merger between NTNU and the university colleges in Gjøvik, Ålesund, and Sør-Trøndelag officially entered into force, and NTNU consequently had campuses in Ålesund and Gjøvik, as well as in Trondheim. 2016 was also a transitional year in terms of NTNU's leadership. On November 24, 2015, the new Board met for the first time. It was then extended to include board members from each of the three former university colleges and an external representative appointed by the Ministry of Education.
Organization
NTNU is governed by a board of 11 members, in accordance with the provisions of the Norwegian Act relating to universities and university colleges. Two of the members are elected by and from the students.
NTNU's overall budget in 2017 was 8.19 billion NOK, most of which came from the Norwegian Ministry of Education.
As a result of the university merger in 2016, the number of NTNU faculties increased from seven to nine – including the University Museum – with approximately 39,000 students and approximately 2,500 PhD students. The nine NTNU faculties are organized in 65 departments:
Faculty of Engineering
The Faculty of Engineering has eight departments:
Department of Civil and Environmental Engineering
Department of Energy and Process Engineering
Department of Geoscience and Petroleum
Department of Marine Technology
Department of Mechanical and Industrial Engineering
Department of Structural Engineering
Department of Manufacturing and Civil Engineering (in Gjøvik)
Department of Ocean Operations and Civil Engineering (in Ålesund)
Faculty of Information Technology and Electrical Engineering
The Faculty of Information Technology and Electrical Engineering has the following departments:
Department of Computer Science
Department of Electric Power Engineering
Department of Electronic Systems
Department of Engineering Cybernetics
Department of General Science
Department of Information Security and Communication Technology
Department of Mathematical Sciences
Department of ICT and Natural Sciences (in Ålesund)
Department of Software Engineering in masters
Department of Graphics Design
Faculty of Natural Sciences
The Faculty of Natural Sciences has eight departments:
Department of Biology
Department of Biomedical Laboratory Science
Department of Biotechnology and Food Science
Department of Chemical Engineering
Department of Chemistry
Department of Materials Science and Engineering
Department of Physics
Department of Biological Sciences Ålesund
Faculty of Architecture and Design
The Faculty of Architecture and Design has four departments:
Department of Architecture and Planning
Department of Architecture and Technology
Department of Design
Trondheim Academy of Fine Art
Faculty of Economics and Management
The Faculty of Economics and Management has four departments:
Department of Economics
Department of Industrial Economics and Technology Management
Department of International Business
NTNU Business School
Faculty of Medicine and Health Sciences
The Faculty is integrated with St. Olavs Hospital, Trondheim University Hospital, and is located in Campus Øya in Trondheim. Its main areas of research are translational research, medical technology and health surveys, biobanks and registers. In 2016 the faculty had about 350 master's degree students, 250 bachelor's degree students, 720 medical students and more than 500 students attending other courses.
The Faculty of Medicine and Health Sciences has eight departments:
Department of Clinical and Molecular Medicine
Department of Circulation and Medical Imaging
Department of Mental Health
Department of Neuromedicine and Movement Science
Department of Public Health and Nursing
Kavli Institute for Systems Neuroscience
Department of Health Sciences in Gjøvik
Department of Health Sciences in Ålesund
Faculty of Social and Educational Sciences
The Faculty of Social and Educational Sciences has seven departments:
Department of Education and Lifelong Learning
Department of Geography
Department of Psychology
Department of Social Anthropology
Department of Social Work
Department of Sociology and Political Science
Department of Teacher Education
Faculty of Humanities
The Faculty of Humanities has six departments:
Department of Art and Media Studies
Department of Historical Studies
Department of Interdisciplinary Studies of Culture
Department of Language and Literature
Department of Music
Department of Philosophy and Religious Studies
University Museum
The NTNU University Museum forms part of the university at the same organizational level as the faculties. It has two departments:
Department of Archaeology and Cultural History
Department of Natural History
Research
NTNU's history of research in engineering goes back to the early 20th century, when Norway's first electric railway, known as the Thamshavn Line, was developed and constructed in Trondheim as an AC-powered tramway, with Trondheim-based technologies. The tramway was launched in 1908 and remained in operation until 1974.
Today, research is part of the ongoing activities at the NTNU faculties as well as the University Museum. The university has 4,377 scientific staff who conduct research in more than 120 laboratories and at any time are running more than 2,000 research projects. Students and staff can take advantage of roughly 300 research agreements or exchange programs with 58 institutions worldwide.
NTNU has identified four Strategic Research Areas for 2014–2023: NTNU Energy, NTNU Health, NTNU Oceans and NTNU Sustainability, which were chosen on the basis of social relevance, professional quality and the potential for interdisciplinary cooperation.
Research centres
The university hosts six National Centres of Excellence (SFF), 12 Centres for Research-based Innovation (SFI), and three Centres for Environment-friendly Energy Research (FME), which are mainly funded by The Research Council of Norway. NTNU is also a partner in several centres with SINTEF.
The Trøndelag Health Study, with the HUNT Research Centre and HUNT Biobank located in Levanger, is organized under the Faculty of Medicine and Health Sciences.
Kavli Institute for Systems Neuroscience
The fifteenth Kavli Institute was inaugurated at NTNU in 2007, as the Kavli Institute for Systems Neuroscience, which was the fourth Kavli Institute in neuroscience in the world and the first Kavli Institute in Northern Europe. In 2012, Prime Minister Jens Stoltenberg opened the Norwegian Brain Centre as an outgrowth of NTNU's Kavli Institute. It is one of the largest research laboratories of its kind in the world.
Enabling technologies
NTNU funds fundamental long-term research and infrastructure through three Enabling Technologies: NTNU Biotechnology, NTNU Digital and Nano@NTNU.
Research excellence
NTNU Research Excellence is an initiative, launched in 2013, to develop elite researchers and research groups of international class; it includes both established and new initiatives. The established initiatives are financed by the Research Council of Norway, the EU, and private-sector R&D, while the new initiatives are funded from NTNU's own resources in light of strategic prioritization. These cover a number of research funding schemes, including the Outstanding Academic Fellows Programme, the Onsager Fellowship Programme, K.G. Jebsen Centres, EU projects, and ERC grants. NTNU participates in about 218 projects in the EU Horizon 2020 Framework Programme.
NTNU-SINTEF partnership
NTNU works closely with SINTEF, Scandinavia's largest independent research institute and one of the largest contract research organizations in Europe, which is integrated into NTNU Campuses. The cooperation of NTNU and SINTEF has been further developed through the project "Better Together", which was launched in 2014. The research collaboration includes a number of joint research laboratories, for example:
Gemini Centres: NTNU and SINTEF have established a wide range of Gemini Centres. Scientific groups with parallel interests coordinate their scientific efforts and jointly operate their resources.
MARINTEK: MARINTEK has more than 70 years of experience in developing cost-effective, high-performance ships, with model testing in its laboratories an important element. It provides testing facilities, expertise, and analytical tools for developing operationally efficient and safe ship concepts. MARINTEK is located at the NTNU Department of Marine Technology, and has recently been merged into SINTEF Ocean.
Rolls-Royce University Technology Centre: Rolls-Royce Marine, MARINTEK, and NTNU have had a close collaboration for more than 30 years, with cooperation in the development of propellers, propulsion systems, ship designs, and various ship equipment. The Rolls-Royce University Technology Centre is a long-term research collaboration between NTNU, MARINTEK and Rolls-Royce Marine, with a special focus on propellers and propulsion in waves and off-design conditions.
NTNU NanoLab: NTNU NanoLab aims to facilitate a collaborative research environment for scientists within the fields of physics, chemistry, biology, electrical engineering, materials technology and medical research. The research activities are carried out in collaboration with SINTEF.
SINTEF Energy Lab: SINTEF Energy Lab represents the next generation energy laboratories, and provide important tools for development of future energy solutions and power systems. It has a wide range of experimental facilities in different areas of Electric Power Engineering from High Voltage Technology to Power Electronics. The research activities are application-based and are carried out in close collaboration with NTNU.
The Gas Technology Centre: The Gas Centre seeks to exploit the synergism of multidisciplinary research into the natural gas value chain. The centre is the largest natural gas research- and educational centre in Norway, located at NTNU.
Norwegian Fuel Cell Hydrogen Centre, a joint initiative taken by SINTEF, NTNU and Institute for Energy Technology (IFE).
Publishing
To increase Open Access publishing, NTNU has established a publishing fund.
In 2008 NTNU's digital institutional repository was founded. The intention was to establish a full-text archive for the documentation of the scientific output of the institution, and to make as much as possible of the material available online, both nationally and internationally.
In addition to the research articles and books, intended for academics and researchers both inside and outside the university, NTNU disseminates news to the public about the institution and its research and results.
Universitetsavisa, which translates as The University Newspaper, is the news and discussion paper of the university, available only in Norwegian. It was established in 1991. For a period it existed in both printed and digital editions, but since 2002 it has been available only online.
GEMINI publishes research news from NTNU and the independent research group SINTEF in both English and Norwegian. It is published in both a printed and a digital version.
The Norwegian University of Science and Technology publishes the Nordic Journal of Science and Technology.
Ranking
According to the Times Higher Education World University Rankings published in March 2017, NTNU ranked first in the world among universities with the biggest corporate links, owing to its research collaboration with SINTEF. Statistically, 9.1 percent of NTNU's total research output is generated in collaboration with SINTEF, the largest academic-industry partnership in the world.
NTNU was ranked 205th in the world as of 2 June 2021 in the CWTS Leiden Ranking, which is based on bibliometric indicators.
NTNU was ranked 59th in Europe and 187th in the World in July 2020 in the Webometrics Ranking of World Universities for its presence on the web.
Rankings by ARWU Global Ranking of Academic Subjects 2021:
Engineering – Marine/Ocean Engineering: 2
Engineering – Chemical Engineering: 76–100
Engineering – Metallurgical Engineering: 40
Engineering – Automation & Control: 101–150
Studies
NTNU specializes in technology and the natural sciences, but also offers a range of bachelor's, master's and doctoral programmes in the humanities, social sciences, economics and public and business administration, and aesthetic disciplines. The university also offers professional degree programmes in medicine, psychology, architecture, the fine arts, music, and teacher education, in addition to technology.
According to the Norwegian Social Science Data Services, NTNU had 84,797 applicants in 2011 and a total student population of 19,054, of whom 9,062 were women. There were 6,193 students enrolled in the Faculty of Social Sciences and Technology Management, 3,518 students enrolled in the Faculty of Engineering Science and Technology, 3,256 students enrolled in the Faculty of Humanities, 3,090 students enrolled in the Faculty of Information Technology, Mathematics and Electrical Engineering, 2,014 students enrolled in the Faculty of Natural Sciences and Technology, 1,071 enrolled in the Faculty of Medicine, and 605 enrolled in the Faculty of Architecture and Fine Art.
About 3,500 bachelor's and master's degrees are awarded each year, and more than 5,500 participate in further education programmes.
NTNU has more than 300 cooperative or exchange agreements with 60 universities worldwide, and several international student exchange programmes. There are, at any given time, around 2,600 foreign students at the university.
NTNU Teaching Excellence
In 2015, NTNU established an initiative called ″NTNU Teaching Excellence″.
Student life
NTNU welcomes students from all over the world and offers more than 60 international master's programmes as well as PhD programmes, all taught in English. PhD vacancies are announced on the university website; PhD candidates are employed and paid as academic staff, receiving one of the world's best PhD fellowships as well as employment benefits under Norwegian law. There are no tuition fees at NTNU, although students pay a small semester fee ("semesteravgift"). However, international students must guarantee their own living expenses if they are not offered a scholarship.
NTNU students have a clear presence in the city of Trondheim. The most famous student organization is the Studentersamfundet i Trondhjem, also known as "the red round house" after its architectural form; every other year it organizes the cultural festival UKA. Another festival organized by students is the International Student Festival in Trondheim (ISFiT), which awards a student peace prize and draws internationally known speakers. EMECS-thon is a student-driven embedded systems marathon competition, organized by students from NTNU and implemented at some of the top universities worldwide, in which participants have 48 hours to develop an embedded project from scratch. The student sports organization, NTNUI, has roughly 10,000 members in its many branches; the largest groups include orienteering, cross-country and telemark skiing, but there are also groups for sports less common in Norway, like American football, lacrosse and aikido. A cabin and cottage organization owns several cabins in the countryside, available to students wishing to spend a few days away. There are also student fraternities, some of which conduct voluntary hazing rituals, which provide contact with potential employers and social interaction between students; alumni associations; religious and political organizations; clubs devoted to various topics such as innovation, human rights, beer, oatmeal, anime and computers; and The Association for Various Associations, a parody of the university's large number of student organizations. The university recently started to offer a "roof over your head" guarantee to new students coming to Trondheim until they find permanent housing.
NTNU is a pioneer of the concept of "student cabins": the university offers its students access to cabins on the outskirts of Trondheim, which they can enjoy on vacations and weekends.
Nobel laureates and notable people
Nobel laureates
1968 Lars Onsager, Chemistry (graduated as a chemical engineer from Norwegian Institute of Technology, NTH, in 1925),
1973 Ivar Giaever, Physics (graduated as a mechanical engineer from Norwegian Institute of Technology, NTH, in 1952)
2014 Edvard Moser, Medicine or Physiology (professor of neuroscience, NTNU), not NTNU alumni
2014 May-Britt Moser, Medicine or Physiology (professor of neuroscience, NTNU), not NTNU alumni
2014 John O'Keefe, Medicine or Physiology (visiting researcher, NTNU 2015– ), not NTNU alumni
Faculty and staff
Alumni and honorary doctors
In 2006, NTNU Alumni was founded, primarily as a meeting place and professional network for former students and staff of NTNU and its precursors. The network is now also open to current employees and students. In 2014 the number of members was around 30,000.
NTNU annually awards honorary doctorates to scientists and others who have made an extraordinary contribution to science or culture.
See also
Gemini (magazine), research news from NTNU and SINTEF
Centre for Renewable Energy
References
External links
Official website
NTNU – facts and figures
An overview of some of NTNU's laboratories
An overview of the campus in Gløshaugen
Universities and colleges in Norway
Science and technology in Norway
Education in Trondheim
Buildings and structures in Trondheim
Educational institutions established in 1996
1996 establishments in Norway
Architecture schools
Members of the European Research Consortium for Informatics and Mathematics
Engineering universities and colleges in Norway
Universities and colleges formed by merger in Norway |
51771271 | https://en.wikipedia.org/wiki/Stanford%20Mobile%20Inquiry-based%20Learning%20Environment%20%28SMILE%29 | Stanford Mobile Inquiry-based Learning Environment (SMILE) | Stanford Mobile Inquiry-based Learning Environment (SMILE) is a mobile learning management software and pedagogical model that introduces an innovative approach to students' education. It is designed to promote higher-order learning skills such as applying, analyzing, evaluating, and creating. Instead of a passive, one-way lecture, SMILE engages students in an active learning process by encouraging them to ask, share, answer and evaluate their own questions. Teachers act more as a “coach” or “facilitator”. The software generates transparent real-time learning analytics so teachers can better understand each student's learning journey, and students acquire deeper insight into their own interests and skills. SMILE is valuable for aiding the learning process in remote, poverty-stricken, underserved regions, particularly where teachers are scarce. SMILE was developed under the leadership of Dr. Paul Kim, Wilson Wang, and Rayan Malik.
The primary objective of SMILE is to enhance students’ questioning abilities, encourage greater student-centric practices in classrooms, and enable a low-cost mobile wireless learning environment.
History
SMILE was first developed with the help of Seeds of Empowerment (Seeds), a global non-profit 501(c)(3) organization founded in 2009 by Dr. Paul Kim. Since 2009, the NGO has helped run pilot studies of the software around the world, including in countries such as South Africa, the United Arab Emirates, Ghana, and Tanzania.
Kim and his research assistants at Stanford University were the main contributors to the initial technical design of SMILE. The PocketSchool research study investigated a portable ad-hoc network solution that enabled a multi-user interactive learning environment in areas where resources such as electricity or Internet access are limited. This research was part of multiple projects affiliated with Stanford's interdisciplinary Programmable Open Mobile Internet initiative, supported by the National Science Foundation.
SMILE was further developed by GSE-IT at the Graduate School of Education of Stanford University, and partnering organizations such as Edify. The license of trademarks, software, hardware, and technical design remains with the Office of Technology Licensing (OTL) at Stanford University.
SMILE has been listed as one of the most innovative tools for the schools of tomorrow by the International Commission on Financing Global Education Opportunity, chaired by Gordon Brown, former Prime Minister of the United Kingdom, in its 2016 report.
How it works
SMILE is composed of two main applications: a mobile-based question application for students, along with a management system for teachers. The software allows students to create open-ended or multiple-choice questions on mobile phones during class to share with their classmates and teachers.
The classroom management software allows students to share, respond, and rate questions on criteria such as creativity and depth of analysis. These applications can communicate via either a local network or the Internet. The local ad hoc network (SMILE Ad-hoc), shown in Fig. 1, was designed for developing regions without any type of network; the Internet version (SMILE Global) was designed for areas with mobile networks linked to the Internet. SMILE Ad-hoc enables students to engage in SMILE activities and exchange inquiries with peers in their classrooms or their own school. SMILE Global enables students around the world to exchange their inquiries regardless of their location. Both SMILE Ad-hoc and SMILE Global allow students to incorporate multimedia components in their questions: SMILE Ad-hoc uses images, and SMILE Global uses images, audio, and video.
Students are asked to submit their questions to the server management application, which then collects all the students’ questions and sends them back to the students' mobile-based application so that each student can answer his or her peers’ questions. Teachers can also enter questions of their own to test comprehension. While responding to their peers’ questions, students are also asked to rate the questions on a scale of 1 (poor) to 5 (excellent). Teachers and students can also develop their own standards for rating questions.
Question ratings
The instrument is designed to identify performance variations and enables organizers to define five different levels of question quality.
The data management software gathers these responses, records the time students take to respond, and saves this data for the teacher to analyze. Students are also able to view how everyone's question was rated and whether they answered questions correctly.
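The aggregation the management software performs can be sketched in a few lines. This is an illustrative model only; the field names and data shapes below are invented for the sketch, not SMILE's actual schema:

```python
from statistics import mean

def summarize(responses):
    """Aggregate per-question statistics from student responses.
    `responses` is a list of dicts with hypothetical keys:
    question_id, correct (bool), rating (1-5), seconds_taken."""
    summary = {}
    for r in responses:
        s = summary.setdefault(r["question_id"],
                               {"ratings": [], "times": [], "correct": 0, "total": 0})
        s["ratings"].append(r["rating"])
        s["times"].append(r["seconds_taken"])
        s["total"] += 1
        s["correct"] += r["correct"]  # bool counts as 0 or 1
    return {q: {"avg_rating": mean(s["ratings"]),
                "avg_time": mean(s["times"]),
                "pct_correct": s["correct"] / s["total"]}
            for q, s in summary.items()}
```

A report like this gives the teacher the per-question ratings, response times, and correctness rates described above, and lets students see how their questions were received.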
As facilitators of this system, teachers have the ability to choose the “mode of learning” which describes the different forms or activities of questioning that students engage in. The following is a list of the various operating modes.
Operating modes
SMILE has five operating modes. The facilitator chooses the mode for each activity.
Teachers can promote a classroom environment that is either collaborative, competitive, or both, depending on what they deem will motivate their students more. When creating questions in teams, the learning environment calls for collaboration. When ranking each team's questions, the activity turns into a competitive game. Additionally, generating multiple choice questions is a critical facet of this learning model because it leads students to do thorough research to find the right answer and distractors. Verifying that distractors are not feasible answers to the question also reinforces the student's learning of the material.
In lieu of test scores, the ratings of the questions can be used to assess learning outcomes. Analyzing a student's ability to rate other students' questions can be used to gauge critical thinking skills.
SMILE on Google Assistant
SMILE on Google Assistant is composed of three main features: the Question Evaluator Quiz, the Recent News Feature, and the School Subjects Feature. The application enables students to actively engage in their learning by asking questions, and its development is led by Wilson Wang and Rayan Malik.
The Question Evaluator Quiz helps students identify effective questions by asking them to rank open-ended questions using a pre-determined SMILE rubric based on Bloom’s Taxonomy. The quizzes are divided into different subjects, such as Economics or History, and students are required to rank five questions from their chosen subject from Level 1 to 5. Students and their teachers are provided a visual report after the quiz is completed that includes their overall score as well as personalized learning tools to help the student improve both their question analysis skills and conceptual understanding.
The Recent News Feature encourages students to actively read the news while teaching them how to ask insightful questions. Students are provided with 10 news articles about different topics from a variety of news sources. After reading all the articles, students select an article to complete the interactive exercise with. In the exercise, students are asked to create multiple questions using specific keywords from the article that they selected. The questions students create are ranked from Level 1-5 using artificial intelligence software and a visual report is created that provides a comprehensive overview of their performance in the interactive exercise.
The School Subjects Feature enables students to learn content from their curriculum via inquiry-based learning. Students select a topic from their curriculum, such as Government Intervention in Economics, and then ask questions about the topic using specific keywords. The questions students create are ranked from Level 1-5 using artificial intelligence software. This feature is still in development and is currently available for the British A-Level curriculum.
SMILE on Google Assistant is also currently being designed as a separate application for iOS and Android Devices.
Practice
Because it is content-agnostic, SMILE is intended for a wide range of educational settings, and some schools have successfully integrated it into their curriculum. SMILE makes effective use of mobile devices and the "flipped" classroom model.
The teacher has multiple features at his or her disposal. The Activity Flow window allows the teacher to activate the various stages of the activity. The Student Status window displays the current status of each student. The Scoreboard displays each student's responses. The Question Status window displays metadata about the question. The Question window displays the question itself and its predetermined correct answer. The Top Scorers window displays which student achieved the highest score and which question received the highest ratings. The Save Questions button allows the teacher to save data from a given exercise to the server.
SMILE in Math Classrooms
Math classes in Argentina and Indonesia have implemented SMILE as a learning model. In a typical setting, students go through several phases of learning. They are: Introduction and device exploration, Prompt for problems, Student grouping and generating questions, Question generation, Question solving, Result review, Reflection, and Repetition & enrichment. Song, Kim, and Karimi describe this in their 2012 paper, Inquiry-based Learning Environment Using Mobile Devices in Math Classroom.
After familiarizing themselves with the mobile devices, students were asked to create challenging questions that “even their teachers would not be able to solve.” Students were also grouped in teams to generate questions, with each member attempting to solve their own questions and verify the answers. In addition to using their mobiles, students took pictures of supplementary figures or graphics that they drew to illustrate the question. At this time, facilitators would walk around to guide the students and help them discern what makes a good question. Afterwards, students solved their peers’ questions and rated them according to quality. Facilitators would monitor these activities in real time and allow students to review the correct answer and student/group ranking of the questions. After reviewing, students were asked to explain their process of generating math questions and how they solved them.
SMILE in healthcare
A typical SMILE session involves participants creating questions about the healthcare topics most prevalent and of interest to them. Questions can include captured images from references or physical environments, or audio and video recordings of patients. SMILE then collects all inquiries and redistributes them for participants to answer. Once participants respond, they are asked to rank their peers’ questions and to present the rationale behind their own questions. Facilitators observe and analyze the quality of the questions according to relevancy and clarity. In the end, a review of correctly answered questions, average ratings, and the number of questions generated is shown.
Technology
Participants in the SMILE Plug model must be physically present and connected to the ad-hoc SMILE WiFi network. A SMILE Plug router contains the SMILE server software, KIWIX, Khan Academy Lite, other various open education resources including open education textbooks, and four different coding language school programs.
SMILE Global enables students around the world to exchange their inquiries regardless of their location. People who are interested in a particular topic (e.g., 'health') can search the keyword and also create their own questions, respond to existing questions, or comment on questions and answers. The SMILE Global server is accessible in the cloud.
Cost
The cost of implementing a SMILE activity depends on the infrastructure available at the school, but minimum costs are $80 per mobile phone (one for every 2-3 students), $300 for a notebook laptop computer, and $100 for a local router. Mobile devices provide a low-cost alternative to the traditional computer lab model.
In 2012, the SMILE team partnered with Marvell to create SMILE Consortium.
SMILE Plug on Raspberry Pi
In 2016, the SMILE Plug was implemented on a Raspberry Pi 3. SMILE boots in about one minute when plugged into USB power. In developing countries with limited access to electricity, a USB battery pack is required. The Pi, designed for use in areas of low internet connectivity, provides a local WiFi access point. The Plug requires a microSD card, which acts as the hard drive and local repository of the offline resources. To update the SMILE Plug, one swaps the previous microSD card for a newer one with updated resources.
SMILE Global and Natural Language Processing
In 2017, SMILE Global will interface with a natural language processing API. The SMILE team has prepared a databank of questions pre-categorized according to the question quality rubric. The API will return a rating for each question that is submitted on SMILE Global. The immediate response and feedback will give students a chance to make improvements to their questions in real time.
Projects
SMILE has reached over 25 countries, including the United States, India, Argentina, Mexico, Costa Rica, Colombia, Nepal, China, Uruguay, Indonesia, South Korea, South Africa, Sri Lanka, Pakistan, and Tanzania.
Argentina
In 2012, the Ministry of Education in Buenos Aires looked into modifying the prohibition on cell phone use in the classroom that had been in effect since 2006. In addition to using SMILE, educators can now create executable programs on mobile devices to help facilitate learning in the classroom.
SMILE workshops on Music, Language Arts, and Mathematics were implemented in Misiones and Talarin in August 2011. By using an exploratory learning pedagogy, students were able to compose songs. The power of mobile devices to reach the last mile and the last school is most evident where electricity and internet access is not guaranteed.
Chile
Due to the centralized nature of Chile, it is harder for education to reach the poorest areas of the country. The concept of a mobile classroom, or "pocket school," connected and tied together by a network of mobile phones, is an attempt to take advantage of the resources already available in the most underserved communities.
Indonesia
Students were asked to generate math questions covering a wide range of topics, from triangle-angle sum theorem, to fractions, areas, and diameters.
South Korea
SMILE Global was tested with medical students at Chungbuk National University. Criteria for high-quality questions, criteria rubrics, and examples of high- and low-quality questions were discussed with students first. As the students were already very experienced in using technology, they spent 60% of their time on the inquiry-making task.
United Arab Emirates
SMILE Global was tested with fifty-four KG2 students in August 2020 during a study led by Rayan S. Malik in Dubai. Students were introduced to SMILE Global via workshops and spent seven weeks learning content by interacting with SMILE Global. Five different learning outcomes, including critical thinking and conceptual understanding skills, were quantified to examine the impact of SMILE Global on students' learning outcomes. The study found that while SMILE Global had no statistical effect on students' conceptual understanding, it significantly improved their question analysis skills and confidence.
Findings
Teachers need an initial training period and some follow-up mentoring so they can facilitate questions. Tailoring the content of the training to the local environment is crucial. Without putting the benefits of SMILE into the local context, teachers and students will find no compelling reason to adopt the pedagogy.
SMILE worked best when officials, along with civil society organizations, universities, and local businesses, worked together to bring the software to classrooms.
Cohesiveness of Integration
The success rate of implementing SMILE is dependent on how cohesively an inquiry-based pedagogy is tied to the curriculum taught at a school. While SMILE can be implemented with the existing curriculum (for example, with students asking simple recall math questions), it is most effective as an additional platform to foster critical thinking. Higher teacher motivation, better classroom integration, and higher frequency of use are three factors that increase SMILE retention.
Challenges
Cultural norms governing relations between adults and students and situational authority roles may inhibit student questioning.
SMILE was more difficult to implement in areas where rote memorization pedagogies were typical teaching methodologies. Some students found it hard to generate their own questions, given their previous classroom experiences with rote memorization activities.
Additionally, students with little experience using smartphones took longer to understand and use the technology. Eventually, however, they adjusted after exploration.
References
Educational technology projects
Educational software
Peer learning
Critical pedagogy |
47716871 | https://en.wikipedia.org/wiki/Greenhouse%20Software | Greenhouse Software | Greenhouse Software (commonly known as Greenhouse) is an American technology company headquartered in New York City that provides recruiting software as a service. It was founded in 2012 by Daniel Chait and Jon Stross.
The company raised $2.7 million in a seed round in 2013, $7.5 million in its Series A round in 2014, $13.6 million in its Series B round in 2015, $35 million in its Series C round in 2015, and $50 million in its Series D round in 2018. Research firm CB Insights, in a study commissioned by The New York Times, listed Greenhouse among fifty startups predicted to become unicorns, companies with at least a $1 billion valuation.
History
Early history: Founding, seed funding, and Series A round (2012–2014)
Greenhouse Software cofounder Daniel Chait founded banking software startup company Lab49 from his kitchen. While running Lab49, Chait had to deal with the pressures and obligations of recruitment. After he sold his stake in Lab49 in 2011, he decided to focus his next business venture on recruiting, which he told TechCrunch in a November 2013 interview "seemed like a big opportunity". Greenhouse was founded in 2012.
On November 14, 2013, Greenhouse raised $2.7 million in a seed round led by Social+Capital Partnership and Resolute Ventures. The angel investors who participated in the seed round were Nick Ganju (a founder of ZocDoc), Seth Goldstein (a founder of DJZ and Turntable.fm), Thatcher Bell (a founder of DFJ Gotham Ventures), Thomas Lehrman (a founder of Gerson Lehrman Group), and Bill Lohse (an early investor in Pinterest). The funding round's aim was to expand the engineering, sales, and marketing teams, as well as increase the number of customers Greenhouse could take on.
In August 2014, Greenhouse raised $7.5 million in its Series A round led by Social+Capital Partnership. Resolute Ventures and Felicis Ventures were part of the funding round. Social+Capital Partnership's Mamoon Hamid joined Greenhouse's board. The company aimed to use the money to increase hiring and to market its software.
Later history: Series B round and Series C round (2015–present)
In March 2015, Greenhouse secured $13.6 million in its Series B round led by Benchmark. The Social+Capital Partnership, Felicis Ventures and Resolute Ventures participated in the funding round. Benchmark's Matt Cohler joined Greenhouse's board of directors. According to CEO Daniel Chait, the company's valuation grew by 2.5 times since the Series A round. The company aimed to use the money to expand the sales team and to handle security audits and internationalization. It further planned to improve its product to better predict ways to get superior candidates.
In August 2015, Greenhouse received $35 million in its Series C round led by Thrive Capital. Between its Series B and Series C rounds, the company nearly doubled the number of customers, from 450 to 800. The company had 125 employees when it closed its Series C round, up from 45 in the beginning of 2015. Because Greenhouse had only 10% of its customer base outside of the United States, it planned to use the funding for increasing the sales team outside the U.S. They also planned to improve the site infrastructure and complete several confidential research and development ventures. Greenhouse is based in New York City but has an office in San Francisco and intends to add offices abroad.
In August 2015, at the request of The New York Times, the research firm CB Insights ran tests based on funds raised and employee retention rate to predict which startups would become the next unicorns, companies with at least a $1 billion valuation. CB Insights predicted that Greenhouse would become a unicorn. In 2015, Greenhouse Software was on the Forbes cloud 100 list.
Product
Greenhouse provides recruiting software as a service, selling annual licenses that allow companies to use its software. Greenhouse collects job seekers' applications from different routes, such as employment websites and referrals, into one dashboard. Greenhouse is an open platform that uses an application programming interface (API) to facilitate the aggregation. To determine which job posts are more effective, it allows companies to do A/B testing. Greenhouse lets companies create a uniform interview procedure so candidates can be judged against the same rubric. It aggregates a candidate's résumé and interview feedback for comparison with the job opening's specifications, and it allows companies to compare candidates against each other and to compare their hiring metrics against industry standards.
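The A/B testing of job posts reduces to comparing view-to-application conversion rates per variant. A minimal sketch follows; the function and data shapes are invented for illustration and are not Greenhouse's actual API:

```python
def best_job_post(variants):
    """Pick the job-post variant with the highest conversion rate.
    `variants` maps a hypothetical post name to (views, applications)."""
    return max(variants, key=lambda name: variants[name][1] / variants[name][0])

# Example: variant B converts 30/400 = 7.5%, beating A's 25/500 = 5.0%.
```

A real system would also check that the difference is statistically significant before declaring a winner.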
See also
Tech companies in the New York metropolitan area
References
External links
American companies established in 2012
Software companies established in 2012
Privately held companies based in New York City
Professional networks
Recruitment software
Software companies based in New York City
Software companies of the United States
2012 establishments in New York City |
38863 | https://en.wikipedia.org/wiki/Motorola%2068010 | Motorola 68010 | The Motorola MC68010 processor is a 16/32-bit microprocessor from Motorola, released in 1982 as the successor to the Motorola 68000. It fixes several small flaws in the 68000, and adds a few features.
The 68010 is pin-compatible with the 68000, but is not 100% software compatible. Some of the differences were:
The MOVE from SR instruction is now privileged (it may only be executed in supervisor mode). This means that the 68010 meets Popek and Goldberg virtualization requirements. Because the 68000 offers an unprivileged MOVE from SR, it does not meet them.
The MOVE from CCR instruction was added to partially compensate for the removal of the user-mode MOVE from SR.
It can recover from bus faults, and re-run the last instruction, allowing it to implement virtual memory.
The exception stack frame is different.
It introduced a 22-bit Vector Base Register (VBR) that holds A[31:10] of the 1 KiB-aligned base address for the exception vector table. The 68000 vector table was always based at address zero.
"Loop mode" which accelerates loops consisting of only two instructions, such as a MOVE and a DBRA. The two-instruction mini-loop opcodes are prefetched and held in the 6-byte instruction cache while subsequent memory read/write cycles are only needed for the data operands for the duration of the loop. It provided for performance improvements averaging 50%, as a result of the elimination of instruction opcodes fetching during the loop.
In practice, the overall speed gain over a 68000 at the same frequency is less than 10%.
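The relocatable exception vector table introduced with the VBR can be modeled with simple address arithmetic. The sketch below illustrates the dispatch calculation only (the function name is invented; this is not processor microcode):

```python
def exception_vector_address(vbr, vector_number):
    """Model 68010 exception dispatch: the VBR supplies bits A[31:10] of a
    1 KiB-aligned table base, and each vector entry is 4 bytes wide."""
    base = vbr & ~0x3FF          # low 10 bits of the base are forced to zero
    return base + 4 * vector_number

# On the 68000 the table is fixed at address 0 (equivalent to VBR = 0);
# the 68010 can relocate it, e.g. per-process under a virtual-memory OS.
```

For instance, with VBR = 0 the bus-error vector (number 2) sits at address 8, exactly as on the 68000, while a nonzero VBR moves the whole table.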
The 68010 could be used with the 68451 MMU. However, aspects of its design, such as its one-clock memory-access penalty, made this configuration unpopular. Some vendors used their own MMU designs, such as Sun Microsystems in their Sun-2 workstation and Convergent Technologies in the AT&T UNIX PC/3B1.
Usage
The 68010 was never as popular as the 68000. However, due to the 68010's small speed boost over the 68000 and its support for virtual memory, it can be found in a number of smaller Unix systems, both with the 68451 MMU (for example in the Torch Triple X), and with a custom MMU (such as the Sun-2 Workstation, AT&T UNIX PC/3B1, Convergent Technologies MiniFrame, NCR Tower XP and HP 9000 Model 310) and various research machines. Most other vendors stayed with the 68000 until the 68020 was introduced.
Atari Games used the 68010 in some of their arcade boards such as the Atari System 1. Some owners of Amiga and Atari ST computers and Sega Genesis game consoles replaced their system's 68000 CPU with a 68010 to gain a small speed boost.
Motorola 68012
The Motorola MC68012 processor is a 16/32-bit microprocessor from the early 1980s. It is an 84-pin PGA version of the Motorola 68010. The memory space was extended to 2 GB, and a read-modify-write cycle (RMC) pin, indicating that an indivisible read-modify-write cycle is in progress, was added in order to help the design of multiprocessor systems with virtual memory. Other processors had to hold off memory accesses until the cycle was complete. All other features of the MC68010 were preserved.
The expansion of the memory space caused an issue for any programs that used the high byte of an address to store data, a programming trick that was successful with those processors that only have a 24-bit address bus (68000 and 68010). A similar problem affected the 68020.
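Why the high-byte trick breaks on a wider bus can be sketched as follows. The helper names and mask are invented for illustration:

```python
ADDR_MASK_24 = 0xFFFFFF  # the 68000/68010 drive only 24 address pins

def tag_pointer(addr, tag):
    """Pack one byte of data into the (unused) top byte of a pointer."""
    return ((tag & 0xFF) << 24) | (addr & ADDR_MASK_24)

def bus_address_24(ptr):
    """What a 24-bit address bus actually sees: the tag byte is ignored."""
    return ptr & ADDR_MASK_24

# On a full-width bus (68012, 68020) the tag byte becomes address bits,
# so the tagged pointer no longer points at the original location.
```

On the 24-bit bus the tagged pointer still dereferences to the same location; on a wider bus the raw pointer value is a different address, which is exactly the failure mode described above.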
References
External links
68010 images and descriptions at cpu-collection.de
68k microprocessors
32-bit microprocessors |
19192569 | https://en.wikipedia.org/wiki/Comparison%20of%20online%20backup%20services | Comparison of online backup services | This is a comparison of online backup services.
Online backup is a special kind of online storage service; however, various products that are designed for file storage may not have features or characteristics that others designed for backup have. Online backup usually requires a backup client program; a browser-only online storage service is usually not considered a valid online backup service.
Online folder sync services can be used for backup purposes. However, some may not provide a safe online backup: if a file is accidentally corrupted or deleted locally, whether it remains retrievable depends on the versioning features of the sync service.
Comparison
Legend
Windows/Linux/Mac/iOS/Android/BlackBerry: Supported operating systems for thick client (native binary application), which provide background data transmission and setting services.
Zero knowledge: The service provider has no knowledge of the user's encryption key, ensuring privacy of the backup data.
Secure Key Management: If yes, the user holds and controls the encryption key. If no, the service provider holds and controls the encryption key.
Payment options/plans:
Limited MB plan: Pay per computer. Additional fee for storage over a threshold.
Unlimited MB plan: Pay per computer. Storage per computer is unlimited.
$/MB plan: Pay per unit of storage, but unlimited computers may share that storage.
Cloud hosted Net Drive: Cloud can serve storage over WebDAV, SMB/CIFS, NFS, AFP or other NAS protocol, allowing files to be streamed from the cloud. A change made to the cloud is immediately accessible to applications on all clients without needing to pre-download (sync) the file in full.
Sync: Synchronization between computers, and/or mobile devices (PDA, MDA,...)
Public Internet file hosting
Restore via physical media
Server location: Countries where physical servers are located. Where the data will be located.
Still in Beta version
Whether the desktop client (if available) can detect and upload changes without scanning all files.
Many backup services offer a limited free plan, often for personal use. Often it is possible to increase the free backup limit through coupons, referrals, or other means that are not included in this column. This column also does not include free trials that are only available for a limited period of time.
External hard drive support: Can refer to an alternate backup destination or whether the service can back up external drives.
Hybrid Online Backup works by storing data to local disk so that the backup can be captured at high speed, and then either the backup software or a D2D2C (Disk to Disk to Cloud) appliance encrypts and transmits data to a service provider. Recent backups are retained locally, to speed data recovery operations.
Unlimited BW: If bandwidth capping or limits are used on accounts.
Comments
Acronis Up to five PCs, always incremental backups, remote access from the web
Backblaze Data de-duplication; block-level incremental.
Barracuda Backup Service Data de-duplication; real-time hybrid on-site/off-site data back-up.
BullGuard Backup 5 PC/license, fast upload speeds, mobile access, encrypted transfer and storage, password-protected settings, free 24/7 support.
Carbonite Block-level incremental, Home or Pro editions. iPhone/ Blackberry/ Android App available to remotely access data from the online backup (For Pro: Users of the computer which are backed up, not available for the Administrator of the Pro). Can manually select files to upload that are larger than 4 GB.
Cloudberry Backup Image & File Based backups, data de-duplication, block-level and multiple cloud providers supported.
CloudJuncxion Decentralized multi-cloud backup with integrated sharing, sync, backup, and Cloud NAS. Fault-tolerance against failure of a constituent cloud.
Crashplan
Unlimited destinations. Data de-duplication; block-level incremental. Can run server-free, exchanging backup space with friends and family.
Datashield High-level encryption, personalized encryption key, shared cloud drive, sync folder functionality.
Dolly Drive
Cloud storage that is specifically designed for the Mac. Also allows users to store files exclusively in the cloud for seamless access on any computer or mobile device.
Diino iPhone/Android app available.
Dropbox Data de-duplication, delta sync, iPhone/Android/Blackberry app available.
Dropmysite website backup, database backup, SFTP support, free up to 2 GB.
Egnyte Delta sync, Google Docs sync, user and group management
ElephantDrive Auto-transfer from defunct Xdrive.
F-Secure [Steek acquired by F-Secure July 2009]
Humyo Humyo was acquired by Trend Micro and will become part of Trend Micro SafeSync. Humyo no longer accepts new clients.
IASO Backup Advanced data reduction technology. Data de-duplication mechanism. High level of scalability and cost effectiveness.
ICFiles Secure File Share Storage. Proprietary license download client. High level security, SOC 2 TYPE II, ISO 27001,27017, 27018, CSA, PCI, HIPAA, CJIS, EU Model Clauses, on request private servers for FISMA and FedRAMP.
IDrive Proprietary license download client. Automatic Selection. Continuous Data Protection. "Virtual drive" explorer.
Jungle Disk Proprietary license download client sample code.
KeepVault Real-time hybrid on-site and offsite data backup.
Memopal Cross-user de-duplication, delta sync.
MiMedia Initial seed via a MiMedia-owned external hard drive available (no extra cost, shipping included).
Mozy Data de-duplication; block-level incremental. "Mozy Data Shuttle" physical seeding service available for extra fee.
Replicalia Professional Backup for Professional Data.
SpiderOak Data de-duplication."Zero Knowledge" encryption.
StoreGrid Cloud Byte-level incremental backup, local backup, Disk Image backup—BMR and physical seeding.
Syncplicity Google Docs sync, Central Management with Business Console.
TaniBackup (Cheap Backup) Up to 10 TB per standard package. Extensible to over 300 TB per single account. Access via SMB/CIFS or SSH (including sftp, scp, rsync and so on). 100 GB for $2 per month.
Tarsnap Client source available; data de-duplication; block-level incremental.
TeamDrive Store encrypted data on any WebDAV server; supports working offline; files can be commented; built-in support for conflict resolution.
Unitrends Vault2Cloud Data de-duplication; hybrid on- and off-premises data backup; physical seeding.
UpdateStar Online Backup Data de-duplication; block-level incremental.
Usenet backup The method of storing backup data on Usenet.
Windows Live Mesh Replaces Windows Live Sync and Windows Live Folders.
Zetta Enterprise-grade Online Backup Supports Linux, Mac OS, and Windows; high-speed WAN optimization; SAS 70-certified data centers.
Zmanda Cloud Backup Available in German and Japanese languages, supports MS SQL Server, MS Exchange, SharePoint, MySQL Database, System State, Oracle.
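Several of the services above advertise "block-level incremental" backup: a file is split into fixed-size blocks, each block is hashed, and on the next run only blocks whose hash changed are uploaded. A minimal self-contained sketch of the idea (the block size, hash choice, and class names here are illustrative assumptions, not any particular service's protocol):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class BlockIncremental {
    static final int BLOCK_SIZE = 4; // tiny for illustration; real services use e.g. 4 MB

    // Hash every fixed-size block of the data.
    static List<String> blockHashes(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        List<String> hashes = new ArrayList<>();
        for (int off = 0; off < data.length; off += BLOCK_SIZE) {
            int len = Math.min(BLOCK_SIZE, data.length - off);
            md.reset();
            md.update(data, off, len);
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) sb.append(String.format("%02x", b));
            hashes.add(sb.toString());
        }
        return hashes;
    }

    // Return indices of blocks that are new or differ from the previously stored hashes.
    static List<Integer> changedBlocks(List<String> oldHashes, List<String> newHashes) {
        List<Integer> changed = new ArrayList<>();
        for (int i = 0; i < newHashes.size(); i++) {
            if (i >= oldHashes.size() || !oldHashes.get(i).equals(newHashes.get(i))) {
                changed.add(i);
            }
        }
        return changed;
    }

    public static void main(String[] args) throws Exception {
        byte[] v1 = "AAAABBBBCCCC".getBytes(StandardCharsets.UTF_8);
        byte[] v2 = "AAAAXXXXCCCC".getBytes(StandardCharsets.UTF_8); // middle block edited
        // Only block 1 differs, so only that block would be re-uploaded.
        System.out.println(changedBlocks(blockHashes(v1), blockHashes(v2)));
    }
}
```

Real implementations combine this with compression and cross-file de-duplication, storing each unique block only once.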
Versioning
Any changes can be undone, and files can be undeleted.
Acronis Supports detailed history of changes to files with browsing by date or version number.
Backblaze Old versions of files are kept for 30 days by default; One-year or Forever Retention is optional.
Box Versioning is included in paid subscriptions.
Carbonite Keeps old versions for up to three months: one version for each day of the past week, one for each of the previous three weeks, and one for each of the previous two months that the file has been backed up. Versioning is available for Windows only; not available for Mac.
CloudJuncxion Supports multiple versions less than a week old, one version less than two weeks old, one version less than a month old, and one version older than a month.
Crashplan Options: All, or staged (daily, then weekly, etc.).
Cubby All previous versions and deleted files are kept until explicitly removed by the user or the user runs out of space. All deleted files and previous versions count towards the storage limit.
Dolly Drive Yes. Keeps unlimited versions of files.
Dropbox By default, Dropbox saves a history of all deleted and earlier versions of files for 30 days for all Dropbox accounts.
Dropmysite Provides incremental backups with the ability to download every snapshot.
ElephantDrive Any number of versions can be kept for any amount of time.
Google Drive Old versions of files are kept for 30 days or 100 revisions. Revisions can be set not to be automatically deleted.
IASO Backup All versions of files can be kept for different periods of time, starting from 1 month to 1 year or more.
ICFiles No deleted files are kept; auto-delete runs every 24 hours.
iDrive Up to 10 old versions of files are kept forever (until explicitly removed).
Mozy Old versions of files are kept for 30 days; Pro 60 days, Enterprise 90 days.
PowerFolder Five versions are kept online. In the client it is configurable how many versions to store locally.
SOS Online Backup (Infrascale) All versions are kept. Only the largest counts towards the storage limit.
SpiderOak All versions are kept. All files can be undeleted.
SugarSync Five versions are kept. Only the most recent version of each of existing files as well as deleted files count towards the storage limit.
Sync.com Sync.com saves a history of all deleted and earlier versions of files, 30 days for free accounts, indefinitely for premium plans.
Syncplicity Old versions of files, as well as deleted files, are kept for 30 days. Configurable for business or enterprise-class services.
TeamDrive All previous versions are kept and can be restored.
Tresorit Versioning is included in paid subscriptions.
Zetta Enterprise-grade Online Backup All versions are kept. All files can be undeleted.
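The "staged" retention several entries describe (all recent versions kept, then one per day, week, or month as versions age) amounts to thinning the version list by age bucket. A hypothetical sketch of such a pruning rule, with made-up window sizes since each service chooses its own:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class StagedRetention {
    // ageDays: version ages in days, newest first. Keep every version under 7 days old,
    // one per week between 7 and 30 days, and one per month beyond that.
    static List<Integer> keep(List<Integer> ageDays) {
        Set<String> bucketsSeen = new LinkedHashSet<>();
        List<Integer> kept = new ArrayList<>();
        for (int age : ageDays) {
            String bucket;
            if (age < 7) bucket = "day-" + age;           // keep all recent versions
            else if (age < 30) bucket = "week-" + (age / 7);
            else bucket = "month-" + (age / 30);
            if (bucketsSeen.add(bucket)) kept.add(age);   // first version in each bucket wins
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Integer> ages = List.of(0, 1, 2, 8, 9, 15, 40, 45, 70);
        // All three recent versions survive; the 9- and 45-day-old versions are pruned
        // because their weekly/monthly buckets are already occupied.
        System.out.println(keep(ages));
    }
}
```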
Other features and limitations
Other notable limitations or features.
Baidu Cloud Registration requires a verified phone number.
Box Performance degrades after 10,000 files in sync folder. Technical limit of 40,000 files in sync folder. Does not sync .tmp files, Outlook PST files, hidden files (hidden folders are synced), or any file or folder with \/*?":<>| in the name.
CloudJuncxion Decentralized fragment-and-disperse storage across a collection of heterogeneous clouds for maximal security. Supports a Virtual Private Cloud model for complete control by the enterprise customer. Supports Sync Groups for greater control over synchronization of files across multiple devices.
Dropbox Performance degrades with more than 300,000 files in sync folder. This is a soft limit.
Sugarsync Limited to 80,000 files per top-level sync folder. To work around this, multiple sync folders can be created, but each top-level folder is limited to 80,000 files. Microsoft Outlook and Apple iTunes databases are unsupported.
Trustbox Three-layer encrypted storage with unlimited file version retention. Restore any file from any point in time.
Defunct services
Bitcasa closed its services in February 2017.
Copy was discontinued on May 1, 2016.
Dell DataSafe was discontinued on June 11, 2015.
drop.io
Mozy was shut down in 2019; its site redirects users to Carbonite.
Norton Zone was discontinued on July 7, 2014.
Streamload aka MediaMax
Ubuntu One was discontinued on June 1, 2014.
Windows Live Mesh
Wuala was discontinued on November 15, 2015.
Xdrive
ZumoDrive
See also
File hosting service
Remote backup service
Comparison of online music lockers
Comparison of file synchronization software
Comparison of file hosting services
Cloud storage
Shared disk access
File sharing
List of backup software
Apache Empire-db
Apache Empire-db is a Java library that provides a high-level, object-oriented API for accessing relational database management systems (RDBMS) through JDBC. Apache Empire-db is open source and provided under the Apache License 2.0 by the Apache Software Foundation.
Compared to object-relational mapping (ORM) and other data persistence solutions such as Hibernate, iBATIS, or TopLink, Empire-db does not use XML files or Java annotations to map Plain Old Java Objects (POJOs) to database tables, views, and columns. Instead, Empire-db uses a Java object model to describe the underlying data model, and an API that works almost solely with object references rather than string literals.
Empire-db aims to provide better software quality and improved maintainability through increased compile-time safety and reduced redundancy of metadata. Additionally, applications may benefit from better performance, since the developer retains full control over SQL statements and their execution, in contrast to most OR-mapping solutions.
Major benefits
Empire-db's key strength is its API for dynamically generating arbitrary select, update, insert, or delete statements purely through Java methods that reference the model objects. This provides type-safety and almost entirely eliminates string literals for names or expressions in code. Additionally, DBMS independence is achieved through a pluggable driver model.
Using references to table and column objects significantly improves compile-time safety and thus reduces the amount of testing required. As a positive side effect, the IDE's code completion can be used to browse the data model, which increases productivity and eliminates the need for external tools or IDE plug-ins.
Further, the object model provides safe and easy access to meta-information about the data model, such as a field's data type, maximum length, whether it is mandatory, and any finite set of options for its values. Metadata is user-extensible and not limited to DBMS-related metadata. The availability of meta-information encourages more generic code and eliminates redundancies throughout application layers.
Features at a glance
Data model definition through a Java object model omits the need to learn XML schemas or annotations and easily allows user interceptions and extensions.
Portable RDBMS independent record handling and command definition with support for a variety of relational databases such as Oracle, Microsoft SQL Server, MySQL, Derby, H2 and HSQLDB (as of version 2.0.5)
DDL generation for target DBMS from object definition, either for the entire database or for individual objects such as tables, views, columns and relations.
Type-safe API for dynamic SQL command generation allows dynamic building of SQL statements using API methods and object references only instead of string literals. This provides a high degree of type-safety which simplifies testing and maintenance.
Reduced amount of Java code and powerful interception of field and metadata access through dynamic beans as an alternative to POJOs. This even allows data model changes (DDL) at runtime.
Automatic tracking of record state and field modification (aka "dirty checking") to only insert/ update modified fields.
Support for optimistic locking through timestamp column.
No need to always work with full database entities: queries can be built to provide exactly the data needed, and the result obtained, for example, as a list of any type of POJO with matching property setters or constructor.
Lightweight and passive library with zero configuration footprint that allows simple integration with any architecture or framework.
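The type-safe command generation listed above can be illustrated in principle with a small self-contained sketch: SQL is assembled from column objects rather than string literals, so a misspelled column name fails at compile time rather than at runtime. This is a simplified illustration of the builder idea only, not Empire-db's actual API; all class names here are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified illustration of the principle behind a type-safe command API.
// NOT Empire-db's real API: names and signatures are invented for this sketch.
public class TypeSafeCommand {
    static class Column {
        final String table, name;
        Column(String table, String name) { this.table = table; this.name = name; }
        String qualified() { return table + "." + name; }
    }

    static class Command {
        private final List<String> select = new ArrayList<>();
        private final List<String> where = new ArrayList<>();
        private String from;

        Command select(Column c) { select.add(c.qualified()); return this; }
        Command from(String table) { this.from = table; return this; }
        Command whereEquals(Column c, Object value) {
            // Quote string constants; emit numbers as-is.
            where.add(c.qualified() + "=" + (value instanceof String ? "'" + value + "'" : value));
            return this;
        }
        String toSql() {
            return "SELECT " + String.join(", ", select) + " FROM " + from
                 + (where.isEmpty() ? "" : " WHERE " + String.join(" AND ", where));
        }
    }

    public static void main(String[] args) {
        // Column objects stand in for the data model; no string literals in the query code.
        Column lastname = new Column("EMPLOYEES", "LASTNAME");
        Column retired  = new Column("EMPLOYEES", "RETIRED");
        String sql = new Command().select(lastname).from("EMPLOYEES")
                                  .whereEquals(retired, 0).toSql();
        System.out.println(sql); // SELECT EMPLOYEES.LASTNAME FROM EMPLOYEES WHERE EMPLOYEES.RETIRED=0
    }
}
```

Empire-db's real DBCommand works analogously, as the Example section shows, but additionally handles joins, ordering, and DBMS-specific SQL generation through its driver model.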
Example
As an example consider a database with two tables called Employees and Departments for which a list of employees in a particular format, with certain constraints and a given order should be retrieved.
The corresponding Oracle syntax SQL statement is assumed to be as follows:
SELECT t1.EMPLOYEE_ID,
t1.LASTNAME || ', ' || t1.FIRSTNAME AS NAME,
t2.DEPARTMENT
FROM (EMPLOYEES t1
INNER JOIN DEPARTMENTS t2 ON t1.DEPARTMENT_ID = t2.DEPARTMENT_ID)
WHERE upper(t1.LASTNAME) LIKE upper('Foo%')
AND t1.RETIRED=0
ORDER BY t1.LASTNAME, t1.FIRSTNAME
This SQL statement can be created using Empire-db's command API using object model references like this:
SampleDB db = getDatabase();
// Declare shortcuts (not necessary but convenient)
SampleDB.Employees EMP = db.EMPLOYEES;
SampleDB.Departments DEP = db.DEPARTMENTS;
// Create a command object
DBCommand cmd = db.createCommand();
// Select columns
cmd.select(EMP.EMPLOYEE_ID);
cmd.select(EMP.LASTNAME.append(", ").append(EMP.FIRSTNAME).as("NAME"));
cmd.select(DEP.DEPARTMENT);
// Join tables
cmd.join(EMP.DEPARTMENT_ID, DEP.DEPARTMENT_ID);
// Set constraints
cmd.where(EMP.LASTNAME.likeUpper("Foo%"));
cmd.where(EMP.RETIRED.is(false));
// Set order
cmd.orderBy(EMP.LASTNAME);
cmd.orderBy(EMP.FIRSTNAME);
In order to execute the query and retrieve a list of POJO's holding the query result the following code may be used:
// Class definition for target objects
public class EmployeeInfo {
private int employeeId;
private String name;
private String department;
// Getters and setters for all properties,
// or a public constructor using the fields
public get...
public set...
}
// Retrieve employee list using the cmd object created above
DBReader reader = new DBReader();
try {
reader.open(cmd, getConnection());
List<EmployeeInfo> empList = reader.getBeanList(EmployeeInfo.class);
} finally {
reader.close();
}
Empire-db also supports field access through object references or obtaining query results as XML.
History
Empire-db was originally developed at ESTEAM Software, a German software development company, which used Empire-db to develop various applications for a variety of industries.
In January 2008 Empire-db was officially made open source and first published through SourceForge.net.
In June 2008 a proposal was submitted to the Apache Software Foundation for Empire-db to become an Apache Incubator project. In July 2008 Empire-db was accepted for incubation and all rights over the software were transferred to the Apache Software Foundation.
In October 2008 Empire-db 2.0.4 was the first official Apache incubator release with all package names changed to begin with org.apache.empire.
See also
Java Database Connectivity (JDBC)
Object-relational mapping
Hibernate
iBATIS
TopLink
Apache Struts
External links
Empire-db
Publication history of Anarky
The publication history of Anarky, a fictional character appearing in books published by DC Comics, spans various story arcs and comic book series. Co-created by Alan Grant and Norm Breyfogle, he first appeared in Detective Comics #608 (Nov. 1989) as an adversary of Batman. Introduced as Lonnie Machin, a child prodigy with knowledge of radical philosophy who is driven to overthrow governments to improve social conditions, stories revolving around Anarky often focus on political and philosophical themes. The character, who is named after the philosophy of anarchism, primarily espouses anti-statism. Multiple social issues have been addressed whenever the character has appeared in print, including environmentalism, antimilitarism, economic exploitation, and political corruption. Inspired by multiple sources, early stories featuring the character often included homages to political and philosophical books, and referenced anarchist philosophers and theorists. The inspiration for the creation of the character and its early development was rooted in Grant's personal interest in anti-authoritarian philosophy and politics. However, when Grant himself transitioned to the philosophy of Neo-Tech, he shifted the focus of Anarky from a vehicle for socialist and populist philosophy to rationalist, atheist, and free market-based thought.
Originally intended to only be used in the debut story in which he appeared, Grant decided to continue using Anarky as a sporadically recurring character throughout the early 1990s, following positive reception by readers and Dennis O'Neil. The character experienced a brief surge in media exposure during the late 1990s, beginning when Norm Breyfogle convinced Grant to produce a limited series based on the character. The 1997 spin-off series, Anarky, was received with positive reviews and sales, and later declared by Grant to be among his "career highlights". Batman: Anarky, a trade paperback collection of stories featuring the character, soon followed. This popular acclaim culminated, however, in a financially and critically unsuccessful ongoing solo series. The 1999 Anarky series, in which even Grant has expressed his distaste, was quickly cancelled after eight issues, but not before sparking a minor controversy by suggesting Anarky was the biological son of the Joker.
Following the cancellation of the Anarky series, and Grant's departure from DC Comics, Anarky experienced a prolonged period of absence from DC publications, despite professional and fan interest in his return. This period of obscurity lasted approximately ten years, with three brief interruptions for minor cameo appearances in 2000, 2001, and 2005. In 2008, Anarky reappeared in an issue of Robin authored by Fabian Nicieza, with the intention of ending this period of obscurity. The storyline drastically altered the character's presentation, prompting a series of responses by Nicieza. Anarky became a recurring character in issues of Red Robin, authored by Nicieza, which was cancelled in 2011.
In 2013 a revamped version of Anarky debuted as the primary antagonist of Beware the Batman, a Batman animated series produced by Warner Bros. Animation. Later that year, the character made his video game debut in Batman: Arkham Origins as an anti-villain who attempts to ally with Batman against corruption in Gotham City, while threatening government and corporate institutions with destruction.
Publication history
1989–1996: Creation and early depictions
Originally inspired by his personal political leanings, Alan Grant entertained the idea of interjecting anarchist philosophy into Batman comic books. In an attempt to emulate the success of Chopper, a rebellious youth in Judge Dredd, he conceptualized a character as a twelve-year-old anarchist vigilante, who readers would sympathize with despite the character's harsh methods. Creating the character without any consultation from his partner, illustrator Norm Breyfogle, his only instructions to Breyfogle were that Anarky be designed as a cross between V and the Black Spy from Mad magazine's Spy vs. Spy. Grant also briefly considered incorporating Krazy Kat as a third design suggestion, but decided against the idea. In response to deadline pressures, and not recognizing the character's potential, Breyfogle "made no preliminary sketches, simply draping him in long red sheets". The character was also intended to wear a costume that disguised his youth, and so was fitted with a crude "head extender" that elongated his neck, creating a jarring appearance. While both of these design elements have since been dropped, more enduring aspects of the character have been his golden face mask, "priestly" hat, and his golden sceptre.
The first Anarky story, "Anarky in Gotham City Part 1: Letters to the Editor", appeared in Detective Comics #608 (Nov. 1989). Lonnie Machin is introduced as "Anarky" from his first appearance, with his origin story withheld for a later point. He is established as an uncommonly philosophical and intelligent 12-year-old; the debut story provides no back story to explain his behavior, but a narrative from Batman reveals that he is a socially conscious and arrogant child who believes he knows how to solve society's problems. Convinced that only he can do so, he becomes a vigilante and fashions weapons—a stun baton and smoke bombs—in labs at school. Machin debuts as "Anarky" by responding to complaints in the newspaper and attacking the offending sources, such as the owner of a factory whose byproduct waste is polluting local river water. He goes to lengths to disguise his age and appearance by creating a costume with a false head to increase his height. This was in fact intended as a ruse on the part of writer Alan Grant to disguise the character's true identity and to confuse the reader into believing Anarky to be an adult. Anarky and Batman ultimately come to blows, and during their brief fight Batman deduces that Anarky is actually a young child. During this first confrontation, Anarky is aided by a band of homeless men, including Legs, a disabled man who becomes loyal to him and assists him in later appearances. After being caught, Lonnie is locked away in a juvenile detention center.
In Grant's earliest script for the character, Anarky was designed to be far more vicious, and to have killed his first victim. Dennis O'Neil, then editor of Detective Comics, requested that Grant "tone down" the script, as he felt Anarky becoming a murderer at such a young age was morally reprehensible. Grant consented to the request and the script was rewritten. In a 2004 interview, Grant explained the motivation behind this early decision: "I wanted to play it that way because it created incredible tension in Anarky himself. Other heroes might kill—Batman used to, in the early days—but for a teenager to rationally decide to take lives ... well, it hadn't been handled in comics before. But Denny was boss, and I respected his opinion and toned things down".
Although Grant had not created the character to be used beyond the two-part debut story, positive reactions from reader letters and his editor caused him to change his mind. He then decided to make Lonnie Machin the third Robin, following Jason Todd, desiring a new sidekick who would act as a foil to Batman, and not have the same motivations for vengeance. This was abandoned when he learned that Tim Drake had already been created to fill the role by Marv Wolfman. Quickly rebounding from this decision, Grant instead used the second appearance of Anarky as the antagonist for Tim Drake's first solo detective case.
This appearance came in Detective Comics #620. The story chronicles that Anarky increases his computer skills during his detention to the point of becoming an advanced grey hat computer hacker. He takes on the online user alias "Moneyspider" to steal millions of dollars from Western corporations, including Wayne Enterprises, outmaneuvering Batman's own data security in the process. He then uses the money to create bank accounts for poor farmers in Third World countries. Tim Drake pursues the hacker in an online investigation, tracking Anarky to his location at the detention center. In 2008, Fabian Nicieza would revert Lonnie Machin to this early alias as part of an attempt to reuse the character following a period of absence lasting approximately 10 years.
In the years following Anarky's creation, the character was rarely incorporated into Batman stories by Grant, being reserved for stories in which the author felt the need to make a philosophical point. Throughout the early 1990s, Anarky was a lesser-known but established antagonist in the Batman franchise, frequently depicted as escaping from the detention center and peregrinating around Gotham City. However, his back story had still yet to be elaborated upon. Grant provided hints of Anarky's origin in Robin Annual #1, "The Anarky Ultimatum", part of the Eclipso: The Darkness Within crossover event. Within the story it is revealed that Lonnie Machin was a highly intelligent child who from an early age read prodigiously at local bookstores, but had few friends. Though this story was published in 1992, it was not until 1995 that Grant finally elaborated upon Lonnie Machin's fictional history.
1995 saw Grant begin the slow increase in the character's abilities that would culminate in the Anarky series. In the "Anarky" story arc from Batman: Shadow of the Bat #40–41, Lonnie is released from juvenile detention, and builds a machine that allows him to fuse both hemispheres of his brain, giving him increased intelligence, and what he perceives as enlightenment. Creating an online bookstore, Anarco, to propagate radical literature, he begins to accumulate funds that he donates through another front company, The Anarkist Foundation, to radical organizations, such as eco-warriors and gun protesters, or keeps for his own projects. These story devices served to further improve Anarky's skill set, and increase his intelligence and financial independence. During this storyline, Grant finally revealed Lonnie Machin's origins in full, using a farewell letter to his parents to provide exposition into the character's motivations. The origin story narrates through flashbacks that Lonnie Machin was once an ordinary child who, while abnormally intelligent, was apolitical and unaware of world events. At 11, he gains a foreign pen pal, Xuasus, as part of a school program. Through this contact, Lonnie discovers the level of global corruption he never knew of before. Further studies into war and political violence leads him to hold radical sympathies. He comes to view all wars as being caused by political elites, with common individuals forced or cajoled into fighting on behalf of the former. He becomes convinced of the need to reshape society and becomes Anarky.
During the Anarky series, much of the character's development was influenced by the nature of Grant and Breyfogle's association. As part of the story writing process, the duo would engage in philosophical discussion carried out entirely over fax-transmissions. These long, in-depth, and occasionally heated debates influenced plot points, as well as the general direction of Anarky's character development. Because of this, Breyfogle personally considers Anarky to be among their few co-creations, whereas he considers other characters they "made" together, such as the Ventriloquist, to be entirely Grant's creations.
1997–1999: Anarky series
Following the comic book industry crash of 1996, Norm Breyfogle was unemployed and looking for work. As a result of a request Breyfogle made to DC for employment, Darren Vincenzo, then an editorial assistant at DC Comics, suggested multiple projects which Breyfogle could take part in. Among his suggestions was an Anarky limited series, written by Grant, which was eventually the project decided upon. The four-issue limited series, Anarky, was published in May 1997. Entitled "Metamorphosis", the story maintained the character's anti-authoritarian sentiments, but was instead based on Neo-Tech, a philosophy developed by Frank R. Wallace.
During the climax of the "Anarky" storyline in Batman: Shadow of the Bat #40–41, it is implied that Anarky dies in a large explosion. In turn, the Anarky limited series resolved this event by revealing that Anarky survives, but chooses to shed the encumbrance of his double life by faking his death. Anarky works in seclusion to further his goal of achieving a utopian society, briefly hiring Legs and other homeless men to monitor Batman's movements. His four-issue adventure leads him into conflict with Etrigan the Demon and Darkseid, as Anarky confronts the nature of evil through dialogue, and concludes with a showdown against Batman, addressing the issue of consequentialism.
Well received by critics and financially successful, Grant has referred to the limited series as one of his favorite projects, and ranked it among his "career highlights". With its success, Vincenzo suggested continuing the book as an ongoing series to Breyfogle and Grant. Although Grant was concerned that such a series would not be viable, he agreed to write it at Breyfogle's insistence, as the illustrator was still struggling for employment. Grant's primary concerns centered on his belief that Anarky's role as a non-superpowered teenager was not capable of competing for reader attention when DC Comics already had a similar series in Robin. Further, while potential disagreements with editors over story elements were not among his initial concerns, he eventually found himself constantly at odds with editors and editorial assistants throughout the creation of the series.
Grant's doubts concerning the comic's prospects eventually proved correct. The series was panned by critics, failed to catch on among readers, and was cancelled after eight issues; however, Grant has noted that it was popular in Latin American countries, perhaps owing to a history of political repression in the region. "It didn't sit too well with American readers, who prefer the soap opera and cool costume aspects of superhero comics. But I became a minor hero in many Latin countries, like Argentina and Mexico, where readers had been subjected to tyranny and fascism and knew precisely what I was writing about". Breyfogle gave a similar explanation for what he believed to be the cause of the series' failure. In an essay written after the cancellation of the series, he reflected on the difficulty of combining escapist entertainment with social commentary: "Anarky is a hybrid of the mainstream and the not-quite-so-mainstream. This title may have experienced exactly what every "half-breed" suffers: rejection by both groups with which it claims identity".
Despite numerous editorial impositions, the most controversial plot point was not a mandate, but was instead a suggestion by Breyfogle, intended as a means to expand Anarky's characterization: that Anarky's biological father be revealed to be the Joker. Breyfogle expressed an interest using the relationship as a source for internal conflict in the character: "... I figured that because Anarky represents the epitome of reason, one of the biggest crises he could face would be to discover that his father was the exact opposite: a raving lunatic!" Alternatively, Grant saw it as an opportunity to solidify Anarky's role in the Batman franchise.
Grant's decision to pursue the suggestion ran into conflict with Dennis O'Neil, who protested against it. Grant recalled: "Denny only let me write that story under protest, he was totally opposed to Joker being Anarky's father and said under no circumstances would DC allow that". Grant persisted, suggesting that the move would generate interest in the comic book, and that the story could be undone later in another story. With O'Neil's permission, the sub-plot of Anarky's unrevealed heritage was published in the series' first issue. With the eventual series cancellation, the final issue was quickly written to reveal the identity of Anarky's father as the Joker, simply to resolve the plot line, leaving any rebuttal to a future publication.
As the last issue of the Anarky series, the unresolved finale left open the possibility that the Joker might be Anarky's actual father, and the planned "rebuttal" was never published. Further, Grant and Breyfogle later speculated that since Dennis O'Neil has retired from DC Comics and the final editorial decision currently belongs to Dan DiDio, it is no longer possible to be sure whether a rebuttal will ever be published. As of 2012, there is no record of DiDio ever commenting on the subject, though the DC Universe timeline chronologically prevents the Joker from being Anarky's biological father, as the character's birth predates the existence of either Batman or the Joker.
As Anarky was created while Grant and Breyfogle were operating under "work-for-hire" rules, DC Comics owns all rights to the Anarky character. Following the cancellation of the Anarky series, both men attempted to buy the rights to Anarky from the company, but their offer was declined.
2000–2007: Absence from DC publications
After the financial failure of Anarky (vol. 2), the character entered a period of absence from DC publications that lasted several years. Norm Breyfogle attempted to continue using the character in other comics during this time, co-writing an issue of The Spectre with John Marc DeMatteis that would have featured a cameo by Anarky as a rationalist foil to the mystical nature of the Spectre. When this story was rejected, Breyfogle came to suspect the character's prolonged absence was due in part to censorship.
Since the cancellation of the Anarky series, Grant has disassociated himself from the direction of the character. In a 2004 interview he recalled that he had been asked by James Peatty for a critique of an Anarky/Green Arrow script the latter had written. Peatty desired to know if his presentation of Anarky had been correct. Grant declined to read the story, explaining, "you have to let these things go". The script was published in 2005 and Anarky made the guest appearance billed as his "return" to DC continuity in Green Arrow #51, "Anarky in the USA".
The story narrative chronicles Anarky's reappearance after several years of obscurity in response to a bombing in Star City that he is framed for. He teams up with the Green Arrow to hunt down the guilty parties, but remains a wanted felon by authorities. Although the front cover of the issue advertised the comic as the "return" of the character, Anarky failed to make any further appearances. This was despite comments by Peatty that he had further plans to write stories for the character. Among his comments was the suggestion that he intended to explore the cliffhanger of Anarky's parentage left after the cancellation of the original Anarky series, and that as Green Arrow #51 was being published, he had already written a two-part story for Batman which featured Anarky.
Besides Breyfogle and Peatty, Todd Seavey was another professional writer who expressed an interest in creating stories for Anarky. A freelance libertarian writer and editor, and author of several issues of Justice League, Seavey considered authoring an Anarky series his "dream comics project".
Breyfogle has also noted that Anarky retained interest among a "diehard" fan base during this obscure period. During a panel at WonderCon 2006, multiple requests were made by the audience for Anarky to appear in the DC Comics limited series 52. In response, Dan DiDio announced two weeks later, at a DC Comics panel during the 2006 New York Comic Con, that the writing team of 52 had decided to create a part for the character. 52 editor, Michael Siglain, later responded to a reader question concerning when Anarky would appear in the series, estimating Anarky would appear in later issues. However, 52 concluded without Anarky making an appearance and with no explanation given by anyone involved in the production of the series.
2008: Return as "Moneyspider"
In 2008, Anarky reappeared in the December issue of Robin (vol. 2), #181. Anarky was among several villains showcased in DC Comics' "Faces of Evil" event. With the publication of Robin (vol. 2) #181, it was revealed that Lonnie Machin's role as Anarky had been supplanted by another Batman villain, Ulysses Hadrian Armstrong. Further, Machin was depicted as being held hostage by Armstrong, "paralyzed and catatonic", encased in an iron lung, and connected to computers through his brain. This final feature allowed the character to connect to the Internet and communicate with others via a speech synthesizer.
The story, "Pushing Buttons, Pulling Strings", narrates that as Tim Drake attempts to maintain control of Gotham City in Batman's absence following the "Batman R.I.P." storyline, it is revealed that the ultimate foe attempting to foment chaos and destroy the city through gang wars and terrorist bombings is Armstrong. Armstrong is also revealed to have encountered Lonnie Machin at an unspecified point prior to the story, at which time Machin was "shot in the head", leaving him paralyzed and hostage to Armstrong. Incapacitated, Machin reverts to his hacker identity as "Moneyspider", while Armstrong commandeers the identity of Anarky and conspires to acquire power through chaos. The storyline concluded with the publication of Robin (vol. 2) #182, on January 21, 2009, as part of the "Faces of Evil" event. The issue featured the Ulysses Hadrian Armstrong version of Anarky on the front cover of the issue, but did not include Lonnie Machin as a character within the story.
Fabian Nicieza, author of the issue and storyline in which Anarky appeared, responded to reader concerns in an internet forum during a Q&A session with fans a few days after issue #181 was published. Nicieza explained that his decision to give Machin's mantle as Anarky to another character was driven by his desire to establish "Robin's Joker", and that "the concept of Anarky, applied in a visceral, immature fashion, would make an excellent counterpoint to Robin's ordered methodical thinking". However, in an effort to respect the original characterization of Anarky, it was necessary that it not be Machin, whom Nicieza recognized as neither immature nor a villain. Nicieza also noted the difficulty inherent in writing any story featuring Anarky, due to the complexity of the character's philosophy. Regardless, Nicieza did desire to use Machin and properly return the character to publication, and so favored presenting Ulysses H. Armstrong as Anarky and Lonnie Machin as Moneyspider, describing the latter as an "electronic ghost". The alias Moneyspider was a secondary name briefly used by Grant for Anarky in a 1990 Detective Comics storyline. Nicieza felt that this created a scenario in which each could be used for different effects. As he put it, "[o]ne can prove a physical opposition to Robin (Ulysses as Anarky), the other an intellectual one (Lonnie as MoneySpider)". Nicieza also acknowledged that this dramatic change in the character's presentation would upset fans, but countered that he felt he had not made any changes to the character which could not be undone easily by other writers.
Following the publication of Robin (vol. 2) #181, Roderick Long, a political commentator and Senior Scholar at the Ludwig von Mises Institute, and self-professed fan of the character, expressed annoyance at the portrayal of the character of Lonnie Machin and the usurpation of the Anarky mantle by Armstrong. Upon learning of Nicieza's reasoning for the portrayal, Long parodied the explanation as a mock-dialectic, writing: "Thesis: Anarky is too interesting a character not to write about. Antithesis: Anarky is too interesting a character for me to write successfully about. Dialectical synthesis: Therefore I will make Anarky less interesting so I can write about him".
Grant also commented on the transformation in an interview for the Scotland regional edition of The Big Issue: "Someone recently sent me DC's new take on Anarky and I was saddened to see they were using him as just another asshole villain".
Several months later, in a forum discussion, Nicieza once again defended his writing of the story. In a discussion regarding communication between comic book authors, and whether writers were bothered when their creations were altered, or re-characterized, by other authors of a shared universe, Nicieza's treatment of Anarky was criticized. In addressing the concerns of his detractors, Nicieza submitted that he fully agreed with the criticisms by Alan Grant. As Nicieza explained, his intention was to deliberately alter the concept the Anarky mantle represented, while attempting to respect the characterization of Lonnie Machin. For this reason, Nicieza also saw little reason to communicate with Grant for feedback, and so made no overtures to Anarky's creator. Nicieza also confessed to his personal discomfort with communicating with Grant, given the latter's estrangement from DC Comics.
Nicieza continued his explanation, tackling the topic of the new "misapplication" of the philosophical underpinnings of the Anarky mantle in the hands of the General: "I was well aware that the General's application of the name, costume and themes were inaccurate to the political philosophy of Anarchy (and of Lonnie's reason for being as Anarky). It's even STATED in print during the course of the story that Ulysses didn't 'get it'". Nicieza then concluded by restating his expectation that other writers now had the opportunity to pursue stories making use of the characters: "I could understand original Anarky readers and the creator of the character not agreeing with that, but therein lies the beauty of our medium -- there's always another story to tell that could 'fix' what you don't like ..."
2009–2011: Red Robin series
With the conclusion of Robin (vol. 2) #182, Nicieza began authoring the 2009 Azrael series, leaving any future use of Anarky or Moneyspider to author Christopher Yost, who would pick up the Robin character in a new Red Robin series. At the 2009 New York Comic Con panel for the "Batman: Battle for the Cowl" storyline, various collaborating writers, editors, and artists for Batman-related comic books took questions from the audience. When asked about Anarky, Mike Marts confirmed that "Anarky" would be utilized in future publications, but did not elaborate further. Yet in the ensuing months, Yost made only one brief reference to Anarky, without directly involving the character in a story plot.
In 2010, Nicieza replaced Yost as the author of Red Robin, and Nicieza was quick to note his interest in using Anarky and Moneyspider in future issues of the series. Nicieza reintroduced Ulysses Armstrong and Lonnie Machin over the course of several storylines, and regularly used Lonnie as a cast member of the ongoing Red Robin series. Within Nicieza's first storyline, "The Hit List", Tim Drake rescues Lonnie Machin from the evil Ulysses Armstrong, who escapes capture and still claims the "Anarky" mantle. Recruiting Lonnie as he recovers from his coma, Drake asks that Machin act as Moneyspider and join him in fighting crime. Tim Drake comes to rely on Machin for his computer wizardry, as well as for his advice and companionship, in issues that follow. In the next story arc, "The Rabbit Hole", Nicieza reintroduced readers to a concept explored in other DC Universe comic books—the virtual reality realm of the Ünternet. Within the plot, Tim Drake is aided by Machin in hacking the Ünternet, a virtual realm used by criminals as a meeting space and form of entertainment. Once inside, Machin seizes control of the system, where he is returned to a digitized semblance of his former health and reclaims the mantle of "Anarky" within cyberspace. Uniquely gifted with full control of his bicameral mind, he leaves half of his personality as Anarky online at all times, even as the other half interacts with Drake offline as Moneyspider. Fending off villains within the digital realm with Tim Drake's help, Moneyspider is told to shut down the entire system by Drake. Sensing an opportunity, he pretends to do so, but instead secretly salvages it for himself. With the cancellation of the series five months later, the repercussions of these events were left unexplored.
The series was cancelled in October 2011, as a result of The New 52, a revamp and relaunch by DC Comics of its entire line of ongoing monthly superhero comic books, in which all of its existing titles were cancelled. 52 new series debuted in September with new #1 issues to replace the cancelled titles.
2012–2013: Debuts in animation and video games
At the MIPJunior conference in October 2011, Sam Register, executive vice president for creative affairs at Warner Bros. Animation, announced several upcoming events for 2012, including a new CGI animated series, Beware the Batman, intended to focus on lesser-known villains for an unfamiliar audience. A Cartoon Network press release stated that Anarky would be one of the planned villains, while series developers later explained that the character had been revamped for the series and chosen as the primary antagonist. Series producers Glen Murakami and Mitch Watson compared his role to that of "Moriarty to Batman's Sherlock Holmes", explaining that he would indirectly challenge Batman through complex machinations. Anarky debuted in the first season's third episode, "Tests", on July 27, 2013. The episode was written by Jim Krieg, with direction by Curt Geda, while the character was voiced by Wallace Langham.
Scott Thill, technology and pop culture commentator for Wired magazine, has praised the choice to debut Anarky on television, noting the character's relevance following the rise of the Occupy movement and the hacktivist activities of Anonymous.
In 2013, Anarky made his video game debut in Batman: Arkham Origins. Eric Holmes, creative director of WB Games Montréal, presented the first gameplay demonstration of the game at the 2013 Electronic Entertainment Expo and gave sit-down interviews to the media. Anarky was presented as an example of the new "Most Wanted" system introduced in the series, which used guest villains to create mini-quests for players to optionally tackle, adding variety and replay value to the game.
The game was released on October 25, 2013. Voiced by Matthew Mercer, Anarky is given a side mission in which he places explosives at corporate and government locations. He does not act strictly as a villain, however, but instead attempts to appeal to Batman for an alliance. Extra components of the game include a series of invisible "Anarky tags" hidden throughout the overworld map, each of which, upon discovery, provides an essay explaining his critique of social problems; and a pirate radio station, "Liberation Radio", on which the player can hear Anarky summarize the history of the 1919 United States anarchist bombings. The side mission itself involves a race against a countdown timer, as the player guides Batman across the city skyline from a starting point to a destination. Upon arrival, the player must defeat guards protecting a bomb and then deactivate it before detonation, which commentators compared to a similar side mission in Batman: Arkham City involving Victor Zsasz.
Though aware of the "Moneyspider" period the character had experienced in the pages of Red Robin, Holmes explained that the developers had primarily drawn upon the earliest stories for Anarky by Alan Grant and Norm Breyfogle in Detective Comics. In response to a comment from an interviewer on the contemporary popularity of anarchism, Holmes acknowledged it as a factor in the choice to include the character in the game, and to update his appearance to that of a street protester with a gang resembling a social movement. He also noted the character's resemblance to the masked protesters of Anonymous. According to Holmes, Anarky's relevance to contemporary protest movements was the key element that made Anarky stand out among the characters of the game: "In the real world, this is Anarky's moment. Right now. Today". Holmes also warned players unfamiliar with Anarky against investigating the character through published material online, as he felt it would potentially spoil surprises the game held.
Alan Grant was later quoted as "honoured" by the choice to include Anarky in the game, though also "slightly peeved" that Anarky's character design had been based on what he felt were anarchist stereotypes, "complete with Molotov cocktails!" He was nonetheless pleased with the way the character was utilized within the game, and remarked that he'd also like to see Anarky used in films: "I’d love to see Anarky get even more exposure... though only if the writers handled him properly".
2013: Post-New 52
While Anarky was "rising in profile in other media" by mid 2013, the character had yet to be reintroduced to the status quo of the Post-New 52 DC Universe. This changed on August 12, when DC Comics announced that Anarky would be reintroduced in Green Lantern Corps #25, "Powers That Be", on November 13 the same year. The issue was a tie-in to the "Batman: Zero Year" crossover event, authored by Van Jensen and co-plotted by Robert Venditti.
In the lead up to the publication date, at a panel event at the New York Comic Con, Jensen was asked by a fan holding a "plush Anarky doll" what the character's role would be in the story. Jensen explained that Anarky "would have a very big hand" in the story, and further explained, "you can understand what he's doing even if you don't agree with what he's doing". Jensen had also indicated that his version of Anarky would be a "fresh take that also honors his legacy".
The story featured a character study of John Stewart, narrating Stewart's final mission as a young Marine in the midst of a Gotham City power blackout and citywide evacuation, mere days before a major storm is to hit the city. Anarky is depicted as rallying a group of followers and evacuees to occupy a sports stadium, on the basis that the area the stadium was built upon was gentrified at the expense of the local community and should be returned to them. The identity of this new version of Anarky went unrevealed, as he escapes capture at the end of the story. The main story sequence is interwoven with flashbacks in which Stewart's mother narrates her experiences as a child during the 1967 Detroit riot, providing the basis for his family's characterization.
A review of the storyline praised two particular choices in the revamped version of Anarky: the first being that this new version is portrayed as an African American, as Batman has traditionally lacked many black villains; the second being the preservation of the character's anonymity, which to the commentator left open the possibility that this Anarky could even be female. However, while the story borrowed its characterization and design of Anarky from the version found in Batman: Arkham Origins, it was judged to have simultaneously made the character less threatening. The reviewer drew on this to caution that there was a "need to reach a happy medium between the disillusioned protester we see here and the super genius kung fu fighting terrorist from Beware the Batman".
List of notable media appearances
Comics
As a lesser-known and underutilized character in the DC Universe, Anarky has a smaller library of associated comic books and significant story lines than more popular DC Comics characters. Between 1989 and 1996, Anarky was primarily written by Alan Grant in Batman-related comics, received a guest appearance in a single issue of Green Arrow by Kevin Dooley, and was given an entry in Who's Who in the DC Universe.
In the late 1990s, Anarky entered a brief period of minor prominence; first with the publication of Anarky in 1997; followed in 1998 with the Batman: Anarky collection; and in 1999, with featured appearances in both DCU Heroes Secret Files and Origins #1 and the ongoing series, Anarky (vol. 2). After the cancellation of the ongoing series, Anarky lapsed into an obscurity that lasted nearly 10 years. This obscurity was not total, however, as the character was used sporadically during this time, with marginal cameos in issues of Young Justice, Wonder Woman, and Green Arrow.
Anarky made a controversial appearance in 2008 (in Robin (vol. 2) #177-182) as part of an effort to return the character to regular publication, and became a recurring cast member in the Red Robin series in November 2010, until the series was cancelled in October 2011.
Comprising the entire "Metamorphosis" story arc, this 1997 limited series was retroactively labeled the "first volume" following its continuation in 1999.
Anarky relocates to Washington, D.C. to wage war against the United States government, in a financially and critically unsuccessful ongoing series.
Anarky is revamped as "Moneyspider", a comatose hacker recruited by the eponymous Red Robin to act as a recurring cast member and literary foil in this ongoing series.
A trade paperback collecting four stories featuring Anarky in various Batman-related comics between 1989 and 1997.
Anarky's debut appearance in Detective Comics, in which Anarky begins a campaign of revolt in Gotham City.
Anarky attempts to "deprogram" humanity of all social constraints in a four-part limited series, revamping Anarky with new abilities and philosophy.
Anarky seeks the truth of his parentage and learns the Joker may be his father in this controversial final issue of the ongoing Anarky series.
Robin faces a mysterious figure who promotes gang warfare in Batman's absence in the final story arc of Robin (vol. 2), which reintroduces Lonnie Machin as "Moneyspider".
Anarky has rallied a neighborhood against a group of Marines, including John Stewart, who are attempting to rescue them in the midst of a Gotham City evacuation, in Anarky's New 52 debut.
Other media
Television
Arrowverse
Lonnie Machin, portrayed by Alexander Calvert, appears in the fourth season of Arrow, depicted as an unhinged former mob enforcer. Described as "crazy pants", Machin was hired by Damien Darhk to cause chaos in Star City. However, when Darhk fires him for going too far, Machin develops a monomaniacal desire to destroy Darhk. This brings him into conflict with Team Arrow, and he is badly burned by Speedy during a bout of her bloodlust; Machin escapes his ambulance, leaving his symbol behind. Adopting a plastic mask to hide his burns, Machin dubs himself "Anarky" and begins trying to bring down H.I.V.E. to lure out Darhk and kill him. He also kidnaps Darhk's wife and daughter, and later kills Darhk's wife, Ruvé Adams. In the season four finale, Anarky attempts to ruin Project Genesis by cutting off the air supply to H.I.V.E.'s bunker. He is stopped by Speedy, who reveals herself to him as Thea Queen in the process. In the fifth-season premiere, "Legacies", Anarky attempts to bomb Star City but is stopped by Green Arrow and arrested.
Animation
When asked by a fan-site if Anarky would be featured in The New Batman Adventures, writer Paul Dini called his possible inclusion a "longshot". He ultimately did not appear in the show itself, but was however depicted as existing in the continuity of the series: a poster by Ty Templeton in Batman & Superman Magazine was the first to feature him, followed by a story written by his creator Alan Grant in The Batman Adventures #31.
Lesser known among the cast of characters in the DC Universe, Anarky went unused in adaptations to other media platforms for much of the character's existence, but in 2012 he was chosen to act as the main antagonist in Beware the Batman. Anarky debuted in "Tests", the third episode of the first season, which aired on July 27, 2013. Voiced by Wallace Langham, Anarky is revamped as a criminal mastermind and acts as Batman's primary antagonist in this CGI animated series. He is clad in a white suit with cape and mask, as opposed to his traditional crimson robes and golden mask. Anarky is depicted as obsessed with both chaos and chess, enacting complex plans to battle Batman, whom he views as his exact opposite. He later forms an alliance with District Attorney Harvey Dent, as both want to take down Batman. Though his plans seemingly work when Deathstroke appears to kill Batman, Dent, having gone insane after being disfigured, attempts to murder Anarky. Anarky nearly kills Dent in retaliation, but Commissioner Gordon disarms him, and Anarky escapes. In the final scene of the series, Anarky, hiding out in a dilapidated room, compliments Batman for beating him, before announcing that he plans to "play again".
Video games
In 2013, Anarky was included as a villain in the video game Batman: Arkham Origins, his video game debut. Voiced by Matthew Mercer, Anarky is a teenager who leads a movement among the homeless and desperate, setting explosives at government and corporate buildings that Batman must deactivate.
Anarky is also mentioned in dialogue in 2015's Batman: Arkham Knight, which takes place roughly ten years after the events of Arkham Origins. Anarky is said to have disappeared following his capture, and it is heavily implied that the federal government intervened in his imprisonment and transferred him to a federal facility. His trademark vest can be found in the evidence locker, and several thugs mention that Anarky would love the chaos unfolding in Gotham.
Anarky is among many other DC characters included in Scribblenauts Unmasked: A DC Comics Adventure.
Traditional games
Vs. System: Arkham Inmates. Anarky: Lonnie Machin (DWF-122), Vs. System. Illustration by Sam Keith.
Vs. System: Arkham Inmates. Ulysses Armstrong: Anarky, Chaos Successor (DCN-064), Vs. System.
DC Comics: Arkham Asylum. Anarky No.20, HeroClix. (October 2008 - July 2012)
See also
List of comic book supervillain debuts
Modern Age of Comic Books
Publication history of Dick Grayson
Publication history of Superman
Publication history of Wonder Woman
Red Hood vs. Anarky
Footnotes
Following Anarky's debut in "Anarky in Gotham City", the character's design incorporated the head extender in Robin Annual #1 (1992), Green Arrow #89 (Aug. 1994), and The Batman Adventures #31 (April 1995). The head extender was not included in Shadow of The Bat #18 (Oct. 1993) or The Batman Chronicles #1 (summer 1995).
52 was promoted as a comic that would attempt to incorporate as many DC Comics characters as possible. In a Q&A session hosted by Newsarama.com, Michael Siglain answered a series of questions regarding which characters fans wanted to see in the series. Question No. 19 asked, "We were told Anarky would be playing a part in 52. Could you please tell us when we can expect his appearances?" Siglain's simple response to readers was, "check back in the late 40s." Speculation centered on the prospect of Anarky appearing in issue No. 48 of the series, as the solicited cover illustration was released to the public several weeks before the issue's publication. On the cover, the circle-A could be seen as a minor element in the background. In a review of "Week 48", Major Spoilers considered the absence of Anarky a drawback: "It's too bad we didn't see the return of Anarky as hinted by this week's cover." Pop culture critic Douglas Wolk wrote, "I guess this issue's cover is the closest we're going to get to Anarky after all (and by proxy as close as we're going to get to the Haunted Tank). Too bad."
In warning readers against spoiling potential surprises in playing Batman: Arkham Origins, Eric Holmes specifically referenced the Wikipedia article on the character as a resource to avoid: "You know what? If you want to enjoy the game, don't bother reading up on him, because there are a few surprises about him which will turn up in the game, and if you go read Wikipedia or something like that, it'll rob you a little bit on some of the stuff in the game, because there are some surprises about Anarky."
Anarky is not actually part of the main story of the game, but is featured in several other respects. As part of the "Most Wanted" game feature, which encourages the player to track down side missions at their leisure, apprehending Anarky unlocks the "Anarky" character trophy. An "Anarky tag" collectible feature encourages players to search the Gotham City map for hidden messages left by Anarky for his followers, earning the player the "Voice of the People" achievement if completed, and a "Liberation Radio" broadcast can be heard using Batman's Cryptographic Sequencer tool, on which Anarky recites the "Plain Words" flier found during the 1919 United States anarchist bombings. A second collectible feature, Enigma's "extortion data files", includes two for Anarky, featuring voice actors performing as a corrupt cop coercing a homeless person to infiltrate Anarky's ranks, and as Anarky calling Gordon and telling him to leave the city, as he does not wish to hurt the officer and believes he can still be "rehabilitated".
References
External links
Anarky on the Unofficial Guide to the DC Universe website.
Batman characters
Comic book publication histories
AdBlock

AdBlock is a free and open-source content filtering and ad blocking browser extension for the Google Chrome, Apple Safari (desktop and mobile), Firefox, Opera, and Microsoft Edge web browsers. AdBlock allows users to prevent page elements, such as advertisements, from being displayed. It is free to download and use, and it includes optional donations to the developers. The AdBlock extension was created on December 8, 2009, which is the day that support for extensions was added to Google Chrome.
AdBlock is not related to Adblock Plus. The developer of AdBlock, Michael Gundlach, claims to have been inspired by the Adblock Plus extension for Firefox, which is itself based on the original Adblock, which ceased development in 2004.
Since 2016 AdBlock has been based on the Adblock Plus source code.
In July 2018, AdBlock acquired uBlock, a commercial ad-blocker owned by uBlock LLC and based on uBlock Origin.
Crowdfunding
Gundlach launched a crowdfunding campaign on Crowdtilt in August 2013 in order to fund an ad campaign to raise awareness of ad-blocking and to rent a billboard at Times Square. After the one-month campaign, it raised $55,000.
Sales and acceptable ads
AdBlock was sold to an anonymous buyer in 2015, and on October 15, 2015, Gundlach's name was taken down from the site.
Under the terms of the deal, the original developer, Michael Gundlach, left operations to AdBlock's continuing director, Gabriel Cubbage, and as of October 2, 2015, AdBlock began participating in the Acceptable Ads program. Acceptable Ads identifies "non-annoying" ads, which AdBlock shows by default. The intent is to allow non-invasive advertising, either to maintain support for websites that rely on advertising as a main source of revenue, or for websites that have an agreement with the program.
Filters
AdBlock uses the same filter syntax as Adblock Plus for Firefox and natively supports Adblock Plus filter subscriptions. Filter subscriptions can be added from a list of recommendations in the "Filter Lists" tab of the AdBlock options page, or by clicking on an Adblock Plus auto-subscribe link.
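The shared syntax can be sketched with a few illustrative rules; the domains below are placeholders for this example, not entries from any real filter list:

```text
! Lines beginning with "!" are comments.
! Block any request to this domain (and its subdomains):
||ads.example.com^
! Exception rule: requests matching this are never blocked:
@@||example.com/assets/script.js
! Element-hiding rule: hide page elements matching a CSS selector on this site:
example.com##.ad-banner
```

A filter subscription is simply a text file of such rules, which is why lists written for Adblock Plus can be loaded into AdBlock unchanged.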
Partnership with Amnesty International
On March 12, 2016, in support of World Day Against Cyber Censorship, and in partnership with Amnesty International, instead of blocking ads, AdBlock replaced ads with banners linked to articles on Amnesty's website, written by prominent free speech advocates such as Edward Snowden, to raise awareness of government-imposed online censorship and digital privacy issues around the world.
The campaign was met with both praise and criticism, with AdBlock's CEO, Gabriel Cubbage, defending the decision in an essay on AdBlock's website, saying "We’re showing you Amnesty banners, just for today, because we believe users should be part of the conversation about online privacy. Tomorrow, those spaces will be vacant again. But take a moment to consider that in an increasingly information-driven world, when your right to digital privacy is threatened, so is your right to free expression." Meanwhile, Simon Sharwood of The Register characterized Cubbage's position as "'You should control your computer except when we feel political', says AdBlock CEO".
AdBlock for Firefox
On September 13, 2014, the AdBlock team released a version for Firefox users, ported from the code for Google Chrome, released under the same free software license as the original Adblock. The extension was removed on April 2, 2015 by an administrator on Mozilla Add-ons.
A knowledge base article on the official site, dated December 7, 2015, states that AdBlock is not supported on version 44 or higher of desktop Firefox and Firefox Mobile. The last version of AdBlock for those platforms works only on older versions of Firefox.
AdBlock was released again on Mozilla Add-ons on November 17, 2016.
CatBlock
On April 1, 2012, AdBlock developer Michael Gundlach tweaked the code to display LOLcats instead of simply blocking ads. CatBlock was initially developed as a short-lived April Fools' joke, but the response was so positive that it continued to be offered as an optional add-on supported by a monthly subscription.
On October 23, 2014, the developer decided to end official support for CatBlock and made it open source under GPLv3 licensing, like the original extension.
See also
Ad blocking
Advertising management
uBlock Origin
References
External links
2009 software
Ad blocking software
Advertising-free media
Crowdfunded projects
Free Firefox WebExtensions
Freeware
Google Chrome extensions
IOS software
Microsoft Edge extensions
Mobile applications
Online advertising
Opera Software
Software add-ons |
Cannon Fodder (series)

Cannon Fodder is a series of war (and later science fiction) themed action games developed by Sensible Software, initially released for the Commodore Amiga. Only two games in the series were created by Sensible, but they were converted to most active systems at the time of release. A sequel, Cannon Fodder 2, was released in 1994 for Amiga and DOS. A third game, Cannon Fodder 3, was made by a Russian developer and released in English in 2012.
Gameplay
The player is in charge of a squad of between one and eight men that can be, for command purposes, split up to three groups (referred to as Snake, Eagle and Panther squads). All men have a machine gun with unlimited ammunition, as well as limited caches of grenades and rockets that can be found on the map. In later levels, the player is provided with some grenades and rockets at the start of the mission. The player's machine guns do not harm its own soldiers, but friendly fire from grenades and rockets is possible, which are also the only weapons capable of destroying buildings and vehicles. Men can also die if hit by debris flung from exploding buildings and vehicles, get caught in man-traps, mired in quicksand, and hit by enemy fire. Men usually walk, but several vehicles are available in some missions.
The games are split into several missions, which are usually sub-divided into phases. Dead soldiers are replaced by new ones at the start of each phase. Each soldier that survives a mission is promoted and receives a small increase in the rate of fire, accuracy, and range. The player is only able to save the game upon completion of a whole mission. Each phase is structured around mission objectives which range from "Kill all enemies" or "Destroy enemy buildings" to "Rescue all hostages". Some phases are complex, and require the player to use their imagination, pre-planning and strategy. For example, players may have to split their team into two or more groups and leave one group to defend an area or route, assigning its control to the game's artificial intelligence, while taking control of another group.
The pre-mission screen shows a hill with a grave for each dead soldier, with recruits lining up in front of it and a sports-like score at the top of the screen. Soldiers each have unique names, though in the grand scheme of things they are nothing more than interchangeable cannon fodder. The game has hundreds of individually named recruits, of which the first few — Jools, Jops, Stoo and Rj — were named directly after the development staff. As each recruit is killed in battle, he receives a tombstone on the hill and the next recruit in line takes his place.
Cannon Fodder 2 introduced a variety of new settings: a Middle Eastern conflict, an alien planet and spacecraft, medieval times and gangster-themed Chicago. It nevertheless retained the same mechanics and gameplay. In the new settings, troops wielding grenades and rockets were replaced by such units as aliens and wizards, but these nonetheless behaved in the same manner, as did battering rams standing in for trucks, and so forth. The game was more difficult than its predecessor, employing puzzle elements such as multiple ways (of varying effectiveness) to solve levels. It featured reduced use of water obstacles but retained mines and booby traps.
Cannon Fodder 3 featured a military, counter-terrorism theme with more advanced weather settings. It retained grenades and rockets as the central secondary weapons and introduced further power-ups. It expanded the selection of vehicles and added further enemy installations such as sniper towers. It also features an online cooperative mode.
Music video
The theme tune for the game ("War!") was written by the lead game designer Jon Hare, with musician Richard Joseph. Vocals were sung by Hare himself. A music video of the song was put together to promote the original release.
Shot over just one day and on a total budget of £500, it featured the entire team dressed up in military uniforms, an assortment of masks (including Mario and Donald Duck masks) and toy guns. The version of the music track used is more complete than the one that appeared in the 16-bit versions, and was recorded professionally. In fact, the menu screen track is also a pared-down version of a proper song, featuring studio-standard vocals. Both of these tracks were written and performed by Jon Hare, as were many of the other songs featured in Sensible's games.
Games
Cannon Fodder
Production of Cannon Fodder began following the completion of the successful strategy game Mega Lo Mania. Sensible Software wanted to create another strategy game featuring mouse control and the notion of sending troops on missions, but with more action than had been used in Mega Lo Mania. Production began in 1991 but was slowed due to Mega Drive conversions of other games. Cannon Fodder lost its provisional publisher in the aftermath of owner Robert Maxwell's death. As development work resumed, the team gradually reduced the complexity of the strategy gameplay in favour of more direct control and action gameplay. In May 1993, Sensible Software found a new publisher in Virgin Interactive, which released the game in November of that year.
Amiga magazines rated the game positively, widely awarding scores of over 90%, while Amiga Action awarded an unprecedented score, calling it the best game of the year. Critics praised the fun, addictiveness, music and humour of the game. The game also drew criticism in the Daily Star for its juxtaposition of war and humour and its use of iconography closely resembling the remembrance poppy.
Ports
Once Sensible Software was sold off to Codemasters, the decision was taken to port the game over to the Game Boy Color. The limit of two men per squad and a much smaller playing area meant changes had to be made to the gameplay, mainly to make it easier. Jon Hare described the change as converting "11-a-side football to 5-a-side football".
In 2004, Jon Hare set up a small mobile phone games team known as Tower Studios. Their first release was Sensible Soccer in 2004, followed by Cannon Fodder in 2005. Both titles were published by Kuju Entertainment. The games were only playable on certain color models, and because many keypads could not register diagonal movement, the control systems for both games had to be radically redesigned.
Cannon Soccer
In 1994, a free minigame called Cannon Soccer (or Cannon Fodder - Amiga Format Christmas Special) was included on the coverdisk of the Amiga Format Christmas issue. It was essentially two bonus levels of Cannon Fodder in which the soldiers fought hordes of Sensible Soccer players in a snowy landscape. The levels were titled "Land of Hope and Glory", and "It's Snow Time".
Sensible Soccer 92/93 Meets Bulldog Blighty
One of the demos on the Amiga Power cover disk 21 was Sensible Soccer Meets Bulldog Blighty. It featured a mode of play that involved replacing players with soldiers from Cannon Fodder and the ball with a hand grenade. The grenade would randomly begin to flash and would eventually explode after a few minutes, killing any nearby players. The magazine described it as a "1944 version of Sensible Soccer", though The Daily Telegraph compared it to the Christmas-time football match in 1914.
Cannon Fodder 2
Cannon Fodder 2 is a 1994 sequel featuring very similar gameplay and graphics, to the point where an Amiga Computing review suggested it had more in common with a data disk than a true sequel. The game featured levels set in deserts and on an alien spaceship, as well as levels with a medieval theme. Amiga Computing rated the game at 71%.
The designer Stuart Campbell wrote: "CF2 was a cross-cultural kinda game. Levels were inspired by films, music, other games, politics and events. Titles came from songs, books, and all manner of other sources".
Cancelled sequels
After selling Sensible Software to Codemasters, Jon Hare ended up consulting on many of their development projects, one of which was the PS2 title Prince Naseem Boxing. Work on this title was performed in a satellite studio based in Hammersmith, London. However, due to the commercial failure of this title, the studio was shut down. A casualty of this was the cancellation of a 3D remake of Cannon Fodder, something that Hare had been working on for at least nine months. Hare did speak about how he was looking to expand on the whole theme of war and include gameplay not just set on the battlefield: "I'd like to focus on the public's perceptions of war and warfare. There's many interesting things that go on behind the scenes with politicians".
In an interview with Eurogamer in late 2005, Hare confirmed that there was up to two years' work (on and off) put into a 3D update of Cannon Fodder: "I designed Cannon Fodder 3 with Codies six years ago, development stopped and started three times and eventually it was seemingly permanently halted when the London studio was closed four years ago. Nothing would please me more than to see this project resurrected, it was very advanced in its structure and therefore would need little modernisation".
In August 2006, Codemasters London announced a brand new version of Cannon Fodder for the PlayStation Portable. The game would have retained its familiar top down view, and the big heads of the soldiers, and for the first time the game would have been 3D. After a large launch announcement which included character renders and screenshots, the game was quietly canceled without explanation. In a later interview, Hare said: "Unfortunately, through no fault of their own, Codemasters hit economic problems and had to sell the studio, so everything just went".
Cannon Fodder 3
In 2008, Codemasters, by then the owner of the series' intellectual property, licensed Russian publisher Game Factory Interactive to develop another sequel. GFI created the game along with developer Burut CT and released it in Russia and the Commonwealth of Independent States in December 2011. English-language media speculated on whether GFI was permitted to release the game outside of that region, but Codemasters ultimately clarified that it had reserved, but declined, the option of publishing the game. GFI released the game in Europe and North America in February 2012 via a download service. The game retains the core style of its predecessors but with more advanced graphics, a counter-terrorism theme and a greater array of weapons and units. English-language publications gave the game mixed, mediocre reviews, with both more positive and more negative reviews appearing elsewhere in Europe.
References
Electronic Arts franchises
Sensible Software
Video game franchises
Video game franchises introduced in 1993 |
2997298 | https://en.wikipedia.org/wiki/RULE%20Project | RULE Project | RULE (Run Up-to-Date Linux Everywhere) is a project that aims to run up-to-date Linux software on old PCs (5 years old or more) by recompiling recent programs, and changing some of their code, to make them use fewer resources so they can run conveniently.
The reasons
The principal advantages of using old PCs with up-to-date software are:
support for more up-to-date standards (HTML4, Java, etc.)
more secure to use because of patched security holes
old PCs may consume less energy than new ones, making them cheaper to use and reducing pollution
re-use PCs that would otherwise be discarded, thus reducing e-waste
PCs are not cheap, especially for third-world or Eastern European countries, or for big volumes (for example, a school needs a lot of PCs)
Information about the project
The distributions of Linux that have been the focus for the project are Fedora Core and Red Hat, but users/developers of other distributions are welcome.
The main activity of the RULE site is to test contributors' packages and provide user documentation and easy installation procedures.
See also
Lightweight Linux distribution
Linux Terminal Server Project (a very different approach to the same problem)
External links
Official website (no longer in service)
A Tribute to the RULE Project
SLINKY, the installer
Linux package management-related software
Free software projects |
23870546 | https://en.wikipedia.org/wiki/Distributed%20firewall | Distributed firewall | A firewall is a system or group of systems (router, proxy, or gateway) that implements a set of security rules to enforce access control between two networks to protect the "inside" network from the "outside" network. It may be a hardware device or a software program running on a secure host computer. In either case, it must have at least two network interfaces, one for the network it is intended to protect, and one for the network it is exposed to. A firewall sits at the junction point or gateway between the two networks, usually a private network and a public network such as the Internet.
Evolution of distributed firewall
Conventional firewalls rely on the notions of restricted topology and control entry points to function. More precisely, they rely on the assumption that everyone on one side of the entry point—the firewall—is to be trusted, and that anyone on the other side is, at least potentially, an enemy.
Distributed firewalls are host-resident security software applications that protect the enterprise network's servers and end-user machines against unwanted intrusion. They offer the advantage of filtering traffic from both the Internet and the internal network. This enables them to prevent hacking attacks that originate from both the Internet and the internal network. This is important because the most costly and destructive attacks still originate from within the organization.
They are like personal firewalls, except they offer several important advantages like central management, logging, and in some cases, access-control granularity. These features are necessary to implement corporate security policies in larger enterprises. Policies can be defined and pushed out on an enterprise-wide basis.
A key feature of distributed firewalls is centralized management. The ability to populate servers and end-user machines, and to configure and "push out" consistent security policies, helps to maximize limited resources. The ability to gather reports and maintain updates centrally makes distributed security practical. Distributed firewalls help in two ways: first, remote end-user machines can be secured; second, they secure critical servers on the network, preventing intrusion by malicious code and "jailing" other such code by not letting the protected server be used as a launchpad for expanded attacks.
Usually deployed behind the traditional firewall, they provide a second layer of defense. They work by enabling only essential traffic into the machine they protect, prohibiting other types of traffic to prevent unwanted intrusions. Whereas the perimeter firewall must take a generalist, common denominator approach to protecting servers on the network, distributed firewalls act as specialists.
Some of the problems with conventional firewalls that led to distributed firewalls are as follows.
Due to increasing line speeds and the more computation-intensive protocols that a firewall must support, firewalls tend to become congestion points. This gap between processing and networking speeds is likely to increase, at least for the foreseeable future; while computers (and hence firewalls) are getting faster, the combination of more complex protocols and the tremendous increase in the amount of data that must be passed through the firewall has outpaced Moore's law and will likely continue to do so.
There exist protocols, and new protocols are designed, that are difficult to process at the firewall, because the latter lacks certain knowledge that is readily available at the endpoints. FTP and RealAudio are two such protocols. Although there exist application-level proxies that handle such protocols, such solutions are viewed as architecturally “unclean” and in some cases too invasive.
Likewise, because of its dependence on the network topology, a perimeter firewall can only enforce a policy on traffic that traverses it. Thus, traffic exchanged among nodes in the protected network cannot be controlled. This gives an attacker who is already an insider, or who can somehow bypass the firewall, complete freedom to act.
Worse yet, it has become trivial for anyone to establish a new, unauthorized entry point to the network without the administrator's knowledge and consent. Various forms of tunnels, wireless, and dial-up access methods allow individuals to establish backdoor access that bypasses all the security mechanisms provided by traditional firewalls. While firewalls are in general not intended to guard against misbehavior by insiders, there is a tension between internal needs for more connectivity and the difficulty of satisfying such needs with a centralized firewall.
IPsec is a protocol suite, standardized by the IETF, which provides network-layer security services such as packet confidentiality, authentication, data integrity, replay protection, and automated key management.
This is an artifact of firewall deployment: internal traffic that is not seen by the firewall cannot be filtered; as a result, internal users can mount attacks on other users and networks without the firewall being able to intervene.
Large networks today tend to have a large number of entry points (for performance, failover, and other reasons). Furthermore, many sites employ internal firewalls to provide some form of compartmentalization. This makes administration particularly difficult, both from a practical point of view and with regard to policy consistency, since no unified and comprehensive management mechanism exists.
End-to-end encryption can also be a threat to firewalls, as it prevents them from looking at the packet fields necessary to do filtering. Allowing end-to-end encryption through a firewall implies considerable trust to the users on behalf of the administrators.
Finally, there is an increasing need for finer-grained access control which standard firewalls cannot readily accommodate without greatly increasing their complexity and processing requirements.
Distributed firewalls are host-resident security software applications that protect the enterprise network's critical endpoints, that is, its servers and end-user machines, against unwanted intrusion. In this concept, the security policy is defined centrally and the enforcement of the policy takes place at each endpoint (hosts, routers, etc.). Usually deployed behind the traditional firewall, they provide a second layer of protection.
Since all the hosts on the inside are trusted equally, if any of these machines are subverted, they can be used to launch attacks on other hosts, especially trusted hosts for protocols like rlogin. Thus there is a concerted effort from industry security organizations to move towards a system which has all the aspects of a desktop firewall but with centralized management, like distributed firewalls.
Distributed, host-resident firewalls prevent the hacking of both the PC and its use as an entry point into the enterprise network. A compromised PC can make the whole network vulnerable to attacks. The hacker can penetrate the enterprise network uncontested and steal or corrupt corporate assets.
Basic working
Distributed firewalls are often kernel-mode applications that sit at the bottom of the OSI stack in the operating system. They filter all traffic regardless of its origin—the Internet or the internal network. They treat both the Internet and the internal network as "unfriendly". They guard the individual machine in the same way that the perimeter firewall guards the overall network.
Distributed firewalls rest on three notions:
A policy language that states what sort of connections are permitted or prohibited,
Any of a number of system management tools, such as Microsoft's SMS or ASD, and
IPSEC, the network-level encryption mechanism for Internet Protocols (TCP, UDP, etc.).
The basic idea is simple. A compiler translates the policy language into some internal format. The system management software distributes this policy file to all hosts that are protected by the firewall. And incoming packets are accepted or rejected by each "inside" host, according to both the policy and the cryptographically-verified identity of each sender.
Policies
One of the most often used terms in network security, and in distributed firewalls in particular, is "policy". It is essential to know about policies. A "security policy" defines the security rules of a system. Without a defined security policy, there is no way to know what access is allowed or blocked. A simple example for a firewall is:
Allow all connections to the web server.
Deny all other access.
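As an illustrative sketch (the rule format, address and ports below are hypothetical, not taken from any particular product), such a policy can be represented as an ordered rule list evaluated with a default-deny fallback:

```python
# Hypothetical two-rule policy: allow connections to the web server,
# deny all other access. Address and ports are illustrative only.
WEB_SERVER = "10.0.0.5"

POLICY = [
    # (action, destination host, destination port)
    ("allow", WEB_SERVER, 80),
    ("allow", WEB_SERVER, 443),
]

def decide(dst_host: str, dst_port: int) -> str:
    """Return the action of the first matching rule, else default-deny."""
    for action, host, port in POLICY:
        if dst_host == host and dst_port == port:
            return action
    return "deny"  # anything not explicitly permitted is blocked

print(decide("10.0.0.5", 80))  # allow
print(decide("10.0.0.9", 22))  # deny
```

The default-deny fallback encodes the second rule: any connection not explicitly allowed is blocked.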
The distribution of the policy can be different and varies with the implementation. It can be either directly pushed to end systems, or pulled when necessary.
Pull technique
While booting up, a host pings the central management server to check whether the central management server is up and active. It registers with the central management server and requests the policies it should implement. The central management server then provides the host with its security policies.
For example, a license server or a security clearance server can be asked if a certain communication should be permitted. A conventional firewall could do the same, but it lacks important knowledge about the context of the request. End systems may know things like which files are involved, and what their security levels might be. Such information could be carried over a network protocol, but only by adding complexity.
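The pull sequence can be sketched as follows; the class and method names are hypothetical, and a real implementation would use an authenticated network protocol rather than in-process calls:

```python
# Sketch of the pull technique: on boot, a host checks that the central
# management server is reachable, registers, and pulls its policy.
class ManagementServer:
    def __init__(self, policies):
        self.policies = policies   # host name -> list of policy rules
        self.registered = set()

    def ping(self):
        return True                # server is up and active

    def register(self, host_name):
        self.registered.add(host_name)

    def get_policy(self, host_name):
        # Default-deny for hosts with no assigned policy.
        return self.policies.get(host_name, ["deny all"])

class Host:
    def __init__(self, name, server):
        self.name, self.server, self.policy = name, server, []

    def boot(self):
        if self.server.ping():                    # 1. check server is alive
            self.server.register(self.name)       # 2. register with it
            self.policy = self.server.get_policy(self.name)  # 3. pull policy

server = ManagementServer({"web01": ["allow tcp/80", "deny all"]})
host = Host("web01", server)
host.boot()
print(host.policy)  # ['allow tcp/80', 'deny all']
```

The safe default for unregistered or unknown hosts mirrors the default-deny stance of firewall policies.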
Push technique
The push technique is employed when the policies are updated at the central management side by the network administrator and the hosts have to be updated immediately. This push technology ensures that the hosts always have the updated policies at any time.
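A minimal sketch of the push flow, under the same caveats (hypothetical names; no real transport or authentication shown):

```python
# Sketch of the push technique: when the administrator updates the policy on
# the central management server, it is pushed at once to every registered
# host, so no host is left running a stale policy.
class Host:
    def __init__(self, name):
        self.name, self.policy, self.version = name, [], 0

    def receive_policy(self, policy, version):
        self.policy, self.version = policy, version  # enforce new policy

class ManagementServer:
    def __init__(self):
        self.hosts, self.version = [], 0

    def register(self, host):
        self.hosts.append(host)

    def update_policy(self, policy):
        self.version += 1
        for host in self.hosts:   # push immediately to all hosts
            host.receive_policy(policy, self.version)

server = ManagementServer()
a, b = Host("a"), Host("b")
server.register(a)
server.register(b)
server.update_policy(["allow tcp/443", "deny all"])
print(a.version == b.version == 1)  # True: both hosts updated immediately
```

Tracking a policy version number is one simple way for the administrator to confirm that every host holds the current policy.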
The policy language defines which inbound and outbound connections on any component of the network policy domain are allowed, and can affect policy decisions on any layer of the network, whether rejecting or passing certain packets or enforcing policies at the application layer.
Components of a distributed firewall
A central management system for designing the policies.
A transmission system to transmit these policies.
Implementation of the designed policies at the client end.
Central management system
Central Management, a component of distributed firewalls, makes it practical to secure enterprise-wide servers, desktops, laptops, and workstations. Central management provides greater control and efficiency and it decreases the maintenance costs of managing global security installations. This feature addresses the need to maximize network security resources by enabling policies to be centrally configured, deployed, monitored, and updated. From a single workstation, distributed firewalls can be scanned to understand the current operating policy and to determine if updating is required.
Policy distribution
The policy distribution scheme should guarantee the integrity of the policy during transfer. The distribution mechanism varies with the implementation: the policy can be either directly pushed to end systems, or pulled when necessary.
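One way to guarantee integrity during transfer is a keyed checksum over the serialized policy. The sketch below uses an HMAC with a shared key purely as an illustration; the choice of mechanism, key handling and policy format are assumptions, and a public-key signature would serve the same purpose:

```python
# Sketch: the server tags the serialized policy with an HMAC; the host
# recomputes and compares the tag before enforcing the received policy.
import hmac
import hashlib

KEY = b"shared-secret-between-server-and-hosts"  # hypothetical shared key

def sign_policy(policy: bytes) -> bytes:
    return hmac.new(KEY, policy, hashlib.sha256).digest()

def verify_policy(policy: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(sign_policy(policy), tag)

policy = b"allow tcp/80; deny all"
tag = sign_policy(policy)

print(verify_policy(policy, tag))        # True: policy arrived untampered
print(verify_policy(b"allow all", tag))  # False: modified in transit
```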
Host-end implementation
The security policies transmitted from the central management server have to be implemented by the host. The host-end part of the distributed firewall provides the administrative control for the network administrator over the implementation of policies. The host allows traffic based on the security rules it has implemented.
Threat comparison
Distributed firewalls have both strengths and weaknesses when compared to conventional firewalls. By far the biggest difference, of course, is their reliance on topology. If the network topology does not permit reliance on traditional firewall techniques, there is little choice. A more interesting question is how the two types compare in a closed, single-entry network. That is, if either will work, is there a reason to choose one over the other?
Service exposure and port scanning
Both types of firewalls are excellent at rejecting connection requests for inappropriate services. Conventional firewalls drop the requests at the border; distributed firewalls do so at the host. A more interesting question is what is noticed by the host attempting to connect. Today, such packets are typically discarded, with no notification. A distributed firewall may choose to discard the packet, under the assumption that its legitimate peers know to use IPSEC; alternatively, it may instead send back a response requesting that the connection be authenticated, which in turn gives notice of the existence of the host.
Firewalls built on pure packet filters cannot reject some "stealth scans" very well. One technique, for example, uses fragmented packets that can pass through unexamined because the port numbers aren't present in the first fragment. A distributed firewall will reassemble the packet and then reject it. On balance, against this sort of threat the two firewall types are at least comparable.
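The fragmented-scan case can be illustrated with a toy model: a filter that inspects only the first fragment never sees the port number, while a firewall that reassembles first can apply its port rule. Packet fields here are simplified and hypothetical:

```python
# Toy model of the fragmented "stealth scan". A per-fragment filter that only
# examines the first fragment never sees the port number, while a
# host-resident firewall can reassemble the datagram before deciding.
BLOCKED_PORTS = {23}  # e.g. deny telnet

def filter_first_fragment(fragments):
    first = min(fragments, key=lambda f: f["offset"])
    port = first.get("dst_port")  # absent when the header is split
    return "deny" if port in BLOCKED_PORTS else "allow"

def filter_reassembled(fragments):
    full = {}
    for frag in sorted(fragments, key=lambda f: f["offset"]):
        full.update(frag)  # merge fragments back into one datagram
    return "deny" if full.get("dst_port") in BLOCKED_PORTS else "allow"

# The scan splits the transport header so the port lands in fragment two.
scan = [{"offset": 0}, {"offset": 8, "dst_port": 23}]
print(filter_first_fragment(scan))  # allow -- the scan slips through
print(filter_reassembled(scan))     # deny  -- port visible after reassembly
```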
IP address spoofing
Attacks based on forged source addresses are a well-known problem. Using cryptographic mechanisms most likely prevents such attacks, under the assumption that the trusted repository containing all necessary credentials has not itself been compromised. Conventional firewalls can address these problems with corresponding rules for discarding packets at the network perimeter, but this will not prevent such attacks originating from inside the network policy domain.
Malicious software
With the widespread use of distributed object-oriented systems like CORBA, client-side use of Java, weaknesses in mail readers, and the like, there is a wide variety of threats residing in the application and intermediate levels of communication traffic. Firewall mechanisms at the perimeter can be useful by inspecting incoming e-mails for known malicious code fingerprints, but can be confronted with complex, and thus resource-consuming, situations when making decisions on other code, like Java.
Using the framework of a distributed firewall, and especially a policy language which allows for policy decisions at the application level, can circumvent some of these problems, under the condition that the contents of such communication packets can be interpreted semantically by the policy-verifying mechanisms. Stateful inspection of packets proves to be easily adapted to these requirements and allows for finer granularity in decision making. Furthermore, malicious code may be completely disguised from the screening unit at the network perimeter, given the use of virtual private networks and enciphered communication traffic in general, which can completely disable such policy enforcement on conventional firewalls.
Intrusion detection
Many firewalls detect attempted intrusions. If that functionality is to be provided by a distributed firewall, each individual host has to notice probes and forward them to some central location for processing and correlation.
The former problem is not hard; many hosts already log such attempts. One can make a good case that such detection should be done in any event. The collection is more problematic, especially at times of poor connectivity to the central site. There is also the risk of coordinated attacks, in effect causing a denial-of-service attack against the central machine.
Insider attacks
Given a conventional firewall's natural view of the network topology as consisting of an inside and an outside, problems can arise once one or more members of the policy network domain have been compromised. Perimeter firewalls can only enforce policies between distinct networks and offer no option to circumvent the problems which arise in the situation discussed above.
A distributed firewall's independence from topological constraints supports the enforcement of policies whether hosts are members or outsiders of the overall policy domain, basing its decisions on authenticating mechanisms which are not inherent characteristics of the network's layout. Moreover, compromise of an endpoint, whether by a legitimate user or an intruder, will not weaken the overall network in a way that leads directly to the compromise of other machines, given that the deployment of virtual private networks prevents sniffing of communication traffic in which the attacked machine is not involved.
On the other hand, on the endpoint itself nearly the same problems arise as in conventional firewalls: assuming that a machine has been taken over by an adversary must lead to the conclusion that the policy enforcement mechanisms themselves may be broken. The installation of backdoors on this machine can be done quite easily once the security mechanisms are flawed, and in the absence of a perimeter firewall there is no longer a trusted entity that might prevent arbitrary traffic entering or leaving the compromised host.
Additionally, the use of tools like SSH allows tunneling of other applications' communication and cannot be prevented without proper knowledge of the decrypting credentials; moreover, once an attack has succeeded, the verifying mechanisms themselves may no longer be trusted.
At first glance, the biggest weakness of distributed firewalls is their greater susceptibility to lack of cooperation by users. What happens if someone changes the policy files on their own?
Distributed firewalls can reduce the threat of actual attacks by insiders, simply by making it easier to set up smaller groups of users. Thus, one can restrict access to a file server to only those users who need it, rather than letting anyone inside the company pound on it.
It is also worth expending some effort to prevent casual subversion of policies. If policies are stored in a simple ASCII file, a user wishing to, for example, play a game could easily turn off protection. Requiring the would-be uncooperative user to go to more trouble is probably worthwhile, even if the mechanism is theoretically insufficient.
For example, policies could be digitally signed, and verified by a frequently-changing key in an awkward-to-replace location. For more stringent protections, the policy enforcement can be incorporated into a tamper-resistant network card.
Computer network security |
6368345 | https://en.wikipedia.org/wiki/Micral | Micral | Micral is a series of microcomputers produced by the French company Réalisation d'Études Électroniques (R2E), beginning with the Micral N in early 1973. The Micral N was the first commercially available microprocessor-based computer.
In 1986, three judges at The Computer Museum, Boston – Apple II designer and Apple Inc. co-founder Steve Wozniak, early MITS employee and PC World publisher David Bunnell, and the museum's associate director and curator Oliver Strimpel – awarded the title of "first personal computer using a microprocessor" to the 1973 Micral. The Micral N was the earliest commercial, non-kit personal computer based on a microprocessor (in this case, the Intel 8008).
The Computer History Museum currently says that the Micral is one of the earliest commercial, non-kit personal computers. The 1971 Kenbak-1, invented before the first microprocessor, is considered to be the world's first "personal computer". That machine did not have a one-chip CPU but instead was based purely on small-scale integration TTL chips.
Micral N
R2E founder André Truong Trong Thi (EFREI degree, Paris), a French immigrant from Vietnam, asked Frenchman François Gernelle to develop the Micral N computer for the Institut National de la Recherche Agronomique (INRA), starting in June 1972. Alain Perrier of INRA was looking for a computer for process control in his crop evapotranspiration measurements. The software was developed by Benchetrit. Beckmann designed the I/O boards and controllers for peripheral magnetic storage. Lacombe was responsible for the memory system, I/O high speed channel, power supply and front panel. Gernelle invented the Micral N, which was much smaller than existing minicomputers. The January 1974 Users Manual called it "the first of a new generation of mini-computer whose principal feature is its very low cost," and said, "MICRAL's principal use is in process control. It does not aim to be an universal mini-computer."
The computer was to be delivered in December 1972, and Gernelle, Lacombe, Benchetrit and Beckmann had to work in a cellar in Châtenay-Malabry for 18 hours a day in order to deliver the computer in time. The software, the ROM-based MIC 01 monitor and the ASMIC 01 assembler, was written on an Intertechnique Multi-8 minicomputer using a cross assembler. The computer was based on an Intel 8008 microprocessor clocked at 500 kHz. It had a backplane bus, called the Pluribus, with a 74-pin connector. Fourteen boards could be plugged into a Pluribus; with two Pluribus buses, the Micral N could support up to 24 boards. The computer used MOS memory instead of core memory. The Micral N could support parallel and serial input/output. It had 8 levels of interrupt and a stack. The computer was programmed with punched tape, and used a teleprinter or modem for I/O. The front panel console was optional, offering customers the option of designing their own console to match a particular application. It was delivered to the INRA in January 1973, and commercialized in February 1973 for FF 8,500 (about $1,750), making it a cost-effective replacement for minicomputers and auguring the era of the PC.
France had produced the first microcomputer. A year would pass before the first North American microcomputer, SCELBI, was advertised in the March 1974 issue of QST, an amateur radio magazine.
Indeed, INRA had originally planned to use PDP-8 computers for process control, but the Micral N could do the same work at a fifth of the cost. An 8-inch floppy disk reader was added to the Micral in December 1973, following an order from the Commissariat à l'Énergie Atomique. This was made possible by the pile-canal, a buffer that could accept one megabyte per second. In 1974, a keyboard and screen were fitted to the Micral computers. A hard disk (first made by CAELUS, then by Diablo) became available in 1975. In 1979, the Micral 8031 D was equipped with a 5¼-inch, 5-megabyte hard disk made by Shugart.
Later models
Following the April 1974 introduction of the Intel 8080, R2E introduced the second and third Micral models, the Micral G and Micral S, both 8080-based and clocked at 1 MHz.
In November 1975, R2E signed Warner & Swasey Company as the exclusive manufacturer and marketer of the Micral line in the United States and Canada. Warner & Swasey marketed its Micral-based system for industrial data processing applications such as engineering data analysis, accounting and inventory control. R2E and Warner & Swasey displayed the Micral M multiple-microcomputer system at the June 1976 National Computer Conference. The Micral M consisted of up to eight Micral S microcomputers, each with its own local memory and access to a shared common memory, so that local and common memory appeared as one monolithic memory to each processor. The system had a distributed multiprocessor operating system that R2E said was based on sharing common resources and real-time task management.
Some time after the July 1976 introduction of the Zilog Z80 came the Z80-based Micral CZ. The 8080-based Micral C, an intelligent CRT terminal designed for word processing and automatic typesetting, was introduced in July 1977. It has two Shugart SA400 minifloppy drives and a panel of system control and sense switches below the minifloppy drives. Business application language (BAL) and FORTRAN are supported. By October, R2E had set up an American subsidiary, R2E of America, in Minneapolis. The Micral V Portable (1978) could run FORTRAN and assembler under the Sysmic operating system, or BAL. The original Sysmic operating system was renamed Prologue in 1978. Prologue was able to perform real-time multitasking, and was a multi-user system. R2E offered CP/M for the Micral C in 1979.
The R2E Micral CCMC Portal portable microcomputer made its official appearance in September 1980 at the SICOB show in Paris. It was designed and marketed by the studies and developments department of François Gernelle at the French firm R2E Micral, at the request of the company CCMC, which specialized in payroll and accounting. The Portal was based on an 8-bit Intel 8085 processor clocked at 2 MHz. It weighed 12 kg and measured 45 cm × 45 cm × 15 cm, providing total mobility. Its operating system was Prologue.
Later Micrals used the Intel 8088. The last Micral designed by François Gernelle was the 9020. In 1981, R2E was bought by Groupe Bull. Starting with the Bull Micral 30, which could use both Prologue and MS-DOS, Groupe Bull transformed the Micral computers into a line of PC compatibles. François Gernelle left Bull in 1983.
Legacy
Truong's R2E sold about 90,000 units of the Micral that were mostly used in vertical applications such as highway toll booths and process control.
Litigation followed after Truong began claiming that he alone had invented the first personal computer. The courts did not rule in favor of Truong, who was declared "the businessman, but not the inventor", giving the sole claim as inventor of the first personal computer in 1998 to Gernelle and the R2E engineering team.
In the mid-1970s, Philippe Kahn was a programmer for the Micral. Kahn later headed Borland which released Turbo Pascal and Sidekick in 1983.
Paul G. Allen, the co-founder of Microsoft with Bill Gates, bought a Micral N from the auctioneer Rouillac at the Artigny Castle in France on June 11, 2017, for his Seattle museum, Living Computers: Museum + Labs.
Micral computer models
R2E series
1973 : Micral N, first microcomputer, built by François Gernelle.
1974 : Micral G, Intel 8008 at 1 MHz, 16K RAM
1974 : Micral S, Intel 8080
1976 : Micral M, distributed system, Intel 8080 × 8
1977 : Micral C, Intel 8080, 24K RAM, integrated monitor, floppy disc drive
1978 : Micral V, Intel 8080, 32K RAM, portable
Bull series
1979 : Micral 80-30, Zilog Z80
1980 : Micral 80-20, Zilog Z80A at 3 MHz
1980 : Portal, Intel 8085 at 2 MHz
1981 : Micral P2, Zilog Z80 at 5 MHz, 64K RAM
1981 : Micral X, Zilog Z80, 10 MB hard disc (CII Honeywell Bull D140)
1983 : Micral 90-20, Intel 8088 at 5 MHz
1983 : Micral 90-50, Intel 8086 at 8 MHz, 256K RAM
PC compatible series
1985 : Bull Micral 30, Intel 8088 at 4.77 MHz, PC-XT compatible
1986 : Bull Micral 60, Intel 80286 at 6 MHz, PC-AT compatible
1986 : Bull Micral 35, Intel 80286 at 8 MHz
1987 : Bull Micral 40, Intel 80286 at 8 MHz
1988 : Bull Micral 45, Intel 80286 at 12 MHz
1988 : Bull Micral 65, Intel 80286 at 12 MHz
1988 : Bull Micral 75, Intel 80386 at 8 MHz
1988 : Bull Micral Attaché, Intel 8086 at 9.54 MHz, portable
1989 : Bull Micral 200, Intel 80286 at 12 MHz
1989 : Bull Micral 600, Intel 80386 at 25 MHz
In 1989, Bull bought Zenith Data Systems and began releasing PC compatibles under the Zenith brand.
See also
History of computing hardware (1960s–present)
History of personal computers
Intellec
List of early microcomputers
Portal laptop computer
References
External links
Gernelle and Truong
A picture of the front panel of the Micral N
François Gernelle, La naissance du premier micro-ordinateur : le Micral N ("The birth of the first microcomputer: the Micral N"), Proceedings of the second symposium on the history of computing (CNAM, Paris, 1990)
François Gernelle, Communication sur les choix architecturaux et technologiques qui ont présidé à la conception du « Micral », premier micro-ordinateur au monde ("On the architectural and technological choices behind the design of the Micral, the world's first microcomputer"), Proceedings of the fifth symposium on the history of computing (Toulouse, 1998)
French patent FR2216883 (number INPI: 73 03 553), German patent DE2404886, Dutch Patent NL7401328, Japanese patent JP50117333 (inventor François Gernelle) RECHNER, INSBESONDERE FUER REALZEIT-ANWENDUNG (August 8, 1974)
French patent FR2216884(number INPI: 7303552), German patent DE2404887, Dutch patent NL7401271, Japanese patent JP50117327 (inventor François Gernelle) KANAL FUER DEN INFORMATIONSAUSTAUSCH ZWISCHEN EINEM RECHNER UND SCHNELLEN PERIPHEREN EINHEITEN (August 8, 1974)
: Data processing system, specially for real-time applications
: Channel for exchanging information between a computer and rapid peripheral units (pile-canal)
MICRAL documentation at bitsavers.org
Early microcomputers
Computer companies of France
Groupe Bull
8-bit computers
Richard A. Clarke

Richard Alan Clarke (born October 27, 1950) is an American novelist and former government official. He was National Coordinator for Security, Infrastructure Protection, and Counter-terrorism for the United States between 1998 and 2003.
Clarke worked for the State Department during the presidency of Ronald Reagan. In 1992, President George H.W. Bush appointed him to chair the Counter-terrorism Security Group and to a seat on the United States National Security Council. President Bill Clinton retained Clarke and in 1998 promoted him to be the National Coordinator for Security, Infrastructure Protection, and Counter-terrorism, the chief counter-terrorism adviser on the National Security Council. Under President George W. Bush, Clarke initially continued in the same position but no longer had Cabinet-level access. He later was appointed as the Special Advisor to the President on cybersecurity. Clarke left the Bush administration in 2003.
Clarke came to widespread public attention for his counter-terrorism role in March 2004: he published a memoir about his service in government, Against All Enemies, appeared on the 60 Minutes television news magazine, and testified before the 9/11 Commission. In all three cases, Clarke sharply criticized the Bush administration's attitude toward counter-terrorism before the 9/11 terrorist attacks, and its decision afterward to wage war and invade Iraq. Clarke was criticized by some supporters of Bush's decisions.
After leaving the U.S. government, and with its legal approval, Clarke helped the United Arab Emirates set up a cybersecurity unit intended to protect the country. Years after Clarke left, some components of the program were acquired by a series of firms that reportedly went on to surveil women's rights activists, UN diplomats and FIFA officials.
Background
Richard Clarke was born in Boston, Massachusetts, in 1950, the son of a chocolate factory worker and a nurse. He attended the Boston Latin School, where he graduated in 1968. He attended college at the University of Pennsylvania, where he received a bachelor's degree in 1972 and was selected for the Sphinx Senior Society.
After starting as a management intern at the U.S. Department of Defense and later working as an analyst on European security issues, Clarke went to graduate school. He earned a master's degree in management in 1978 from the Massachusetts Institute of Technology.
Government career
In 1973, Clarke began work in the federal government as a management intern in the Department of Defense. He worked in numerous areas of defense while in headquarters.
From 1979 to 1985, he worked at the Department of State as a career analyst in the Bureau of Politico-Military Affairs. Beginning in 1985, Clarke was appointed by the Ronald Reagan administration as Deputy Assistant Secretary of State for Intelligence, his first political-appointee position as a Republican Party member.
During the administration of George H.W. Bush, he was appointed as the Assistant Secretary of State for Political-Military Affairs. He coordinated diplomatic efforts to support the 1990–1991 Gulf War and subsequent security arrangements.
Democrat Bill Clinton kept Clarke on in his administration, appointing him in 1998 as National Coordinator for Security, Infrastructure Protection, and Counter-terrorism for the National Security Council. In this position, he had cabinet-level access to the president.
Clarke continued as counter-terrorism coordinator at the NSC during the first year of the George W. Bush administration, but no longer had access, as the position's scope was reduced. His written recommendations and memos had to go through layers of political appointees above him. In 2001, he was appointed as Special Advisor to the President on cybersecurity and cyberterrorism. He resigned from the Bush administration in early 2003.
Clarke's positions inside the government have included:
United States Department of State 1985–1992
Assistant Secretary of State for Politico-Military Affairs, 1989–1992
Deputy Assistant Secretary of State for Intelligence, 1985–1988
United States National Security Council, 1992–2003
Special Advisor, 2001–2003
National Coordinator for Security, Infrastructure Protection, and Counter-terrorism, 1998–2001
Chairman of the Counter-terrorism Security Group, 1992–2003
Clinton administration
During the Rwandan genocide of 1994, Clarke advised Madeleine Albright, then US Ambassador to the United Nations, to request the UN to withdraw all UN troops from the country. She refused, and permitted Lieutenant-General Roméo Dallaire to keep a few hundred UN troops; his forces saved tens of thousands from the genocide.
Later Clarke told Samantha Power, "It wasn't in America's national interest. If we had to do the same thing today and I was advising the President, I would advise the same thing." He supervised the writing of PDD-25, a classified presidential decision directive that established criteria for future US participation in UN peacekeeping operations. It also proposed a reduced military and economic role for the United States in Rwanda.
After Islamists took control in Sudan in a 1989 coup d'état, the United States adopted a policy of disengagement with the authoritarian regime throughout the 1990s. After the September 11, 2001 terrorist attacks, however, some critics charged that the US should have moderated its policy toward Sudan earlier. The influence of Islamists there waned in the second half of the 1990s, and Sudanese officials began to indicate an interest in accommodating US concerns related to Osama bin Laden, who lived in Sudan until he was expelled in May 1996. (He was later held responsible for the 9/11 attacks.)
Timothy M. Carney, US ambassador to Sudan between September 1995 and November 1997, co-authored an op-ed in 2002 claiming that in 1997, Sudan offered to turn over its intelligence on bin Laden to the USA, but that Susan Rice, as National Security Council (NSC) Africa specialist, together with NSC terrorism specialist Richard A. Clarke, successfully lobbied for continuing to bar U.S. officials, including the CIA and FBI, from engaging with the Khartoum government. Similar allegations (that Susan Rice joined others in missing an opportunity to cooperate with Sudan on counter-terrorism) were made by David Rose, Vanity Fair contributing editor, and Richard Miniter, author of Losing Bin Laden.
Clarke was involved in supervising the investigation of Ramzi Yousef, one of the main perpetrators of the 1993 World Trade Center bombing, who had traveled to the United States on an Iraqi passport. Yousef is the nephew of Khalid Sheikh Mohammed, a senior al-Qaeda member. Many in the Clinton administration and the intelligence community believed Yousef's ties were evidence of a link between al-Qaeda's activities and the government of Iraq.
In February 1999, Clarke wrote the Deputy National Security Advisor that a reliable source reported Iraqi officials had met with Bin Laden and may have offered him asylum. Clarke advised against surveillance flights to track bin Laden in Afghanistan: he said that anticipating an attack, "old wily Usama will likely boogie to Baghdad," where he would be impossible to find. That year Clarke told the press in official statements that "Iraqi nerve gas experts" and al-Qaeda were linked to an alleged joint-chemical-weapons-development effort at the Al-Shifa pharmaceutical factory in Sudan.
Michael Scheuer is the former chief of the bin Laden Unit at the Counterterrorist Center at the CIA. Matthew Continetti wrote: "Scheuer believes that Clarke's risk aversion and politicking negatively impacted the hunt for bin Laden prior to September 11, 2001. Scheuer stated that his unit, codename 'Alec,' had provided information that could have led to the capture and or killing of Osama bin Laden on ten occasions during the Clinton administration, only to have his recommendations for action turned down by senior intelligence officials, including Clarke."
Operation Orient Express
In 1996, Clarke entered into a secret pact with Madeleine Albright, then US ambassador to the UN, Michael Sheehan, and James Rubin, to overthrow U.N. Secretary-General Boutros Boutros-Ghali, who was running unopposed for a second term in the 1996 selection. They dubbed the pact "Operation Orient Express" to reflect their hope "that many nations would join us in doing in the UN head." However, every other member of the Security Council voted for Boutros-Ghali. Despite severe criticism, Clarke and Sheehan prevailed upon President Clinton to resist international pressure and continue the US's solo veto. After four deadlocked meetings of the Security Council, Boutros-Ghali suspended his candidacy. He is the only U.N. Secretary-General ever to be denied a second term by a Security Council member veto.
The United States fought a four-round veto duel with France, forcing it to back down and accept the selection of US-educated Kofi Annan as the next Secretary-General. In his memoirs, Clarke said that "the entire operation had strengthened Albright's hand in the competition to be Secretary of State in the second Clinton administration."
Bush administration
On April 8, 2004, Condoleezza Rice was publicly interviewed by the 9/11 investigatory commission. She discussed Clarke and his communications with the Bush administration regarding bin Laden and associated terrorist plots targeting the United States. Clarke had written a memo dated January 25, 2001, to Rice. He urgently requested a meeting of the NSC's Principals Committee to discuss the growing al-Qaeda threat in the greater Middle East, and suggested strategies for combating al-Qaeda that might be adopted by the new Bush administration.
In his memoir, Against All Enemies, Clarke wrote that Condoleezza Rice decided that the position of National Coordinator for Counterterrorism should be downgraded. By demoting the office, he believed that the Administration sent a signal to the national security bureaucracy that reduced the salience of terrorism. No longer would Clarke's memos go to the President; instead they had to pass through a chain of command of National Security Advisor Condoleezza Rice and her deputy Stephen Hadley, who bounced every one of them back.
Within a week of the inauguration, I wrote to Rice and Hadley asking 'urgently' for a Principals, or Cabinet-level, meeting to review the imminent Al-Qaeda threat. Rice told me that the Principals Committee, which had been the first venue for terrorism policy discussions in the Clinton administration, would not address the issue until it had been 'framed' by the Deputies.
The National Commission On Terrorist Attacks Upon The United States reported in its eighth public hearing:
Clarke asked on several occasions for early principals meetings on these issues and was frustrated that no early meeting was scheduled.
No Principals Committee meetings on al Qaeda were held until September 4, 2001.
At the first Deputies Committee meeting on terrorism, held in April 2001, Clarke strongly suggested that the U.S. put pressure on both the Taliban and al-Qaeda by arming the Northern Alliance and other groups in Afghanistan. Simultaneously, he said that the US should target bin Laden and his leadership by restoring flights of the MQ-1 Predators. Deputy Secretary of Defense Paul Wolfowitz responded, "Well, I just don't understand why we are beginning by talking about this one man bin Laden." Clarke replied that he was talking about bin Laden and his network because it posed "an immediate and serious threat to the United States." According to Clarke, Wolfowitz turned to him and said, "You give bin Laden too much credit. He could not do all these things like the 1993 attack on New York, not without a state sponsor. Just because the FBI and CIA have failed to find the linkages does not mean they don't exist."
Clarke wrote in Against All Enemies that in the summer of 2001, the intelligence community was convinced of an imminent attack by al-Qaeda, but could not get the attention of the highest levels of the Bush administration.
At a July 5, 2001, White House gathering of the FAA, the Coast Guard, the FBI, Secret Service and INS, Clarke said that "something really spectacular is going to happen here, and it's going to happen soon."
Cyberterrorism and cybersecurity
Appointed in 2001 as Special Advisor to the President on Cybersecurity, Clarke spent his last year in the Bush administration focusing on cybersecurity and the threat of terrorism against the critical infrastructure of the United States. At a security conference in 2002, after citing statistics that indicated that less than 0.0025 percent of corporate revenue on average is spent on information-technology security, Clarke was heard to say, "If you spend more on coffee than on IT security, then you will be hacked. What's more, you deserve to be hacked."
9/11 Commission
On March 24, 2004, Clarke testified at the public 9/11 Commission hearings. He initially offered an apology to the families of 9/11 victims and said: "...your government failed you. Those entrusted with protecting you failed you. And I failed you. We tried hard, but that doesn't matter because we failed. And for that failure, I would ask, once all the facts are out, for your understanding and for your forgiveness."
Clarke's testimony during the hearings was consistent with his account in his memoir. Clarke said that before and during the 9/11 crisis, many in the administration were distracted from taking action against Osama bin Laden's al-Qaeda organization because of an existing pre-occupation with Iraq and Saddam Hussein. Clarke wrote that on September 12, 2001, President Bush "testily" asked him and his aides to try to find evidence that Saddam was connected to the terrorist attacks. In response, Clarke wrote a report stating there was no evidence of Iraqi involvement: all relevant agencies, including the FBI and the CIA, signed off on this conclusion. The paper was quickly returned by a deputy with a note saying, "Please update and resubmit." In April 2004, the White House at first denied Clarke's account of meeting with Bush but reversed its denial when others who had been present backed Clarke's version of the events.
Supporting Clarke's claim that intelligence forewarning of attacks had been delivered to the president prior to 9/11, former Deputy Attorney General Jamie Gorelick, the sole member of the 9/11 Commission permitted (under an agreement with the Bush administration) to read the President's Daily Brief, said that these had contained "an extraordinary spike" in intelligence warnings of al-Qaeda attacks that had "plateaued at a spike level for months" before 9/11.
Criticism
Before and after Clarke appeared before the 9/11 Commission, some critics tried to attack his credibility. They impugned his motives, claiming he was a disappointed job-hunter, that he sought publicity, and that he was a political partisan. They charged that he exaggerated perceived failures in the Bush administration's counterterrorism policies while exculpating the former Clinton administration from its perceived shortcomings.
According to some reports, the White House tried to discredit Clarke in a move described as "shooting the messenger." The New York Times economics columnist Paul Krugman was more blunt, calling the attacks on Clarke "a campaign of character assassination."
Some Republicans inside and outside the Bush administration questioned both Clarke's testimony and his tenure during the hearings. Senate Republican Majority Leader Bill Frist took to the Senate floor to make a speech alleging that Clarke told "two entirely different stories under oath", pointing to congressional hearing testimony Clarke gave in 2002 and his 9/11 Commission testimony. Frist later speculated to reporters that Clarke was trading on his former service as a government insider with access to the nation's most valuable intelligence to sell a book.
Clarke was criticized for suggesting in 1999 that intelligence indicated a link between Saddam Hussein and al-Qaeda, despite the fact that Clarke and others had concluded from investigations by 2001 that no link had been established. In Against All Enemies Clarke writes, "It is certainly possible that Iraqi agents dangled the possibility of asylum in Iraq before bin Laden at some point when everyone knew that the U.S. was pressuring the Taliban to arrest him. If that dangle happened, bin Laden's accepting asylum clearly did not," (p. 270). In an interview on March 21, 2004, Clarke claimed that "there's absolutely no evidence that Iraq was supporting al-Qaeda, ever." Clarke claimed in his book that this conclusion was understood by the intelligence community at the time of 9/11 and in the ensuing months, but top Bush administration officials were preoccupied with finding a link between Iraq and 9/11 in the months that followed the attack; thus, Clarke argued, the Iraq war distracted attention and resources from the war in Afghanistan and the hunt for Osama bin Laden.
Fox News, allegedly with the Administration's consent, identified and released a background briefing that Clarke gave in August 2002, at the Administration's request, to minimize the fallout from a Time magazine story about the President's failure to take certain actions before 9/11. In that briefing on behalf of the White House, Clarke stated "there was no plan on Al-Qaeda that was passed from the Clinton administration to the Bush administration," and that after taking office President Bush decided to "add to the existing Clinton strategy and to increase CIA resources, for example, for covert action, fivefold, to go after Al-Qaeda." At the next day's hearing, 9/11 Commission member James Thompson challenged Clarke with the 2002 account, and Clarke explained: "I was asked to make that case to the press. I was a special assistant to the President, and I made the case I was asked to make... I was asked to highlight the positive aspects of what the Administration had done and to minimize the negative aspects of what the Administration had done. And as a special assistant to the President, one is frequently asked to do that kind of thing. I've done it for several Presidents."
Another point of attack was Clarke's role in allowing members of the bin Laden family to fly to Saudi Arabia on September 20, 2001. According to Clarke's statements to the 9/11 Commission, a request was relayed to Clarke from the Saudi embassy to allow the members of the bin Laden family living in the U.S. to fly home. Clarke testified to the commission that he passed this decision in turn to the FBI via Dale Watson, and that the FBI at length sent its approval of the flight to the Interagency Crisis Management Group. However, FBI spokesman John Iannarelli denied that the FBI had a role in approving the flight: "I can say unequivocally that the FBI had no role in facilitating these flights."
Clarke has also exchanged criticism with Michael Scheuer, former chief of the Bin Laden Issue Station at the CIA. When asked to respond to Clarke's claim that Scheuer was "a hothead, a middle manager who really didn't go to any of the cabinet meetings," Scheuer returned the criticism as follows: "I certainly agree with the fact that I didn't go to the cabinet meetings. But I'm certainly also aware that I'm much better informed than Mr. Clarke ever was about the nature of the intelligence that was available against Osama bin Laden and which was consistently denigrated by himself and Mr. Tenet."
On March 28, 2004, at the height of the controversy during the 9/11 Commission Hearings, Clarke went on NBC's Sunday morning news show Meet the Press and was interviewed by journalist Tim Russert. In responding to and rebutting the criticism, Clarke challenged the Bush administration to declassify the whole record, including closed testimony by Bush administration officials before the Commission.
As of August 2017, Clarke had obtained large amounts of funding, notably $20 million for the Middle East Institute via the Emirates Center for Strategic Studies and Research (ECSSR), an Abu Dhabi-based think tank. The Middle East Institute had been propagating Emirati agendas in Washington and was mentioned in leaked emails of Yousef Al Otaiba, the Emirati ambassador to the US. The Intercept reported that Saif Mohamed Al Hajeri, CEO of Tawazun Holding L.L.C., had approved the money, a sum larger than the Middle East Institute's annual budget, on Otaiba's orders.
Post government career
Clarke is currently Chairman of Good Harbor Consulting and Good Harbour International, two strategic planning and corporate risk management firms; an on-air consultant for ABC News; and a contributor to the Good Harbor Report, an online community discussing homeland security, defense, and politics. He is an adjunct lecturer at the Harvard Kennedy School and a faculty affiliate of its Belfer Center for Science and International Affairs. He has also published two novels: The Scorpion's Gate (2005) and Breakpoint (2007).
Clarke wrote an op-ed for The Washington Post, titled "The Trauma of 9/11 Is No Excuse" (May 31, 2009), that was harshly critical of other Bush administration officials. Clarke wrote that he had little sympathy for fellow officials who seemed to want to use the excuse of being traumatized and caught unaware by al-Qaeda's attacks on the US, because their being caught unaware was due to their ignoring clear reports that a major attack on U.S. soil was imminent. Clarke particularly singled out former Vice President Dick Cheney and former Secretary of State Condoleezza Rice.
In April 2010, Clarke released his book Cyber War. In April 2012, he wrote a New York Times op-ed addressing cyber attacks. To stem cyber attacks carried out by foreign governments and foreign hackers, particularly from China, Clarke opined that the U.S. government should be authorized to "create a major program to grab stolen data leaving the country" in a fashion similar to how the U.S. Department of Homeland Security currently searches for child pornography that crosses America's "virtual borders." Moreover, he suggested that the US president could authorize agencies to scan Internet traffic outside the US and seize sensitive files stolen from within the United States. Clarke stated that such a policy would not endanger privacy rights, because a privacy advocate would be instituted who could stop abuses or any activity that went beyond halting the theft of important files. The op-ed did not offer evidence that finding and blocking files while they are being transmitted is technically feasible.
In September 2012, Clarke stated that Middle Eastern governments were likely behind hacking incidents against several banks. During the same year, he endorsed Barack Obama's reelection for President of the United States.
Following the 2013 high-speed fatal car crash of journalist Michael Hastings, a vocal critic of the surveillance state and restrictions on the press freedom under the Obama Administration tenure, Clarke was quoted as saying, "There is reason to believe that intelligence agencies for major powers—including the United States—know how to remotely seize control of a car. So if there were a cyber attack on the car—and I'm not saying there was, I think whoever did it would probably get away with it."
In 2013, Clarke served on an advisory group for the Obama administration as it sought to reform NSA spying programs following the revelations of documents released by Edward Snowden. The report's "Recommendation 30", on page 37, stated "...that the National Security Council staff should manage an interagency process to review on a regular basis the activities of the US Government regarding attacks, that exploit a previously unknown vulnerability in a computer application." Clarke told Reuters on April 11, 2014, that the NSA had not known of Heartbleed.
In a 2017 interview, Clarke described Russia's recent cyberattack against Ukraine, which spread worldwide via the ExPetr virus that posed as ransomware. He confidently warned that Russia would be back to interfere in the 2018 and 2020 U.S. elections, as the vulnerabilities demonstrated in the 2016 election still existed.
In August 2021, Clarke was named as a member of American facial recognition company Clearview AI's advisory board.
Written works
Against All Enemies: Inside America's War on Terror—What Really Happened (2004). Non-fiction book critical of past and present administrations for the way they handled the war on terror both before and after September 11, 2001. The book focuses much of its criticism on Bush for failing to take sufficient action to protect the country in the elevated-threat period before the September 11, 2001 attacks. Clarke also feels that the 2003 invasion of Iraq greatly hampered the war on terror and was a distraction from the real terrorists.
Defeating the Jihadists: A Blueprint for Action (2004). Non-fiction book in which Clarke outlines his idea of a more effective U.S. counterterrorism policy.
The Scorpion's Gate (2005). Novel.
Breakpoint (2007). Novel.
Your Government Failed You: Breaking the Cycle of National Security Disasters (2008). Non-fiction book.
Cyber War: The Next Threat to National Security and What to Do About It (2010), with Robert K. Knake. Non-fiction book.
"How China Steals Our Secrets" (2012). The New York Times. Op-ed.
Sting of the Drone (2014). Thomas Dunne Books. Novel.
Pinnacle Event (2015). Thomas Dunne Books. Novel.
Warnings: Finding Cassandras to Stop Catastrophes (2017), with R. P. Eddy. HarperCollins. Non-fiction book.
The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats (2019), with Robert K. Knake. Non-fiction book.
See also
Ramzi Yousef
Blue sky memo
934009 | https://en.wikipedia.org/wiki/List%20of%20Greek%20mythological%20creatures | List of Greek mythological creatures | A host of legendary creatures, animals and mythic humanoids occur in ancient Greek mythology.
Mythological creatures
Aeternae, creatures with bony, saw-toothed protuberances sprouting from their heads.
Alcyoneus, a giant.
Almops, a giant son of the god Poseidon and the half-nymph Helle.
Aloadae, a group of giants who capture the god Ares.
Amphisbaena, a serpent with a head at each end.
Arae, female daemons of curses, called forth from the underworld.
Argus or Argus Panoptes, a hundred-eyed giant.
Asterius, a giant.
Athos, a giant.
Basilisk
Briareus, a Hundred-hander.
Catoblepas, buffalo-like creature with shaggy fur, large horns and a heavy head whose toxic breath or ugly looks could kill.
Centaur and Centauride, creatures with the head and torso of a human and the body of a horse.
Centaurs
Agrius, one of the centaurs who fought with Heracles
Amycus, one of the centaurs who fought at the Centauromachy.
Asbolus, a centaur. He was a seer who read omens in the flight of birds.
Bienor, one of the centaurs who fought at the Centauromachy.
Centaurus, father of the centaurs.
Chiron, the ancient trainer of heroes such as Heracles.
Chthonius, a centaur who was killed by Nestor at the wedding of Pirithous and Hippodamia.
Cyllarus, one of the centaurs who fought at the Centauromachy.
Dictys, one of the centaurs who fought at the Centauromachy.
Elatus, a centaur killed by Heracles.
Eurynomos, one of the Centaurs who fought against the Lapiths at the wedding of Hippodamia.
Eurytion, two different Centaurs bearing the same name.
Eurytus, a centaur present at the wedding of Pirithous and Hippodamia, who caused the conflict between the Lapiths and the centaurs by trying to carry the bride off.
Hylaeus, a centaur who tried to rape Atalanta. She killed him.
Hylonome, a Centauride, wife of Cyllarus.
Nessus, famous centaur, killed by Heracles.
Perimedes, one of the centaurs who fought at the Centauromachy.
Phólos.
Pholus, a wise centaur and friend of Heracles.
Rhaecus, a centaur who tried to rape Atalanta and was killed by her.
Rhoetus, a centaur who fought and killed Lapiths at the Centauromachy.
Thaumas.
Cyprian centaurs, bull-horned centaurs native to the island of Cyprus.
Lamian centaurs or Lamian Pheres, twelve rustic spirits of the Lamos river. They were set by Zeus to guard the infant Dionysus, protecting him from the machinations of Hera but the enraged goddess transformed them into ox-horned centaurs. They accompanied Dionysus in his campaign against the Indians.
Aescaus
Amphithemis
Ceteus
Eurybios
Faunus
Gleneus
Nomeon
Orthaon
Petraeus
Phanes
Riphonus
Spargeus
Winged centaurs
Cerastes, spineless serpents with a set of ram-like horns on their heads.
Cerberus, a three-headed dog, pet of Hades.
Cetus or Ceto, sea monsters.
Ceuthonymus, daemon of the underworld. Father of Menoetius.
Charon, the ferryman of Hades, who transports the dead across the River Styx
Charybdis, a sea monster whose inhalations formed a deadly whirlpool; sometimes depicted as a huge watery maw.
Chimera, a fire-breathing, three-headed monster with one head of a lion, one of a snake, and another of a goat, lion claws in front and goat legs behind, and a long snake tail.
Chthonius, a giant.
Crocotta or Cynolycus, creature with the body of a stag, a lion's neck, cloven hooves, and a wide mouth with a sharp, bony ridge in place of teeth. It imitates the human voice, calls men by name at night, and devours those who approach it.
Cyclopes, one-eyed giants.
Arges, one of the children of Gaia and Uranus. Uranus locked him in Tartarus.
Brontes, one of the children of Gaia and Uranus. Uranus locked him in Tartarus.
Steropes, one of the children of Gaia and Uranus. Uranus locked him in Tartarus.
Polyphemus, son of Poseidon, who was outwitted and blinded by Odysseus.
Assistants of the god Hephaestus at his workshops.
Daemons
Agathodaemon
Cacodemon
Eudaemon
Daemones Ceramici, five malevolent spirits who plagued the craftsman potter.
Asbetos
Omodamos
Sabaktes
Smaragos
Syntribos
Damysus, the fastest of the giants.
Demogorgon
Diomedes of Thrace, was a giant, the son of Ares and Cyrene.
Dryad, tree spirits that look similar to women.
Echion, a giant.
Eidolon, spirit-image of a living or dead person; a shade or phantom look-alike of the human form.
Empusa, beautiful demonesses, with flaming hair and with one brass leg and the other one a donkey leg, who preyed on human blood and flesh.
Enceladus or Enkelados, a giant who battled Athena in the war against the gods.
Eurynomos, a daemon of rotting corpses dwelling in the Underworld.
Eurytus, a giant.
Gegenees, six-armed giants which were slain by the Argonauts.
Gello, a female demon or revenant who threatens the reproductive cycle by causing infertility, miscarriage, and infant mortality.
Geryon, a giant: according to Hesiod, Geryon had one body and three heads, whereas the tradition followed by Aeschylus gave him three bodies. A lost description by Stesichorus said that he had six hands and six feet and was winged; there are some mid-sixth-century Chalcidian vases portraying Geryon as winged. Some accounts state that he had six legs as well, while others state that the three bodies were joined to one pair of legs.
Ghosts, Shades, Spirits.
Gigantes, were a race of great strength and aggression. Archaic and Classical representations show Gigantes as human in form. Later representations show Gigantes with snakes for legs.
Gorgons, female monsters depicted as having snakes on their head instead of hair, and sometimes described as having tusks, wings and brazen claws.
Euryale, whose scream could kill.
Medusa, whose gaze could turn anyone to stone, killed by Perseus.
Stheno, the third gorgon sister
Graeae, three old women with one tooth and one eye among them. Also known as the Graeae sisters.
Deino
Enyo
Pemphredo
Griffin or Gryphon or Gryps or Grypes, a creature that combines the body of a lion and the head and wings of an eagle.
Harpies, creature with torso, head and arms of a woman, and talons, tail and wings (mixed with the arms) of a bird. Very small but can be vicious when provoked.
Aello
Celaeno
Ocypete
Hecatonchires, three giants of incredible strength and ferocity, each with a hundred arms; also called Centimanes.
Briareos or Aegaeon
Cottus
Gyges
Hippalectryon, a creature with the fore-parts of a horse and the hind-parts of a cockerel/rooster.
Hippocampus, a creature with the upper body of a horse and the lower body of a fish. Created by Poseidon when he offered them to Athens.
Hydras
Lernaean Hydra, A many-headed, serpent-like creature that guarded an Underworld entrance beneath Lake Lerna. It was destroyed by Heracles, in his second Labour. Son of Typhon and Echidna.
Ichthyocentaurs, a pair of marine centaurs with the upper bodies of men, the lower fronts of horses, and the tails of fish.
Aphros
Bythos
Ipotane, a race of half-horse, half-humans. The Ipotanes are considered the original version of the centaurs.
Keres, spirit of violent or cruel death.
Achlys, who may have been numbered amongst the Keres. She was represented on the shield of Heracles.
Kobaloi, a mischievous creature fond of tricking and frightening mortals.
Laestrygonians or Laestrygones, a tribe of giant cannibals.
Antiphates, King of the Laestrygonians.
Lion-Headed Giants
Leon or Lion, killed by Herakles in the war against the gods.
Manticore or Androphagos, having the body of a red lion and a human head with three rows of sharp teeth. The manticore can shoot spikes out of its tail, making it a deadly foe.
Merpeople, humans with a fish tail below the torso (mermaid for the female, merman for the male). They lure adventurers into the water and drown them.
Mimas, a giant.
Minotaur, a monster with the head of a bull and the body of a man; slain by Theseus in the Labyrinth created by Daedalus.
Multi-headed Dogs
Cerberus (Hellhound), the three-headed giant hound that guarded the gates of the Underworld.
Orthrus, a two-headed dog, brother of Cerberus, slain by Heracles.
Nymph
Odontotyrannos, a beast with a black, horse-like head, three horns protruding from its forehead, and a size exceeding that of an elephant.
Onocentaur, part human, part donkey. It had the head and torso of a human with the body of a donkey.
Ophiotaurus (Bull-Serpent), a creature part bull and part serpent.
Orion, a giant huntsman slain by Artemis, whom Zeus placed among the stars as the constellation Orion.
Ouroboros, an immortal self-eating, circular being. The being is a serpent or a dragon curled into a circle or hoop, biting its own tail.
Pallas, a giant.
Panes, a tribe of nature-spirits which had the heads and torsos of men, the legs and tails of goats, goatish faces and goat-horns.
Periboea, a Giantess. Daughter of the king of the giants.
Philinnion, unwed maiden who died prematurely and returned from the tomb as the living dead to consort with a handsome youth named Makhates. When her mother discovered the girl she collapsed back into death and was burned by the terrified townsfolk beyond the town boundaries.
Phoenix, a golden-red fire bird of which only one could live at a time, but would burst into flames to rebirth from ashes as a new phoenix.
Polybotes, a giant.
Porphyrion, a giant, king of the giants.
Satyrs and Satyresses, creatures with human upper bodies, and the horns and hindquarters of a goat. Some were companions of Pan and Dionysus.
Agreus
Ampelos
Marsyas
Nomios
Silenus or Papposilenus, companion and tutor to the wine god Dionysus.
Scylla, once a nereid, transformed by Circe into a many-headed, tentacled monster who fed on passing sailors in the straits between herself and Charybdis by plucking them off the ship and eating them.
Scythian Dracanae, upper body of a woman, lower body composed of two snake tails.
Sea goats, creatures having the back end of a fish and front parts of a goat.
Sirens, bird-like women whose irresistible song lured sailors to their deaths.
Skolopendra, giant sea monster said to be the size of a Greek trireme. It has a crayfish-like tail, numerous legs along its body which it uses like oars to move and extremely long hairs that protrude from its nostrils. Child of Phorcys and Keto.
Spartae, malevolent spirits born from violence. Jason, captain of the Argo, fought alongside these creatures after discovering that the dragon's teeth could create these violent spirits. Spartae are normally depicted as skeletal beings with some form of weapon and military attire.
Sphinx
Androsphinx or simply Sphinx, a creature with the head of a human and the body of a lion.
Criosphinx, a creature with head of a ram and the body of a lion.
Hieracosphinx, a creature with head of a hawk and the body of a lion.
Stymphalian birds, man-eating birds with beaks of bronze and sharp metallic feathers they could launch at their victims.
Tarandos, a rare animal with the size of an ox and the head of a deer. It could change the color of its hair according to the environment it was in, like a chameleon. It lived in the land of the Scythians. Solinus wrote about a similar creature in Aethiopia and called it Parandrus.
Taraxippi, ghosts that frightened horses.
Thoon, a giant.
Three-Bodied or Triple-Bodied Daemon, a winged monster with three human bodies ending in serpent-tails.
Tityos, a giant.
Triton, son of Poseidon and Amphitrite, half-man and half-fish.
Typhon or Typhoeus, a gigantic savage monster with snake-coils instead of limbs; father of several other monsters with his mate Echidna. He almost destroyed the gods but was foiled by Hermes and Zeus.
Unicorns or Monocerata, creatures as large as horses, or even larger with a large, pointed, spiraling horn projecting from their forehead.
Vampire Daemons/ Lamiai.
Corinthian Lamia, a vampiric demon who seduced the handsome youth Menippos in the guise of a beautiful woman to consume his flesh and blood.
Empousa, seductive female vampire demons with fiery hair, a leg of bronze and a donkey's foot. They are especially good at ensnaring men with their beauty before devouring them.
Lamia, a vampiric demon who by voluptuous artifices attracted young men, in order to enjoy their fresh, youthful, and pure flesh and blood.
Mormo or Mormolyceae or Mormolyce, a vampiric creature which preyed on children.
Mormolykeia, female underworld Daemons, attendants of the goddess Hecate.
Werewolf or Lycanthrope.
Agriopas, who tasted the viscera of a human child and was turned into a wolf for ten years.
Damarchus, a boxer from Parrhasia (Arcadia) who is said to have changed his shape into that of a wolf at the festival of Lykaia, he became a man again after ten years.
Lycaon, turned into a wolf by the gods as punishment for serving them his murdered son Nyctimus' flesh at a feast.
Winged Horses or Pterippi, winged horses.
Pegasus, a divine winged stallion that is pure white, son of Medusa and Poseidon, brother of Chrysaor and father of winged horses.
Ethiopian Pegasus, winged, horned horses native to Ethiopia.
Animals from Greek mythology
Birds
Acanthis (Carduelis)
Alectryon (Rooster). Alectryon was a youth, charged by Ares to stand guard outside his door while the god indulged in illicit love with Aphrodite. He fell asleep, and Helios, the sun god, walked in on the couple. Ares turned Alectryon into a rooster, which never forgets to announce the arrival of the sun in the morning.
Autonous (Stone-curlew)
Birds of Ares or Ornithes Areioi, were a flock of birds that guarded the Amazons' shrine of the god on a coastal island in the Black Sea. The Argonauts encountered them in their quest for the Golden Fleece.
Cranes
Gerana, a queen of the Pygmies who was transformed by the goddess Hera into a crane.
Oenoe.
Diomedes Birds, the companions of Diomedes, who were transformed into seabirds.
Eagles
Aethon or Caucasian Eagle, a giant eagle, offspring of Typhon and Echidna. Zeus condemned Prometheus to having his liver eaten by the Caucasian Eagle for giving the Flames of Olympus to the mortals.
Aetos Dios, giant golden eagle of Zeus.
Hippodamia (Lark)
Kingfisher
Alcyone, transformed by the gods into a halcyon bird; the genus Halcyon and the family Halcyonidae took their names from Alcyone.
Alkyonides, the seven daughters of Alcyoneus. When their father was slain by Heracles, they threw themselves into the sea, and were transformed into halcyons by Amphitrite.
Ceyx, transformed by the gods into a halcyon bird; the genus Ceyx took its name from him.
Nightingale
Aëdon
Procne
Owls
Little Owl, bird of goddess Athena.
Nyctimene
Screech Owl (Ascalaphus), bird of god Hades.
Philomela (Swallow)
Ravens/Crows
Cornix
Coronis
Corvus, a crow or raven which served Apollo. Apollo was about to make a sacrifice on the altar and needed water to perform the ritual. The god sent the raven to fetch water in his cup, but the bird got distracted by a fig tree and spent a few days lazily resting and waiting for the figs to ripen. After feasting on the figs, the raven finally brought Apollo the cup filled with water, along with a water snake (Hydra) as an excuse for being so late. Apollo saw through the raven's lies and angrily cast all three – the cup (Crater (constellation)), the water snake (Hydra (constellation)) and the raven (Corvus (constellation)) – into the sky. Apollo also cast a curse on the raven, scorching its feathers and making the bird eternally thirsty and unable to do anything about it. This, according to the myth, is how crows and ravens came to have black feathers and why they have such raspy voices.
Swans
Cycnus (Swan), a close friend of Phaethon; when Phaethon died, he sat by the river Eridanos mourning his death, and the gods turned him into a swan to relieve his grief.
Swans of Apollo, the swans drawing the chariot of Apollo.
Strix, birds of ill omen, product of metamorphosis, that fed on human flesh and blood.
Tereus (Hoopoe)
Boars
Calydonian Boar, a gigantic boar sent by Artemis to ravage Calydon. Was slain in the Calydonian Boar Hunt.
Clazomenae Boar, gigantic winged sow which terrorized the Greek town of Klazomenai in Ionia, Asia Minor.
Crommyonian Sow, a wild pig that ravaged the region around the village of Crommyon between Megara and Corinth, and was eventually slain by Theseus in his early adventures.
Erymanthian Boar, a gigantic boar which Heracles was sent to retrieve as one of his labors.
Bugs
Gadflies, mythical insects sent by the gods to sting wicked mortals for their cruel acts.
Myrmekes, large ants that can range in size from small dogs to giant bears which guarded a hill that had rich deposits of gold.
Myrmidons, ants which were transformed into humans.
Cattle
The Cattle of Geryon, magnificent cattle guarded by Orthrus.
The Cattle of Helios, immortal cattle of oxen and sheep owned by the sun titan Helios.
The black-skinned cattle of Hades, the cattle owned by Hades and guarded by Menoetes.
Cercopes, monkeys.
Cretan Bull/Marathonian Bull, was the bull Pasiphaë fell in love with, giving birth to the Minotaur.
Deer
Actaeon, Artemis turned him into a deer for spying on her while bathing. He was promptly eaten by his own hunting dogs.
Ceryneian Hind, an enormous deer which was sacred to Artemis; Heracles was sent to retrieve it as one of his labours.
Elaphoi Khrysokeroi, four immortal golden-horned deer sacred to the goddess Artemis.
Dionysus' Panthers, the panthers that draw the chariot of Dionysus.
Dogs/Hounds
Actaeon's dogs
Argos, Odysseus' faithful dog, known for his speed, strength and his superior tracking skills.
Golden Dog, a dog that guarded the infant god Zeus.
Guard Dogs of Hephaestus Temple, a pack of sacred dogs that guarded the temple of Hephaestus at Mount Etna.
Laelaps, a female dog destined always to catch its prey.
Maera, the hound of Erigone, daughter of Icarius of Athens.
Hellhounds
Dolphins
Delphin, a dolphin who found Amphitrite when Poseidon was looking for her. For this service, Poseidon placed him in the sky as the constellation Delphinus.
Dolphin that saved Arion.
Dolphins of Taras. A dolphin saved Taras, who is often depicted mounted on a dolphin.
Donkeys
Donkey of Hephaestus, Hephaestus was often shown riding a donkey.
Donkey of Silenus, Silenus rode a donkey.
Scythian horned donkeys, donkeys with horns that lived in Scythia; their horns could hold water from the river Styx.
Goats
Amalthea, golden-haired female goat, foster-mother of Zeus.
Horses
Anemoi, the gods of the four directional winds in horse-shape drawing the chariot of Zeus.
Boreas
Eurus
Notos
Zephyrus or Zephyr
Arion, the immortal horse of Adrastus, which could run at fantastic speeds. Was said to eat gold.
Horses of Achilles, immortal horses.
Balius
Xanthus
Horses of Ares, immortal fire-breathing horses of the god Ares.
Aethon
Konabos
Phlogeous
Phobos
Horses of Autonous.
Horses of Eos, a pair of immortal horses owned by the dawn-goddess, Eos.
Lampus
Phaethon
Horses of Erechtheus, a pair of immortal horses owned by the king of Athens, Erechtheus.
Podarkes
Xanthos
Horses of Dioskouroi, the immortal horses of the Dioskouroi.
Harpagos
Kyllaros
Phlogeus
Xanthos
Horses of Hector
Aethon
Lampus
Podargus
Xanthus
Horses of Helios, immortal horses of the sun-god Helios.
Abraxas
Aethon
Bronte
Euos
Phlegon
Pyrois
Sterope
Therbeeo
Horses of Poseidon, immortal horses of the god Poseidon.
Mares of Diomedes, four man-eating horses belonging to the giant Diomedes.
Dinus
Lampus
Podargus
Xanthus
Ocyrhoe, daughter of Chiron and Chariclo. She was transformed into a horse.
Trojan Horses or Trojan Hippoi, twelve immortal horses owned by the Trojan king Laomedon.
Karkinos or Carcinus, a giant crab which fought Heracles alongside the Lernaean Hydra.
Leopards
Ampelus, Claudius Aelianus in the "Characteristics of Animals" writes that there is a leopard called the Ampelus; it is not like other leopards and has no tail. If it is seen by women it afflicts them with an unexpected ailment.
Dionysus' Leopard: Dionysus is often shown riding a leopard.
Lions
Nemean Lion, a gigantic lion whose skin was impervious to weapons; it was strangled by Heracles.
Lion of Cithaeron, a lion which was killed by Heracles or by Alcathous.
Rhea's Lions, the lions drawing the chariot of Rhea.
Snakes
Gigantic snakes of Libya, according to Diodorus, Amazons used the skins of large snakes for protective devices, since Libya had such animals of incredible size.
Snakes of Hera, Hera sent two big snakes to kill Herakles when he was an infant.
Water-snake, the snake (Hydra) which Apollo's raven brought back as an excuse for being late in fetching water for a sacrifice. Apollo saw through the raven's lies and angrily cast the cup (Crater (constellation)), the water snake (Hydra (constellation)) and the raven (Corvus (constellation)) into the sky.
Teumessian fox, a gigantic fox destined never to be hunted down.
Tortoises/Turtles
Giant turtle: Sciron robbed travelers passing the Sceironian Rocks and forced them to wash his feet. When they knelt before him, he kicked them over the cliff into the sea, where they were eaten by the giant sea turtle. Theseus killed him in the same way.
Tortoise from which Hermes created his tortoise-shell lyre: when Hermes was a mere babe, he found a tortoise, killed it, and, stretching seven strings across the empty shell, invented the lyre.
Zeus and the Tortoise
Dragons
The dragons of Greek mythology were serpentine monsters. They include the serpent-like Drakons, the marine-dwelling Cetea and the she-monster Dracaenae. Homer describes the dragons with wings and legs.
The Colchian Dragon, an unsleeping dragon which guarded the Golden Fleece.
Cychreides, a dragon which terrorised Salamis before being slain, tamed or driven out by Cychreus.
Delphyne, female dragon.
Demeter's dragons, a pair of winged dragons that drew Demeter's chariot and, after having been given as a gift, that of Triptolemus.
Giantomachian dragon, a dragon that was thrown at Athena during the Giant war. She threw it into the sky where it became the constellation Draco.
The Ismenian Dragon, a dragon which guarded the sacred spring of Ares near Thebes; it was slain by Cadmus.
Ladon, a serpent-like dragon which guarded the golden apples of immortality of the Hesperides.
Lernaean Hydra, also known as King Hydra, a many-headed, serpent-like creature that guarded an Underworld entrance beneath Lake Lerna. It was destroyed by Heracles, in his second Labour. Son of Typhon and Echidna.
Maeonian Drakon, a dragon that lived in the kingdom of Lydia and was killed by Damasen.
Medea's dragons, a pair of flying dragons that pulled Medea's chariot. Born from the blood of the Titans.
Nemean dragon, a dragon that guarded Zeus' sacred grove in Nemea.
Ophiogenean dragon, a dragon that guarded Artemis' sacred grove in Mysia.
Pitanian dragon, a dragon in Pitane, Aeolis, that was turned to stone by the gods.
Pyrausta, a four-legged insect with filmy wings and a dragon's head.
Python, a dragon which guarded the oracle of Delphi; it was slain by Apollo.
Rhodian dragons, serpents that inhabited the island of Rhodes; they were killed by Phorbus.
Thespian dragon, a dragon that terrorized the city of Thespiae in Boeotia.
Trojan dragons, a pair of dragons or giant serpents from Tenedos sent by various gods to kill Laocoön and his sons in order to stop him from telling his people that the Wooden Horse was a trap.
Drakons
Drakons ("δράκους" in Greek, "dracones" in Latin) were giant serpents, sometimes possessing multiple heads or able to breathe fire (or even both), but most just spit deadly poison. They are usually depicted without wings.
The Ethiopian Dragon was a breed of giant serpent native to the lands of Ethiopia. They killed elephants, and rivaled the longest-lived animals. They are mentioned in the work of Aelian, On the Characteristics of Animals.
The Indian Dragon was a breed of giant serpent which could fight and strangle the elephants of India.
The Laconian Drakon was one of the most fearsome of all the drakons.
Cetea
Cetea were sea monsters. They were usually featured in myths of a hero rescuing a sacrificial princess.
The Ethiopian Cetus was a sea monster sent by Poseidon to ravage Ethiopia and devour Andromeda. It was slain by Perseus.
The Trojan Cetus was a sea monster that plagued Troy before being slain by Heracles.
Dracaenae
The Dracaenae were monsters that had the upper body of a beautiful woman and the lower body of any sort of dragon. Echidna, the mother of monsters, and Ceto, the mother of sea-monsters, are two famous dracaenae. Some dracaenae were known to have one (or two) serpent tails in place of legs.
Campe, a dracaena that was charged by Cronus with the job of guarding the gates of Tartarus; she was slain by Zeus when he rescued the Cyclopes and Hecatoncheires from their prison.
Ceto (or Keto), a marine goddess who was the mother of all sea monsters as well as Echidna and other dragons and monsters.
Echidna, wife of Typhon and mother of monsters.
Poena, a dracaena sent by Apollo to ravage the kingdom of Argos as punishment for the death of his infant son Linos; killed by Coraebus.
Scylla, a dracaena that was the lover of Poseidon, transformed by Circe into a multi-headed monster that fed on sailors on vessels passing between her and Charybdis.
Scythian Dracaena, the Dracaena queen of Scythia; she stole Geryon's cattle that Heracles was herding through the region and agreed to return them on condition he mate with her.
Sybaris, a dracaena that lived on a mountain near Delphi, eating shepherds and passing travellers; she was pushed off the cliff by Eurybarus.
Automatons
Automatons, or Colossi, were men/women, animals and monsters crafted out of metal and made animate in order to perform various tasks. They were created by the divine smith, Hephaestus. The Athenian inventor Daedalus also manufactured automatons.
The Hippoi Kabeirikoi, four bronze horse-shaped automatons crafted by Hephaestus to draw the chariot of the Cabeiri.
The Keledones, singing maidens sculpted out of gold by Hephaestus.
The Khalkotauroi also known as the Colchis Bulls, fire-breathing bulls created by Hephaestus as a gift for Aeëtes.
The Kourai Khryseai, golden maidens sculpted by Hephaestus to attend him in his household.
Talos, a giant man made out of bronze to protect Europa.
Mythic humanoids
Acephali/Headless men (Greek ἀκέφαλος akephalos, plural ἀκέφαλοι akephaloi, from ἀ- a-, "without", and κεφαλή kephalé, "head") are humans without a head, with their mouths and eyes being in their breasts.
Amazons, a nation of all-female warriors.
Aegea, a queen of the Amazons.
Aella (Ἄελλα), an Amazon who was killed by Heracles.
Alcibie (Ἀλκιβίη), an Amazonian warrior, killed by Diomedes at Troy.
Alke (Ἁλκή), an Amazonian warrior
Antandre (Ἀντάνδρη), an Amazonian warrior, killed by Achilles at Troy.
Antiope (Ἀντιόπη), a daughter of Ares and sister of Hippolyta.
Areto (Ἀρετώ), an Amazon.
Asteria (Ἀστερία), an Amazon who was killed by Heracles.
Bremusa (Βρέμουσα), an Amazonian warrior, killed by Idomeneus at Troy.
Celaeno (Κελαινώ), an Amazonian warrior, killed by Heracles.
Eurypyle (Εὐρυπύλη), an Amazon leader who invaded Ninus and Babylonia.
Hippolyta (Ἱππολύτη), a queen of Amazons and daughter of Ares.
Hippothoe (Ἱπποθόη), an Amazonian warrior, killed by Achilles at Troy.
Iphito (Ἰφιτώ), an Amazon who served under Hippolyta.
Lampedo (Λαμπεδώ), an Amazon queen who ruled with her sister Marpesia.
Marpesia (Μαρπεσία), an Amazon queen who ruled with her sister Lampedo.
Melanippe (Μελανίππη), a daughter of Ares and sister of Hippolyta and Antiope.
Molpadia (Μολπαδία), an Amazon who killed Antiope.
Myrina (Μύρινα), a queen of the Amazons.
Orithyia (Ὠρείθυια), an Amazon queen.
Otrera (Ὀτρήρα), an Amazon queen, consort of Ares and mother of Hippolyta.
Pantariste (Πανταρίστη), an Amazon who fought with Hippolyta against Heracles.
Penthesilea (Πενθεσίλεια), an Amazon queen who fought in the Trojan War on the side of Troy.
Thalestris (Θάληστρις), a queen of the Amazons.
Anthropophage, mythical race of cannibals.
Arimaspi, a tribe of one-eyed men.
Astomi, race of people who had no need to eat or drink anything at all.
Atlantians, people of Atlantis.
Bebryces, a tribe of people who lived in Bithynia
Chalybes, a Georgian tribe of Pontus and Cappadocia in northern Anatolia.
Curetes, legendary people who took part in the quarrel over the Calydonian Boar.
Cynocephaly, dog-headed people.
Dactyls, mythical race of small phallic male beings.
Acmon
Gargareans, were an all-male tribe.
Halizones, people that appear in Homer's Iliad as allies of Troy during the Trojan War.
Hemicynes, half-dog people.
Hyperboreans, mythical people who lived "beyond the North Wind".
Korybantes, were armed and crested dancers.
Lapiths
Corythus, a Lapith killed by the centaur Rhoetus at the Centauromachy.
Crantor
Dryas, a Lapith who fought against the centaurs at the Centauromachy.
Elatus, a Lapith chieftain of Larissa.
Euagrus or Evagrus, a Lapith killed by the centaur Rhoetus at the Centauromachy.
Ixion, king of the Lapiths.
Pirithous, king of the Lapiths.
Lotus-eaters, people living on an island dominated by lotus plants. The lotus fruits and flowers were the primary food of the island and were narcotic, causing the people to sleep in peaceful apathy.
Machlyes, hermaphrodites whose bodies were male on one side and female on the other.
Minyans
Monopodes or Skiapodes, a tribe of one-legged Libyan men who used their gigantic foot as shade against the midday sun.
Myrmidons, legendary warriors commanded by Achilles.
Panotii, a tribe of northern men with gigantic, body-length ears.
Pygmies, a tribe of one and a half foot tall African men who rode goats into battle against migrating cranes.
Gerana
Oenoe
Spartoi, mythical warriors who sprang up from the dragon's teeth.
Telchines
Troglodytae
Deified human beings
In addition to the famous deities, the ancient Greeks also worshiped a number of deified human beings: for example, Alabandus was worshiped at Alabanda, Tenes at Tenedos, and Leucothea and her son Palaemon throughout Greece.
See also
List of Greek mythological figures - primordial deities, Titans, Olympians, Moirai, Charites, Muses, Nymphs and others
List of minor Greek mythological figures
List of legendary creatures
List of legendary creatures by type
References
Sources
Morford, Mark; Robert Lenardon (2003). Classical Mythology (7 ed.). New York: Oxford University Press.
2052267 | https://en.wikipedia.org/wiki/List%20of%20University%20of%20Illinois%20Urbana-Champaign%20people | List of University of Illinois Urbana-Champaign people | This is a list of notable people affiliated with the University of Illinois Urbana-Champaign, a public research university in Illinois.
Notable alumni
Not all listed alumni graduated from the university; those who did not are noted where the information is known.
Nobel Prize winners
Edward Doisy, B.S. 1914, M.S. 1916 – Physiology or Medicine, 1943
Vincent Du Vigneaud, B.S. 1923, M.S. 1924 – Chemistry, 1955; also served as faculty member
Robert W. Holley, B.A. 1942 – Physiology or Medicine, 1968
Jack Kilby, B.S. 1947 – Physics, 2000; inventor of the integrated circuit
Edwin G. Krebs, B.A. 1940 – Physiology or Medicine, 1992
Polykarp Kusch, M.S. 1933, Ph.D. 1936 – Physics, 1955
John Schrieffer, M.S. 1954, Ph.D. 1957 – Physics, 1972; also served as faculty member
Phillip Sharp, Ph.D. 1969 – Chemistry, 1993
Wendell Stanley, M.S. 1927, Ph.D. 1929 – Chemistry, 1946
Rosalyn Yalow, M.S. 1942, Ph.D. 1945 – Physiology or Medicine, 1977
Pulitzer Prize winners
Leonora LaPeter Anton, B.S. 1986 – Investigative Journalism, 2016
Barry Bearak, M.S. 1974 – International Reporting, 2002
Michael Colgrass, B.A. 1956 – Music, 1978
George Crumb, M.A. 1952 – Music, 1968
David Herbert Donald, M.A. 1942, Ph.D. 1946 – Biography, 1961 and 1988
Roger Ebert, B.S. 1964 – Criticism, 1975
Roy J. Harris, B.A. 1925 – Public Service, 1950
Beth Henley – Drama, 1981
Hugh F. Hough, B.S. 1951 – Local General or Spot News Reporting, 1974
Paul Ingrassia, B.S. 1972 – Beat Reporting, 1993
Allan Nevins, B.A. 1912, M.A. 1913 – Biography, 1933 and 1937
Richard Powers, B.A. 1978, M.A. 1980 – Fiction, 2019
James Reston, B.S. 1932 – National Reporting, 1945 and 1957
Robert Lewis Taylor, B.A. 1933 – Fiction, 1959
Carl Van Doren, B.A. 1907 – Biography, 1939
Mark Van Doren, B.A. 1914 – Poetry, 1940
Academia
Notable professors and scholars
Warren Ambrose, B.S. 1935, M.S. 1936, Ph.D. 1939 – mathematics; Professor Emeritus of Mathematics at MIT, often considered one of the fathers of modern geometry
Icek Ajzen, M.A. 1967, Ph.D. 1969 – social psychology; Professor Emeritus at the University of Massachusetts Amherst; known for his work on the theory of planned behavior; one of the most influential social psychologists, with over 350,000 Google Scholar citations as of 2021
Steven Bachrach, B.S., Ph.D. (University of California, Berkeley) – Dean of Science at Monmouth University, previously the Dr D. R. Semmes Distinguished Professor of Chemistry at Trinity University in San Antonio, Texas
George C. Baldwin, Ph.D. 1943 – theoretical and experimental physicist and Professor of Nuclear Engineering, at General Electric Company, Rensselaer Polytechnic Institute, and Los Alamos National Laboratory
Nancy Baym, M.A. 1988, Ph.D. 1994 – Professor of Communication Studies at the University of Kansas
Arnold O. Beckman, B.S. 1922, M.S. 1923 – former Professor of Chemistry at Caltech
Colin J. Bennett, Ph.D. 1986 – Professor of Political Science at the University of Victoria
Saint Elmo Brady, Ph.D. 1916 – notable HBCU educator, first African American to obtain a Ph.D. degree in chemistry in the United States
Roger Crossgrove, M.F.A. 1951 – Professor of Art Emeritus at the University of Connecticut
Paul S. Dunkin, M.A. 1931, B.S. 1935, Ph.D. 1937 – Professor Emeritus of Library Services at Rutgers University
Abdul Haque Faridi, Bangladeshi academic
Gerald R. Ferris, Ph.D. – Francis Eppes Professor of Management and professor of psychology at Florida State University
Jessica Greenberg – assistant professor of Anthropology and Russian, East European, and Eurasian Studies
Allan Hay, Ph.D. 1955 – Tomlinson Emeritus Professor of Chemistry at McGill University
Nick Holonyak, Jr., B.S. 1950, M.S. 1951, Ph.D. 1954 – inventor of the first visible-spectrum light-emitting diode; John Bardeen Endowed Chair Emeritus in Electrical and Computer Engineering and Physics at UIUC (the chair's namesake, John Bardeen, twice won the Nobel Prize in Physics, for the transistor and for the BCS theory of superconductivity); member of the National Academy of Engineering for contributions to the development of semiconductor controlled rectifiers, light-emitting diodes, and diode lasers
John Honnold, William A. Schnader Professor of Commercial Law at University of Pennsylvania Law School
A.C. Littleton, B.S. 1912, M.S. 1918, Ph.D. 1931 – professor and accounting historian at the University of Illinois; editor-in-chief of The Accounting Review; Accounting Hall of Fame inductee
Douglas A. Melton, B.S. – biologist, Xander University Professor at Harvard University
Jennifer Mercieca, Ph.D. – American rhetorical scholar and professor at Texas A&M University; author of Demagogue for President: The Rhetorical Genius of Donald Trump
Michael Moore – professor of theoretical physics at the University of Manchester
James Purdy, scholar of digital rhetoric
Nora C. Quebral, Ph.D. – proponent of the development communication discipline; Professor Emeritus of development communication at University of the Philippines Los Baños
Mark Reckase, University Distinguished Professor Emeritus of Michigan State University
Maurice H. Rees, Medical educator and Dean of University of Colorado School of Medicine from 1925 to 1945
Bernard Rosenthal, Ph.D. 1968 – Professor Emeritus of English at Binghamton University.
Roy Vernon Scott M.A. 1953, Ph.D. 1957 – Professor Emeritus of History at Mississippi State University
Guy Standing, M.A. 1972 – Professor of Development Studies at the School of Oriental and African Studies (SOAS), University of London
Dewey Stuit, American educational psychologist; dean of the College of Arts at the University of Iowa from 1948 to 1977
Clyde Summers B.S. 1939, J.D. 1942, labor lawyer and law professor at the Yale Law School and University of Pennsylvania Law School, subject of In re Summers
Maurice Cole Tanquary, A.B. 1907, M.A. 1908, Ph.D. 1912 – Professor of Entomology at several universities and member of the Crocker Land Expedition
James Thomson B.S. 1981 – Professor of Microbiology, University of Wisconsin – Madison
Janis Driver Treworgy, M.A. 1983, Ph.D. 1985 – American academic and sedimentary geologist
College presidents and vice-presidents
Dr. Benjamin Allen – President, University of Northern Iowa
John L. Anderson M.S., Ph.D. – eighth president, Illinois Institute of Technology; former Provost, Case Western Reserve University
Robert M. Berdahl M.A. – President of American Association of Universities, former Chancellor of UC Berkeley, former President of University of Texas at Austin
Warren E. Bow M.A. - President of Wayne State University
Alvin Bowman Ph.D. – President, Illinois State University
Tom Buchanan Ph.D. – twenty-third president, University of Wyoming
David L. Chicoine Ph.D. – President, South Dakota State University
Coching Chu, B.S. 1913 – sixteenth president, Zhejiang University (National Chekiang University period); former vice president, Chinese Academy of Sciences
Ralph J. Cicerone M.S. 1967, Ph.D. 1970 – President, National Academy of Sciences, former Chancellor of UC Irvine
Lewis Collens B.S., M.A. – seventh president, Illinois Institute of Technology
John E. Cribbet J.D. – legal scholar, Dean of the University of Illinois College of Law, and Chancellor of the University of Illinois
Lois B. DeFleur, Ph.D. – President, Binghamton University, former Provost of University of Missouri
W. Kent Fuchs M.S. 1982, Ph.D. 1985 – twelfth president, University of Florida
Philip Handler Ph.D. 1939 – President, National Academy of Sciences
Tori Haring-Smith, Ph.D. – President, Washington & Jefferson College
Freeman A. Hrabowski III M.A., Ph.D. – President, University of Maryland, Baltimore County
Emil Q. Javier B.S. 1964 – seventeenth president, University of the Philippines
Alain E. Kaloyeros, Ph.D. 1987 – first president, State University of New York Polytechnic Institute
Robert W. Kustra Ph.D. – President, Boise State University
Ray A. Laird B.S. 1932 – President of Laredo Community College in Laredo, Texas, 1960 to 1974; born in Milford, Illinois, in 1907
Judy Jolley Mohraz Ph.D. 1974 – ninth president, Goucher College
John Niland, Ph.D. 1970 – fourth president, University of New South Wales, Australia
J. Wayne Reitz M.S. 1935 – fifth president, University of Florida
Steven B. Sample B.S. 1962, M.S. 1963, Ph.D. 1965 – tenth president, University of Southern California
David J. Schmidly Ph.D., – twentieth president, University of New Mexico
Michael Schwartz, B.S. 1958, M.A. 1959, Ph.D. 1962 – President of Cleveland State University
James J. Stukel M.S. 1963, Ph.D. 1968 – fifteenth president, University of Illinois
William D. Underwood J.D. – eighteenth president, Mercer University
Marvin Wachman Ph.D. – President, Temple University, former President of Lincoln University
Herman B Wells – President, Indiana University
Chen Xujing – Vice President, Nankai University and Zhongshan University; President, Lingnan University and Jinan University
College provosts and vice provosts
Joseph A. Alutto M.A. – Provost, Ohio State University
Richard C. Lee Ph.D. – Vice Provost, University of Nevada, Las Vegas
Architecture
Max Abramovitz, B.S. 1929 – architect on many campus and prominent international buildings including the United Nations Building, Assembly Hall (Champaign) and the Avery Fisher Hall at Lincoln Center in New York City
Henry Bacon – architect of the Lincoln Memorial in Washington, D.C.
Temple Hoyne Buell – architect for the first American central mall
Jeanne Gang, B.S. 1986 – architect
Ralph Johnson, B. Arch 1971 – principal architect of the Perkins+Will
David Miller, M. Arch 1972 – principal architect of the Miller/Hull partnership, FAIA
César Pelli, M. Arch. 1954 – architect for the Petronas Twin Towers
William Pereira, M. Arch. 1930 – notable mid-20th century American architect in Los Angeles, known for Transamerica Pyramid and Geisel Library
Nathan Clifford Ricker, D. Arch. 1871 – first architect to receive a degree in architecture from an American institution
Patricia Saldaña Natke, B. Arch 1986 – architect
William L. Steele – architect of the Prairie School during the early-twentieth century
Ralph A. Vaughn (1907–2000) – academic, architect and film set designer; founded the Pi Psi chapter of Omega Psi Phi
Art
Mark Staff Brandl, B.F.A. 1978 – artist, art historian and critic
Christopher Brown, B.F.A. 1973 – painter, printmaker, and professor
Annie Crawley – underwater photographer
Greg Drasler, B.F.A. 1980; M.F.A. 1983 – artist and educator
Leslie Erganian – artist and writer
Hart D. Fisher, B.A. 1992 – comic book creator and comics publisher
Tom Goldenberg, B.F.A. 1970 – artist and educator
David Klamen, B.F.A. 1983 – artist and academic
Susan Rankaitis, B.F.A. 1971 – artist
Angela M. Rivers, B.F.A. 1975 – artist and art curator
Leo Segedin, B.F.A. 1948; M.F.A. 1950 – artist and educator
Deb Sokolow, B.A. 1996 – artist
Lorado Taft – sculptor, writer and educator
Charles H. Traub, B.A. – photographer and educator
Vivian Zapata – painter; official artist of the 2005 Latin Grammys
Barbara Zeigler – artist
Astronauts
Scott Altman, B.S. 1981
Lee J. Archambault, B.S. 1982, M.S. 1984
Dale A. Gardner, B.S. 1970
Michael S. Hopkins, B.S. 1992
Steven R. Nagel, B.S. 1969
Joseph R. Tanner, B.S. 1973
Business
Irving Azoff, attended – CEO of Ticketmaster (2008–present); executive chairman of Live Nation Entertainment
Sunil Benimadhu, M.B.A. 1992 – chief executive officer of the Stock Exchange of Mauritius (2002–present)
Jim Cantalupo, 1966 – chairman and chief executive officer of McDonald's (1991–2004)
Stephen Carley, A.B. circa 1973 – chief executive officer of El Pollo Loco, former president and chief operating officer of Universal City Hollywood
Jerry Colangelo, B.S. 1962 – president and chief executive officer of Phoenix Suns; managing general partner of Arizona Diamondbacks
Jon Corzine, A.B. 1969 – chairman and chief executive officer of Goldman Sachs (1994–1999), cross listed in Politics section
Bob Dudley, B.S. – managing director and chief executive officer-designate of BP
Martin Eberhard, 1960 – co-founder and chief executive officer of Tesla Motors
George T. Felbeck, B.S.M.E. 1919, M.S.M.E. 1921 – president of Union Carbide (1944–1962)
George M.C. Fisher, 1962 – chief executive officer of Eastman Kodak (1993–2000)
Ravin Gandhi – founder of GMM Nonstick Coatings
John Georges, 1951 – chief executive officer of International Paper (1985–1996)
Harry Gray, 1941 – chief executive officer of United Technologies (1974–1986)
E.B. Harris, 1935 – president of the Chicago Mercantile Exchange
Robert L. Johnson – founder of Black Entertainment Television; principal owner of the Charlotte Bobcats
Pete Koomen, M.S. 2006 – co-founder of Optimizely
Bruce Krasberg, 1930 – business executive and horticulturist
Michael P. Krasny, B.S. 1975 – founder and chairman emeritus of CDW
Arvind Krishna, M.S. 1987, Ph.D. 1990 – chief executive officer of IBM
Stephen McLin, B.S. 1968 – former Bank of America executive
Christopher Michel, B.A. 1990 – founder and chief executive officer of Military.com (1999–2007); founder and chief executive officer of Affinity Labs
Steven L. Miller, B.S. 1967 – chief executive officer of Shell Oil (1999–2002)
Tom Murphy, B.S. 1938 – chairman of General Motors
Jim Oberweis – chairman of Oberweis Dairy
Ron Popeil – attended (left after one year) – inventor of the infomercial
C. W. Post – attended (left after two years) – breakfast cereal magnate
Jasper Sanfilippo, Sr. – businessman and industrialist who built his family's nut company, John B. Sanfilippo & Son, Inc., into one of the largest in the world
Abe Saperstein – creator of the Harlem Globetrotters
Steve Sarowitz (born 1965/1966) – billionaire founder of Paylocity
Reshma Saujani – founder and CEO of Girls Who Code
Therese Tucker – founder and CEO of BlackLine
Barbara Turf – CEO of Crate & Barrel (2008–2012)
Jack Welch, M.S. 1959, Ph.D. 1961 – chief executive officer of General Electric (1981–2001)
C. E. Woolman, 1912 – founder of Delta Air Lines
Yi Gang, Ph.D. 1986 – director of State Administration of Foreign Exchange
John D. Zeglis, B.S. 1969 – former president of AT&T; former chairman and chief executive officer of AT&T Wireless
Marcin Kleczynski, B.S. 2012 – founder and CEO of Malwarebytes
Engineering and technology
Shoaib Abbasi, B.S. 1980, M.S. 1980 – president and chief executive officer of Informatica
Harlan Anderson, B.S., M.S. – computer pioneer and founder of Digital Equipment Corporation
Marc Andreessen, B.S. 1993 – co-creator of Mosaic, co-founder of Netscape, currently co-founder of venture-capital firm Andreessen Horowitz
Bruce Artwick, M.S. 1976 – creator of Microsoft Flight Simulator
William F. Baker, M.S. 1980 – best known for being the structural engineer of Burj Khalifa, the world's tallest man-made structure
Ken Batcher, Ph.D. 1969 – ACM/IEEE Eckert-Mauchly Award winner for work on parallel computers
Arnold O. Beckman, B.S. 1922, M.S. 1923 – inventor of the pH meter, founder of Beckman Instruments; major donor to the university which included a gift to found the Beckman Institute; namesake of the Beckman Quadrangle
Eric Bina, B.S. 1986, M.S. 1988 – co-creator of Mosaic and among the first employees of Netscape
Donald Bitzer, B.S. 1955, M.S. 1956, Ph.D. 1960 – winner of a 2003 Emmy Award in Technical Achievement for the invention of the plasma display
Ed Boon, B.S. 1986 – creator of the Mortal Kombat video game series
Keith Brendley, B.S. 1980 – leading authority on active protection systems and president of Artis, a research and development company
Mike Byster, 1981 – mental calculator, mathematician
Steve Chen – co-founder of YouTube
Ven Te Chow, Ph.D. – professor of hydrology
John Cioffi, B.S. 1978 – father of DSL (broadband internet connection), Marconi Prize winner, founder of Amati Communications (sold to Texas Instruments), IEEE Fellow
Alan M. Davis, M.S. 1973, Ph.D. 1975 – IEEE Fellow for contributions to software engineering, author, entrepreneur
Lemuel Davis, M.S. – software engineer in the field of computer animation; winner of a 1992 Academy of Motion Picture Arts and Sciences Scientific and Engineering Award
James DeLaurier, B.S. – designed the first microwave-powered aircraft, the first engine-powered ornithopter, and the first human-carrying ornithopter
Daniel W. Dobberpuhl, B.S. 1967 – creator of Alpha and StrongARM microprocessors at DEC
Steve Dorner, B.S. 1983 – creator of Eudora
Russell Dupuis, B.S. 1970, M.S. 1971, Ph.D. 1972 – professor at the Georgia Institute of Technology; co-recipient of the 2002 National Medal of Technology; awarded the 2007 IEEE Edison Medal; pioneer in metalorganic chemical vapor deposition and the commercialization of LEDs
Brendan Eich, M.S. 1986 – creator of JavaScript; chief technology officer of Mozilla Corporation
Larry Ellison, attended (left after sophomore year) – founder of Oracle Corporation
Michael Hart, B.A. 1973 – founder of Project Gutenberg
Tomlinson Holman, B.S. 1968 – creator of THX, professor at the USC School of Cinematic Arts
John C. Houbolt, B.S. 1940, M.S. 1942 – retired NASA engineer who successfully promoted lunar orbit rendezvous for Apollo Space Program
Jawed Karim, B.S. 2004 – co-founder of YouTube
Fazlur Khan, Ph.D. 1955 – designer and builder of the Sears Tower, the tallest building in the world when it was built in 1973
Shahid Khan, B.S. 1971 – owner of Flex-N-Gate Corp.; owner of Jacksonville Jaguars
Ed Krol – author of Whole Internet User's Guide and Catalog
Chris Lattner – author of LLVM and related projects, such as the Clang compiler and the Swift programming language; joined Tesla Motors in early 2017 as vice president of Autopilot software
Max Levchin, B.S. 1997 – co-founder of PayPal
Jenny Levine, M.L.I.S. 1992 – evangelist for library technology and American Library Association Internet strategist
Russel Simmons – co-founder and chief technical officer of Yelp!
Bob Miner, B.A. (mathematics) 1963 – co-founder of Oracle Corporation
Ray Ozzie, B.S. 1979 – creator of Lotus Notes; founder of Iris Associates and Groove Networks; former chief software architect of Microsoft
Anna Patterson, Ph.D. 1988 – Vice President of Engineering, Artificial Intelligence at Google and co-founder of Cuil
Cecil Peabody – writer, graduate of MIT (1877) and professor at MIT
Jerry Sanders, B.S. 1958 – co-founder and former chief executive officer of Advanced Micro Devices
Peter Shirley, Ph.D. 1991 – Distinguished Scientist at NVIDIA recognized for contributions to real time ray tracing
Thomas Siebel, B.A. 1975, M.B.A. 1983, M.S. 1985 – founder of Siebel Systems
H. Gene Slottow, Ph.D. 1964 – winner of a 2003 Emmy Award in Technical Achievement for the invention of the plasma display
Nadine Barrie Smith, B.S. 1985, M.S. 1989, Ph.D. 1996 – biomedical researcher in therapeutic ultrasound
Jeremy Stoppelman – co-founder and chief executive officer of Yelp!
Bill Stumpf – designer of the Aeron and Ergon ergonomic chairs
Parisa Tabriz – head of security at Google Chrome
Mark Tebbe, B.S. 1983 – co-founder of Lante Corporation and Answers.com
Craig Vetter, B.F.A. Industrial Design c. 1966 – founder of Vetter Fairing Company and Motorcycle Hall of Fame inductee
Kevin Warwick – Senior Beckman Fellow, 2004 – cyborg scientist, University of Reading
Journalism and non-fiction broadcasting
Jabari Asim, scholar-in-residence 2008–2010 – former editor-in-chief of The Crisis, The Washington Post Book World deputy editor, columnist; author
Dan Balz, B.A. 1968, M.A. 1972 – Washington Post national political reporter and editor; author
Claudia Cassidy, 1921 – Chicago Tribune music and drama critic
John Chancellor – political analyst and newscaster for NBC Nightly News
Roger Ebert, B.S. 1964 – film critic
Sean Evans – host of YouTube series Hot Ones
Bill Geist, 1968 – CBS News correspondent
Robert Goralski, 1949 – NBC News correspondent
Bob Grant – radio talk show personality
Steven Hager – editor of High Times and founder of the Cannabis Cup
Herb Keinon – columnist and journalist for The Jerusalem Post
Frederick C. Klein, B.A. 1959 – sportswriter for The Wall Street Journal and author
Will Leitch – writer and founding editor of Deadspin
Jane Marie, B.A. 2002 – journalist and podcaster, former producer of This American Life and founder of Little Everywhere
Carol Marin, A.B. 1970 – former news anchor; 60 Minutes correspondent; Illinois Journalist of the Year (1988)
Tom Merritt, B.S. journalism – technology journalist and broadcaster on TWiT.tv
Charles (Charlie) Meyerson, B.S. 1977, M.S. 1978 (journalism) – radio, newspaper, and internet reporter
Robert Novak, B.A. 1952 – political commentator and columnist
Suze Orman, B.A. 1976 – financial adviser and author
Ian Punnett – radio talk-show personality and Saturday-night host of Coast to Coast AM
B. Mitchel Reed, B.S., M.A. – radio personality in Los Angeles and New York City
Taylor Rooks, B.S. broadcast journalism – Big Ten Network television personality and sideline reporter
Dan Savage – advice columnist (Savage Love) and theater director
Gene Shalit, 1949 – film critic
Patricia Thompson, 1969 – film and television producer
Terry Teachout, M.A. (music) – theater critic and writer
Douglas Wilson – television personality and designer (Trading Spaces)
Gregor Ziemer – author and journalist, provided expert testimony during the Nuremberg Trials
Literature
Nelson Algren, B.S. 1931 – author of 1950 National Book Award-winning The Man With the Golden Arm
William Attaway, B.A. 1935 – author of Blood on the Forge
Ann Bannon, B.A. 1955 – pulp-fiction writer, author of The Beebo Brinker Chronicles
Dee Brown, M.S. 1951 – author of Bury My Heart at Wounded Knee
John F. Callahan, M.A., Ph.D. – literary executor for Ralph Ellison
Iris Chang, B.A. 1989 – author of The Rape of Nanking
Mary Tracy Earle (1864–1955) – American author
Dave Eggers, attended 1980s and 90s, B.S. 2002 – author of A Heartbreaking Work of Staggering Genius, What Is the What, and Zeitoun (book)
Stanley Elkin, B.A. 1952, Ph.D. 1961 – National Book Critics Circle Award winner for George Mills in 1982 and for Mrs. Ted Bliss in 1995
Lee Falk, 1932 – creator of The Phantom and Mandrake the Magician
Rolando Hinojosa, Ph.D. 1969 – author of Klail City Death Trip Series
Irene Hunt, B.A. 1939 – Newbery Medal-winning author of Up a Road Slowly
Richmond Lattimore, Ph.D. 1935 – poet; translator of the Iliad and the Odyssey
William Keepers Maxwell, Jr., B.A. 1930 – novelist and fiction editor of The New Yorker (1936–1976)
Tulika Mehrotra, B.A. 2002 – author of Delhi Stopover and Crashing B-Town; has written for magazines including Harper's Bazaar, Vogue, India Today, and Men's Health
Nnedi Okorafor, B.A. 1996 – author of Binti, Who Fears Death, and Akata Witch
Harry Mark Petrakis, attended – novelist
Richard Powers, M.A. 1979 – novelist and writer
Shel Silverstein, attended (expelled) – poet, singer-songwriter, musician, composer, cartoonist, screenwriter and author of children's books (Where the Sidewalk Ends)
Anne Valente, M.S. 2007 – novelist, author of Our Hearts Will Burn Us Down and By Light We Knew Our Name
Larry Woiwode, 1964 – poet and novelist
Media
Robert "Buck" Brown – Playboy cartoonist, creator of the libidinous "Granny" character, whose drawings also regularly addressed racial equality issues
Dianne Chandler – Playboy Playmate of the Month, 1966
Brant Hansen – radio personality for Air 1 network
Erika Harold – Miss America 2003
Judith Ford (Judi Nash), B.S. — Miss America 1969
Hugh Hefner, B.A. 1949 – founder of Playboy magazine
Nicole Hollander, B.A. 1960 – syndicated cartoonist of Sylvia
James Holzhauer, B.S. 2005 – Jeopardy record-breaker and professional gambler
Ken Paulson, J.D. – editor-in-chief of USA Today (2004–2008)
Henry Petroski, Ph.D. 1968 – civil engineer and writer
Irna Phillips, 1923 – creator of the soap opera
Military
Lew Allen, Jr., M.S. 1952, Ph.D. 1954 – Chief of Staff of the United States Air Force
Kenneth D. Bailey 1935 – Medal of Honor recipient
Casper H. Conrad Jr. B.S. 1922 – U.S. Army brigadier general
Reginald C. Harmon, LLB 1927 – first United States Air Force Judge Advocate General
Thomas R. Lamont, J.D. 1972 – United States Assistant Secretary of the Army (Manpower and Reserve Affairs)
Jerald D. Slack – U.S. Air National Guard Major General, Adjutant General of Wisconsin
Herbert Sobel – U.S. Army Lieutenant Colonel, commander of Easy Company, 506th Infantry Regiment during World War II, featured in Band of Brothers
Eugene L. Tattini – U.S. Air Force Lieutenant General
David M. Van Buren, B.S. 1971 – Assistant Secretary of the Air Force (Acquisition)
Music
Anton Armstrong – choral director
Jay Bennett – musician in the band Wilco
Marty Casey, B.A. – lead vocalist of the band Lovehammers
Rene Clausen – composer, conductor
Alexander Djordjevic – pianist
Neal Doughty, attended late 1960s – keyboard player and founding member of REO Speedwagon
Dan Fogelberg – singer-songwriter
Nathan Gunn – baritone, opera singer
John B. Haberlen – director of the Georgia State University School of Music
Jerry Hadley – opera singer
Chan Hing-yan – composer and music educator
Kenneth Jennings – composer and music educator
Craig Hella Johnson – choir conductor
Curtis Jones – house music producer
Mike Kinsella, 1999 – indie-rock musician; frontman of the band American Football
Jeffrey Kurtzman, musicologist and music editor
Donald Nally – choral director
Bob Nanna – indie-rock musician; founder of the bands Friction, Braid, Hey Mercedes, and The City on Film
Psalm One – hip-hop artist
John Pierce (born 1959) – operatic tenor and academic voice teacher
Mary McCarty Snow – composer
Matt Wertz – singer-songwriter
Carolyn Kuan – conductor, pianist, music director for Hartford Symphony Orchestra
Brian Courtney Wilson – Grammy-nominated gospel artist
Noam Pikelny – banjo player; recipient of the Steve Martin Prize for Excellence in Banjo and Bluegrass
Performing arts
Ruth Attaway – Broadway and film actress (You Can't Take It With You, Raintree County, Porgy and Bess, and Being There)
Barbara Bain, B.S. – winner of three consecutive Emmy Awards for the role of Cinnamon Carter in Mission: Impossible
Betsy Brandt, B.F.A. 1996 – television actress (Marie Schrader in Breaking Bad)
Timothy Carhart – film and television actor (Pink Cadillac, The Hunt for Red October)
Terrence Connor Carson – singer and stage, voice, and television actor
Arden Cho – actress
Andrew Davis – film director (The Fugitive)
Janice Ferri Esser, B.F.A. 1981, M.S. 1982 – Daytime Emmy-winning writer (The Young & the Restless)
Dominic Fumusa, M.F.A. 1994 – actor (Nurse Jackie)
Grant Gee – film director (Meeting People Is Easy)
Nancy Lee Grahn, briefly attended – Daytime Emmy-winning actress
Gene Hackman, attended – five-time Academy Award-nominated actor
Shanola Hampton – actor (Shameless)
Arte Johnson, 1949 – Laugh-In television personality
Margaret Judson – television actress (The Newsroom)
Chris Landreth, B.S. 1984, M.S. 1986 – Academy Award-winning animator (Best Animated Short Film, 2004, Ryan)
Ang Lee, B.F.A. 1980 – Academy Award-winning movie director (Best Director, 2005, Brokeback Mountain; 2012, Life of Pi)
Ned Luke, 1979 – actor (Grand Theft Auto V)
John Franklin, 1983 – Isaac in Children of the Corn (1984)
Mary Elizabeth Mastrantonio, 1980 – actress (Scarface, Robin Hood: Prince of Thieves, The Color of Money)
John McNaughton – film and television director (Henry: Portrait of a Serial Killer, Wild Things)
Ryan McPartlin – actor (Chuck)
Donna Mills – film and television actress (Knots Landing)
Ben Murphy – television actor (Alias Smith and Jones)
Lucas Neff – actor (Raising Hope)
Nick Offerman, 1993 – actor (Parks and Recreation)
Jerry Orbach, B.A. – Broadway, film and television actor (Dirty Dancing, Detective Lennie Briscoe in Law & Order)
Peter Palmer – actor and singer; played "Li'l Abner" on Broadway and film
Larry Parks – Academy-Award-nominated actor; blacklisted in Hollywood after testifying before the House Un-American Activities Committee
Andy Richter, briefly attended – actor and Conan O'Brien sidekick
Alan Ruck – actor (Ferris Bueller's Day Off, Star Trek Generations, Spin City)
Jonathan Sadowski – actor ($#*! My Dad Says)
Allan Sherman – comedian known for the Grammy Award-winning novelty song "Hello Muddah, Hello Faddah"; television writer and producer (co-creator of I've Got a Secret)
Sushanth, B.E. – Telugu actor
Lynne Thigpen, B.A. 1970 – Tony Award-winning (1997) actress (Where in the World Is Carmen Sandiego?)
Prashanth Venkataramanujam, B.S. 2009 – television comedy writer and producer
Grant Williams – film actor (The Incredible Shrinking Man) and operatic tenor
Roger Young, M.S. – Emmy Award-winning TV and movie director
Politics and Government
U.S. Senate
Carol Moseley Braun – first African-American female United States Senator (Illinois, 1993–1999); U.S. Ambassador to New Zealand and Samoa (1999–2001)
Prentiss M. Brown – United States Senator from Michigan (1936–1943); U.S. Representative from Michigan (1933–1936)
Jon Corzine, A.B. 1969 – Governor of New Jersey (2006–2010) and U.S. Senator from New Jersey (2001–2006), cross listed in Business section
Alan J. Dixon, B.S. – United States Senator from Illinois (1981–1993); 34th Illinois Secretary of State
John Porter East, Law, 1959 – United States Senator from North Carolina (1981–1986)
Kelly Loeffler, B.S. 1992 – United States Senator from Georgia (2020–2021)
U.S. House of Representatives
John Anderson – U.S. Representative from Illinois (1961–1981); 1980 presidential candidate
Willis J. Bailey, 1879 – United States Representative and the 16th Governor of Kansas
Terry L. Bruce, B.A. 1966, J.D. 1969 – U.S. Representative from Illinois's 19th congressional district (1985–1993)
Larry Bucshon — U.S. Representative from Indiana (since 2011)
Edwin V. Champion – U.S. Representative from Illinois (1937–1939)
William J. Graham, B.L. 1893 – U.S. Representative from Illinois (1917-1924)
George Evan Howell, B.S. 1927, LL.B. 1930 – U.S. Representative from Illinois (1941-1947)
Jesse Jackson, Jr., J.D. 1993 – U.S. Representative from Illinois (1995–2012)
Tim Johnson, B.A. 1969, J.D. 1972 – U.S. Representative from Illinois (2001–2013)
Lynn Morley Martin, B.A. 1960 – U.S. Representative from Illinois (1981–1991) and Secretary of Labor in the cabinet of George H.W. Bush (1991–1993)
Peter Roskam, B.A. 1983 – U.S. Representative from Illinois (since 2007), House Republican Chief Deputy Whip (2011–2014)
Kurt Schrader, B.S. 1975, D.V.M. 1977 – U.S. Representative from Oregon (since 2009)
Jan Schakowsky, B.S. 1965 – U.S. Representative from Illinois (since 1999)
Steve Schiff, B.A. 1968 – U.S. Representative from New Mexico (1989–1998)
Harold H. Velde, J.D. 1937 – U.S. Representative from Illinois (1949–1957)
Jerry Weller, B.S. 1979 – U.S. Representative from Illinois (1995–2009)
Executive Branch Officials
Nancy Brinker, 1968 – founder of Susan G. Komen for the Cure; Chief of Protocol of the United States, United States Ambassador to Hungary (2001–2003)
Mark Filip, B.A. 1988 – acting Attorney General of the United States (2009); Deputy Attorney General of the United States (2008–2009); Judge for the U.S. District Court for the Northern District of Illinois (2004–2008)
William Marion Jardine – served as the United States Secretary of Agriculture and the U.S. Ambassador to Egypt
Neel Kashkari, B.S. 1995, M.S. 1997 – Interim Assistant Secretary of the Treasury for Financial Stability in the United States Department of the Treasury
Julius B. Richmond, B.S., M.S. 1939 – 12th United States Surgeon General and the United States Assistant Secretary for Health (1977–1981); vice admiral in the United States Public Health Service Commissioned Corps; first national director for Project Head Start
Samuel K. Skinner, 1960 – Secretary of Transportation (1989–1991); White House Chief of Staff during the George H. W. Bush Administration (1992)
Louis E. Sola, M.S. 1998 – Commissioner, Federal Maritime Commission
Phillips Talbot – United States diplomat, United States Ambassador to Greece (1965–1969)
Statewide Offices
Russell Olson, attended – 39th Lieutenant Governor of Wisconsin (1979–1983)
Ashton C. Shallenberger – 15th Governor of Nebraska
Samuel H. Shapiro – 34th Governor of Illinois (1968); 38th Lieutenant Governor of Illinois (1961–1968)
Juliana Stratton – 48th lieutenant governor of Illinois
Frank White, 1880 – eighth Governor of North Dakota
Leslie Munger – former Illinois Comptroller (2015–2017)
State Legislators
Aaron Ortiz, B.A. 2013 – Illinois House of Representatives and Chicago 14th Ward Committeeman (since 2018)
David S. Olsen, B.S. 2011 – Illinois House of Representatives (2016–2019)
Kiah Morris, B.S. 2006 – Vermont House of Representatives (since 2014)
Tom Fink, J.D. 1952 – Speaker of the Alaska House of Representatives (1973), Mayor of Anchorage (1987–1994)
Allen J. Flannigan – Wisconsin State Assemblyman (1957–1966)
Jehan Gordon-Booth – Illinois House of Representatives (since 2009)
Chuck Graham, B.S. 1987 – Missouri House of Representatives (1996–2002), Missouri State Senate 2004
Robert W. Pritchard – Illinois House of Representatives (since 2003), former Chairman of the DeKalb County Board (1998–2003)
Thomas P. Sinnett, 1909 – Illinois House of Representatives (1924–1940), Democratic Party Floor Leader (1932–1934)
Judiciary
Wayne Andersen, J.D. 1970 – Judge of the United States District Court for the Northern District of Illinois
Harold Baker, J.D. 1956 – Judge of the United States District Court for the Central District of Illinois
Charles Guy Briggle, LL.B. 1904 – Judge of the United States District Court for the Southern District of Illinois
Henry M. Britt, 1941 and 1947 (law) – Arkansas Republican pioneer and circuit judge in Hot Springs
Colin S. Bruce, B.A. 1986, J.D. 1989 – Judge of the United States District Court for the Central District of Illinois
Owen McIntosh Burns, B.A. 1916, LL.B. 1921 – Judge of the United States District Court for the Western District of Pennsylvania
Thomas R. Chiola, J.D. 1977 – Judge of the Illinois Circuit Court of Cook County, first openly gay elected official in Illinois
Brian Cogan, B.A. 1975 – Judge of the United States District Court for the Eastern District of New York
Bernard Martin Decker, B.A. 1926 – Judge of the United States District Court for the Northern District of Illinois
Arno H. Denecke, LL.B. 1939 – Chief Justice of the Oregon Supreme Court
Richard Everett Dorr, B.S. 1965 – Judge of the United States District Court for the Western District of Missouri
Thomas M. Durkin, B.S. 1975 – Judge of the United States District Court for the Northern District of Illinois
Mark Filip, B.A. 1988 – Judge of the United States District Court for the Northern District of Illinois
James L. Foreman, B.S. 1950, J.D. 1952 – Judge of the United States District Court for the Southern District of Illinois
Rita B. Garman, B.S. 1965 – Justice of the Illinois Supreme Court (since 2001)
John Phil Gilbert, B.S. 1971 – Judge of the United States District Court for the Southern District of Illinois
William J. Graham, B.L. 1893 – Judge of the United States Court of Customs and Patent Appeals
James F. Holderman, B.S. 1968, J.D. 1971 – Judge of the United States District Court for the Northern District of Illinois
George Evan Howell, B.S. 1927, LL.B. 1930 – Judge of the United States Court of Claims
William F. Jung, J.D. 1983 – Judge of the United States District Court for the Middle District of Florida
Frederick J. Kapala, J.D. 1976 – Judge of the United States District Court for the Northern District of Illinois
Lloyd A. Karmeier, B.A. 1962, J.D. 1964 – Justice of the Illinois Supreme Court (since 2004)
Alfred Younges Kirkland Sr., B.A. 1941, J.D. 1943 – Judge of the United States District Court for the Northern District of Illinois
Ray Klingbiel, LL.B. 1924 – Chief Justice of the Illinois Supreme Court
Walter C. Lindley, LL.B. 1904, J.D. 1910 – Judge of the United States Court of Appeals for the Seventh Circuit
J. Warren Madden, B.A. 1911 – Judge of the United States Court of Claims
George M. Marovich, B.S. 1952, J.D. 1954 – Judge of the United States District Court for the Northern District of Illinois
Prentice Marshall, B.S. 1949, J.D. 1951 – Judge of the United States District Court for the Northern District of Illinois
William J. Martinez, B.A. 1977, B.S. 1977 – Judge of the United States District Court for the District of Colorado
Frederick Olen Mercer, LL.B. 1924 – Judge of the United States District Court for the Southern District of Illinois
Patricia Millett, B.A. 1985 – Judge of the United States Court of Appeals for the District of Columbia Circuit
Ramon Ocasio III – 6th Judicial Subcircuit Judge, Cook County, Illinois (since 2006)
George True Page – Judge of the United States Court of Appeals for the Seventh Circuit
Casper Platt, B.A. 1914 – Judge of the United States District Court for the Eastern District of Illinois
Philip Godfrey Reinhard, B.A. 1962, J.D. 1964 – Judge of the United States District Court for the Northern District of Illinois
Scovel Richardson, B.A. 1934, M.A. 1936 – Judge of the United States Court of International Trade
Nancy J. Rosenstengel, B.A. 1990 – Judge of the United States District Court for the Southern District of Illinois
Stanley Julian Roszkowski, B.S. 1949, J.D. 1954 – Judge of the United States District Court for the Northern District of Illinois
Howard C. Ryan – Chief Justice of the Illinois Supreme Court
John Sanders, B.S. Political Science 1985 – Williamson County Judge
Roy Solfisburg, J.D. 1940 – Chief Justice of the Illinois Supreme Court
Robert C. Underwood, LL.B. 1939 – Justice of the Illinois Supreme Court (1962–1984)
Fred Louis Wham, LL.B. 1909 – Judge of the United States District Court for the Eastern District of Illinois
Harlington Wood Jr., B.A. 1942, J.D. 1948 – Judge of the United States Court of Appeals for the Seventh Circuit
Staci Michelle Yandle, B.S. 1983 – Judge of the United States District Court for the Southern District of Illinois
Local Offices
Michael Cabonargi, commissioner of the Cook County Board of Review
Bob Fioretti, B.A. in Political Science 1975, Chicago alderman, 2007–2015
M.J. Khan, Master's in Engineering – former member of the Houston City Council
Dick Murphy, B.A. 1965 – Mayor of San Diego (2000–2005)
Thomas D. Westfall (1927–2005) – former mayor of El Paso, Texas
Latasha R. (Johnson) Thomas, B.A. in Political Science 1987, Chicago alderman, 2001–2015
Activists
James Brady, 1962 – White House Press Secretary under Ronald Reagan, handgun-control advocate
Dorothy Day, 1918 – founder of the Catholic Worker Movement
Jesse Jackson – civil-rights leader; presidential candidate; founder of the Rainbow/PUSH Coalition
Victor Kamber, B.S. 1965 – formed The Kamber Group, working for Democratic Party candidates and labor unions
Vashti McCollum – political activist for the separation of religion and public education and the plaintiff of the McCollum case
Carlos Montezuma (Wassaja), B.S. 1884 – Native American activist and a founding member of the Society of American Indians
Atour Sargon, B.A. – Assyrian American activist; first ethnic Assyrian elected to the Lincolnwood board of trustees
Albert Shanker – president of the United Federation of Teachers (1964–1984); president of the American Federation of Teachers (1974–1997)
International Figures
Giorgi Kvirikashvili, M.S. 1998 – Prime Minister of Georgia
Berhane Abrehe, M.S. 1972 – Third Minister of Finance of Eritrea
Rafael Correa, Ph.D. 2001 – President of Ecuador and former Minister of Finance
Cüneyd Düzyol, M.S. 1996 – Turkish Minister of Development
Mustafa Khalil, M.S. 1948, Ph.D., 1951 – former Prime Minister of Egypt (1978–1980)
Atef Ebeid, Ph.D. 1962 – former Prime Minister of Egypt (1999–2004)
Annette Lu – former vice-president of Taiwan (2000–2008)
Oran McPherson – former Speaker of the Legislative Assembly of Alberta; Minister of Public Works for the United Farmers of Alberta government
Maxwell Mkwezalamba, Ph.D. 1995 – Commissioner for Economic Affairs for the African Union Commission (since 2004)
Fidel V. Ramos, 1951 – former President of the Philippines (1992–1998)
Kandeh Yumkella, Ph.D. 1991 – Director-General of the United Nations Industrial Development Organization
Lin Chuan, Ph.D. – Premier of Taiwan and former Minister of Finance
Sri Mulyani Indrawati, M.Sc., Ph.D. – 26th Finance Minister of Indonesia (since 2016); Managing Director of the World Bank Group (2010–2016)
Bambang Brodjonegoro, S.E., M.U.P., Ph.D. – 13th Minister of National Development Planning of Indonesia (since 2016); 29th Finance Minister of Indonesia (2014–2016)
Rajai Muasher, M.Sc., Ph.D. – Jordan's Deputy Prime Minister and Minister of State for Prime Ministry Affairs
Other
Jill Wine-Banks, B.S. – Watergate prosecutor; General Counsel of the Army (1977–1980); Executive Director of the American Bar Association
Science and mathematics
MiMi Aung, BSEE '88, MS '90 – lead engineer on the Mars Helicopter Ingenuity
Rudolf Bayer, Ph.D. 1966 – mathematician and computer scientist known for the B-tree and the red–black tree
Ahmet Nihat Berker, Ph.D. 1977 – condensed matter physicist; president of Sabancı University, Istanbul–Turkey
David Blackwell, Ph.D. 1941 – mathematician; co-namesake of the Rao–Blackwell theorem; first African American inducted into the National Academy of Sciences (1965); first black tenured faculty member at the University of California, Berkeley
Murray S. Blum – entomologist, authority on chemical ecology and pheromones
Harold E. Brooks, Ph.D. 1990 – atmospheric scientist; tornado climatology expert
John Carbon, B.S. 1952 – biochemist; National Academy of Sciences member
Stephen S. Chang, Ph.D. 1952 – food scientist; recipient, IFT Stephen S. Chang Award for Lipid or Flavor Science
Alfred Y. Cho, B.S. 1960, M.S. 1961, Ph.D. 1968 – father of molecular beam epitaxy; received the National Medal of Science in 1993
Karl Clark, Ph.D. – discovered the hot water oil separation process
Cutler J. Cleveland, Ph.D. – editor-in-chief of the Encyclopedia of Energy and the Encyclopedia of Earth
Ronald Cohn, B.S. 1965, M.S. 1967, Ph.D. 1971 – researcher and cameraman who helped document Koko the gorilla
Donald Geman, B.A. 1965 – applied mathematician, who discovered the Gibbs sampler method in computer vision, Random forests in machine learning, and the Top Scoring Pairs (TSP) classifier in bioinformatics; professor at Johns Hopkins University
Josephine Burns Glasgow, A.B. 1909, master's degree, Ph.D. in mathematics 1913 – the second woman to receive a Ph.D. from the University of Illinois
Gene H. Golub, B.S. 1953, M.A. 1954, Ph.D. 1959 – mathematician; awarded the Bernard Bolzano Gold Medal for merit in the mathematical sciences
T. R. Govindachari, Post-doc 1946–49, Natural product chemist, Shanti Swarup Bhatnagar laureate
Temple Grandin, Ph.D. 1989 – animal scientist; bestselling author; consultant to the livestock industry in animal behavior; her biopic (about her life as a woman diagnosed with autism at age two) won five Emmy Awards in 2010
Paul Halmos, B.S. 1935, Ph.D. 1938 – mathematician
Richard Hamming, Ph.D. 1942 – mathematician; developed Hamming code and Hamming distance; winner of 1968 ACM Turing Award; namesake of the IEEE's Richard W. Hamming Medal
Leslie M. Hicks, Ph.D. 2005 – analytical chemist
Donald G. Higman, Ph.D. 1952 – mathematician; discovered the Higman–Sims group
Donald Johanson, B.S. 1966 – anthropologist, discoverer of oldest known hominid, "Lucy"
Cheryl Johnson, M.S. 1948 – Inorganic crystallography major; attended law school, has served since 1999 on the Texas Court of Criminal Appeals in Austin
W. Dudley Johnson, B.S. 1951 – cardiac surgeon known as the father of coronary artery bypass surgery
David A. Johnston, B.S. 1971 – USGS volcanologist killed in the 1980 eruption of Mount St. Helens
Charles David Keeling, B.S. 1948 – chemist, alerted the world about the possible connection between climate change and human activity
Michael Lacey, Ph.D. 1987 – awarded the Salem Prize for solving conjectures about the Bilinear Hilbert Transform
Richard Leibler, Ph.D. 1939 – mathematician and cryptanalyst; formulated the Kullback–Leibler divergence, a measure of similarity between probability distributions; directed the Princeton center of the Institute for Defense Analysis
Sandra Leiblum, Ph.D. – sexologist
Stephanie A. Majewski, B.S. 2002; Ph.D. Stanford University 2007 – physicist
Eloisa Biasotto Mano (1924–2019), Brazilian chemist, professor
Jeffrey S. Moore, Ph.D. 1989 – chemist
Catherine J. Murphy, B.S. 1986 – chemist
P. T. Narasimhan, Post-doc 1957–59 – theoretical chemist, Shanti Swarup Bhatnagar laureate
Rahul Pandit, MS and PhD 1977–82 – condensed matter physicist, Shanti Swarup Bhatnagar laureate
Francine Patterson, B.S. 1970 – researcher who taught a modified version of American Sign Language to the gorilla Koko
Mary Lynn Reed, Ph.D. 1995 – Chief of Mathematics Research at the National Security Agency and president of the Crypto-Mathematics Institute
Harold Reetz, Ph.D. in crop physiology and ecology – agronomist and former president of the Foundation for Agronomic Research
Idun Reiten, Ph.D. 1971 – professor of mathematics; considered to be one of Norway's greatest living mathematicians
John A. Rogers – physical chemist and materials scientist
Allan Sandage, B.S., 1948 – astronomer and cosmologist; winner of 1991 Crafoord Prize
Pierre Sokolsky, Ph.D. 1973 – astrophysicist, Panofsky Prize Laureate, directed the HiRES Cosmic Ray Detector project and pioneer in ultra-high-energy cosmic ray physics
Leia Stirling – American Association for the Advancement of Science Leshner Leadership Fellow and Massachusetts Institute of Technology Professor in Human–computer interaction
Steven Takiff, Ph.D. 1970 – mathematician
Charles W. Woodworth, B.S. 1885, M.S. 1886 – founder of the Division of Entomology, University of California, Berkeley; the PBESA gives the C. W. Woodworth Award
Andrew Chi-Chih Yao, Ph.D. 1975 – computer scientist, winner of 2000 ACM Turing Award
K. R. Sridhar, M.S. 1984, Ph.D. 1989 – founder of Bloom Energy
Alessandro Piccolo – chemist and agricultural scientist; Humboldt Prize in Chemistry, 1999
D. Sangeeta, Ph.D. 1990 – materials chemist; awarded 26 patents related to jet engine technology and materials science; author of Handbook of Inorganic Materials Chemistry (1997) and co-author of Inorganic Materials Chemistry Desk Reference (2004); founder and CEO of Gotara
Sports
Administration
Ron Guenther, B.S. 1967, M.S. 1968 – Illinois Fighting Illini Athletic Director (1992-2011)
Tony Khan, B.S. 2007 – President of All Elite Wrestling, Senior Vice President of Football Administration and Technology of the Jacksonville Jaguars, Vice Chairman and Director of Football Operations of Fulham F.C., son of Shahid Khan
Chester Pittser, B.S. 1924 – Miami University football and basketball coach (1924–1931), Montclair State College football, basketball and baseball coach (1934–1943)
Doug Mills – (1926–1930), Illinois Fighting Illini Athletic Director (1941–1966), Illinois Fighting Illini men's basketball Head Coach (1936–1947)
Josh Whitman, B.S. 2001, J.D. 2008 – Illinois Fighting Illini Athletic Director (2016–present), former NFL player
Baseball
Jason Anderson – Major League Baseball player
Dick Barrett – former Major League Baseball player, member of Pacific Coast League Hall of Fame
Fred Beebe – late Major League Baseball player
Lou Boudreau – late Major League Baseball player; member of the Baseball Hall of Fame
Mark Dalesandro – former Major League Baseball catcher and third baseman
Hoot Evers – former Major League Baseball outfielder (two-time All-Star)
Darrin Fletcher – former Major League Baseball catcher
Moe Franklin – Major League Baseball player
Tom Haller – former Major League Baseball catcher
Ken Holtzman – former Major League Baseball 2-time All-Star pitcher and Israel Baseball League manager
Tanner Roark – Major League Baseball pitcher, Washington Nationals
Marv Rotblatt – Major League Baseball pitcher, Chicago White Sox
Scott Spiezio – has played for the St. Louis Cardinals, Oakland Athletics, Anaheim Angels, and Seattle Mariners
Terry Wells – retired Major League Baseball pitcher
Basketball
Nick Anderson – (1987–1989), played professionally for the NBA's Orlando Magic and Sacramento Kings
James Augustine – basketball (2002–2006), played two seasons for the NBA's Orlando Magic, all-time leader in rebounds at Illinois
Steve Bardo – former National Basketball Association player, current ESPN basketball analyst
Kenny Battle – played in 4 NBA seasons for the Phoenix Suns, Denver Nuggets, Boston Celtics and Golden State Warriors
Tal Brody – American-Israeli former Euroleague basketball player
Dee Brown – former National Basketball Association player
Chuck Carney – (1918–1921), First Big Ten athlete to be named a football and basketball All-American, Helms Foundation College Basketball Player of the Year (1922), twice named a Helms Foundation All-American for basketball (1920 & 1922)
Jerry Colangelo – (1958–1962), Former owner of the NBA's Phoenix Suns, the WNBA's Phoenix Mercury, the CISL's Arizona Sandsharks, the Arena Football League's Arizona Rattlers and MLB's Arizona Diamondbacks
Brian Cook – (1999–2003), Fifth all-time scorer for the Illini, played professionally in NBA
Nnanna Egwu – professional basketball player for the National Basketball League of Australia and New Zealand
Kendall Gill – (1986–1990), 1990 consensus All-American and Big 10 Player of the Year, played professionally for 15 seasons in the NBA
Lowell Hamilton – (1985–1989), played professional basketball in Greece
Derek Harper – (1980–1983), played professionally for 16 seasons in the NBA, ranked 11th all-time in steals and 17th in assists
Luther Head – (2001–2005), guard for the Sacramento Kings
Malcolm Hill – (2013–2017), professional basketball player for the Star Hotshots of the Philippine Basketball Association
Eddie Johnson – played professionally for 17 seasons in the NBA, and the league's 35th all-time leading scorer
Johnny "Red" Kerr – member of the 1952 Final Four team, played professionally for 11 seasons in the NBA, first head coach for both the Chicago Bulls and Phoenix Suns, and a former broadcaster for the Chicago Bulls.
Meyers Leonard – (2010–2012), center for the Portland Trail Blazers, eleventh overall pick in 2012 NBA draft
Demetri McCamey – Turkish Basketball League player
Ken Norman – (1984–1987), played professionally for 10 seasons in the NBA
Don Ohl – basketball (1954–1958), played 10 seasons (1960–1970) in the NBA for three teams (Detroit Pistons, Baltimore Bullets, St. Louis/Atlanta Hawks), 5xNBA All-Star
Johnny Orr – basketball (1944–45), Named the National Coach of the Year for the 1976 season and Big Ten Coach of the Year in college basketball while coaching at Michigan
Stan Patrick – former National Basketball Association player
Andy Phillip – basketball (1941–1943, 1946–1947), Member of the "Whiz Kids", played 11 seasons of professional basketball for the Chicago Stags, Philadelphia Warriors, Fort Wayne Pistons and Boston Celtics (1947–1958), Head Coach of the St. Louis Hawks (1958–1959), 5xNBA All-Star, 2x Consensus All-American
Roger Powell – former National Basketball Association player
Brian Randle (born 1985) – basketball player for Maccabi Tel Aviv of the Israeli Basketball Super League
Dave Scholz – former National Basketball Association player
Cindy Stein – basketball, head women's basketball coach at the University of Missouri since 1998
Jaylon Tate – professional basketball player in the National Basketball League of Canada
Deon Thomas – American-Israeli former Euroleague basketball player
Deron Williams – National Basketball Association player
Frank Williams – has been part of the NBA's New York Knicks, Denver Nuggets, Chicago Bulls, and Los Angeles Clippers
Ray Woods – basketball (1913–1917), named Helms Foundation College Basketball Player of the Year (1917), 3xHelms Foundation All-American (1915–1917), 3xFirst Team All-Big Ten
Football
Paul Adams – former Deerfield High School coach
Alex Agase – Former National Football League player, Cleveland Browns, Member of the College Football Hall of Fame
Ron Acks – former National Football League player, linebacker for the Atlanta Falcons
Jeff Allen – football (2008–2011), offensive tackle for the Kansas City Chiefs
Alan Ball – National Football League player, cornerback for the Jacksonville Jaguars
Arrelious Benn – National Football League player, wide receiver for the Tampa Bay Buccaneers
Chuck Boerio – National Football League player, linebacker for the Green Bay Packers
Ed Brady – former National Football League player, linebacker for the Cincinnati Bengals
Josh Brent – National Football League player, defensive tackle for the Dallas Cowboys
Bill Brown – former National Football League player, running back for the Minnesota Vikings
Darrick Brownlow – former National Football League player, linebacker for the Dallas Cowboys
Lloyd Burdick – National Football League tackle
Dick Butkus – National Football League linebacker; member of the Pro Football Hall of Fame
Luke Butkus – National Football League coach, offensive line coach for the Chicago Bears, nephew of Dick Butkus
J. C. Caroline – former National Football League player, defensive back and halfback for the Chicago Bears
Danny Clark IV – National Football League player, linebacker for the New Orleans Saints
Steve Collier – National Football League player, offensive tackle for the Green Bay Packers
Jameel Cook – former National Football League player, fullback for the Tampa Bay Buccaneers
Vontae Davis – National Football League player, cornerback for the Indianapolis Colts
Mark Dennis – former National Football League player, offensive tackle for the Miami Dolphins
David Diehl – National Football League player, offensive guard for the New York Giants
Doug Dieken – former National Football League player, offensive tackle for the Cleveland Browns
Ken Dilger – (1991–1994), played professionally for the Indianapolis Colts and Tampa Bay Buccaneers; starting tight end in Super Bowl XXXVII
Charles Carroll "Tony" Eason – (1979–1983) played professionally for the New England Patriots; led team to Super Bowl XX
Moe Gardner – former National Football League player, former defensive line for the Atlanta Falcons
Jeff George – first overall pick of 1990 NFL Draft by the Indianapolis Colts, also played for a variety of teams including the Atlanta Falcons, Oakland Raiders, and the Washington Redskins
Lou Gordon – former National Football League player, defensive end for the Chicago Cardinals
Red Grange – charter member of the Pro Football Hall of Fame
Howard Griffith – former National Football League player, fullback for the Denver Broncos
George Halas – former National Football League coach for the Chicago Bears; charter member of the Pro Football Hall of Fame
Don Hansen – former National Football League player, linebacker for the Atlanta Falcons
Kevin Hardy – played professionally for the NFL's Jacksonville Jaguars, Dallas Cowboys, and Cincinnati Bengals
Kelvin Hayden – National Football League player, cornerback for the Chicago Bears
Brad Hopkins – first-round pick in the 1993 NFL Draft by the Tennessee Titans and future All-Pro
Michael Hoomanawanui – (2007–2009), tight end for the New England Patriots
A.J. Jenkins – (2008–2011), wide receiver for the Kansas City Chiefs, thirtieth overall pick in 2012 NFL Draft
Henry Jones – former National Football League player, safety for the Buffalo Bills
Brandon Jordan – Canadian Football League player, defensive tackle for the BC Lions
William G. Kline – head coach for the University of Florida and University of Nebraska football and basketball teams
Mikel Leshoure – National Football League player, running back for the Detroit Lions
Greg Lewis – National Football League player, wide receiver for the Philadelphia Eagles
Brandon Lloyd – (1999–2002), wide receiver for the San Francisco 49ers, 2010 Pro Bowler and 2010 NFL receiving yards leader
Corey Liuget – (2008–2010), defensive end for the San Diego Chargers, eighteenth overall pick in 2011 NFL Draft
Rashard Mendenhall – National Football League player, running back for the Arizona Cardinals and Pittsburgh Steelers.
Whitney Mercilus – (2009–2011), linebacker for the Houston Texans, twenty-sixth overall pick in the 2012 NFL Draft
Brandon Moore – former National Football League player, former offensive guard for the New York Jets
Aaron Moorehead – National Football League player, wide receiver for the Indianapolis Colts
Ray Nitschke – played professionally for the NFL's Green Bay Packers, and an enshrined member of the Pro Football Hall of Fame
Tony Pashos – National Football League player, offensive tackle for the Baltimore Ravens
Preston Pearson – (1963–1967), Played 13 seasons in the NFL for the Colts, Steelers and Cowboys despite not playing college football
Frosty Peters – former National Football League player
Neil Rackers – National Football League player, kicker for the Houston Texans
Simeon Rice – former National Football League player, defensive end
Scott Studwell – football (1972–1976), Played 14 seasons (1977–1990) for the Minnesota Vikings, 2-time Pro-Bowler
Marques Sullivan – Playboy All-American tackle who played 4 seasons with the NFL's Buffalo Bills, New York Giants, and New England Patriots
Pierre Thomas – National Football League player, running back for the New Orleans Saints
Bruce Thornton – former National Football League player, defensive tackle for the Dallas Cowboys
Fred Wakefield – National Football League player, offensive guard for the Arizona Cardinals
Steve Weatherford – National Football League player, punter for the New York Giants
Eugene Wilson – National Football League player, defensive back for the New England Patriots
Isiah John "Juice" Williams – football (2006–2009), NFL Free Agent
Golf
Bob Goalby – professional golfer; won 1968 Masters Tournament
D. A. Points – golf, PGA Golfer (1999–present)
Steve Stricker – (1986–1990), PGA Golfer (1990–present)
Thomas Pieters – (2010–2013), PGA Golfer (2013–present)
Wrestling
David Otunga – professional wrestler; two-time WWE Tag Team Champion
Lindsey Durlacher – two-time All-American Greco-Roman wrestler
Mark Jayne – wrestler; two-time NWCA All-Star Member
Jeff Monson – wrestler; two-time gold medalist ('99 and '05) at the ADCC Submission Wrestling World Championships; current mixed martial artist, formerly of the Ultimate Fighting Championship
Jesse Delgado – wrestler, three-time All-American, two-time National Champion at 125 lbs.
Olympics
Kevin Anderson – Olympian in men's tennis 2008 Summer Olympics in Beijing; 2015 U.S. Open quarterfinalist
Michelle Bartsch-Hackley – gold medalist in women's volleyball 2020 (2021) Summer Olympics in Tokyo
Avery Brundage, B.S. 1909 – Olympian, International Olympic Committee (IOC) President (1952–1972)
Dike Eddleman – (1947–49), tied for second in the high jump at the 1948 Summer Olympics
Abie Grossfeld – Olympic, Pan Am, and Maccabiah Games gymnast and coach
George Kerr – (1958–1960), all-time Big Ten Olympian list, champion sprinter and 400/800 meter runner from Jamaica, 1960 Rome, Italy Summer Olympic bronze medal 800 meter winner
Don Laz – track & field, record-setting American pole vaulter; silver medalist in the pole vault at the 1952 Olympic Games in Helsinki, Finland; later an architect in Champaign, Illinois, whose design career was cut short by a stroke
Daniel Kinsey – gold medalist in men's 110 m hurdles, 1924 Summer Olympics in Paris
Jonathan Kuck – silver medalist in speed skating in the 2010 Winter Olympics in Vancouver
Tatyana McFadden – USA paralympian athlete competing mainly in category T54 sprint events, team member for the 2012 London Olympics
Herb McKenley – silver medalist in 400 m, 1948 Summer Olympics in London; silver medal in 100 m and 400 m, gold medal in 4 × 400 m relay, 1952 Summer Olympics in Helsinki
Harold Osborn – won two gold medals in the 1924 Summer Olympics, charter member of U.S. Track & Field Hall of Fame
Jordyn Poulter – gold medalist in women's volleyball 2020 (2021) Summer Olympics in Tokyo
Bob Richards – gold medalist in pole vault in the 1952 Helsinki and 1956 Melbourne Games
Ashley Spencer – bronze medalist in 2016 Rio de Janeiro Olympics, 400 meter hurdles; 2013 world champion, 4 × 400 m relay
Justin Spring – (2002–2006), member of the bronze medal-winning men's gymnastics team at the 2008 Summer Olympics
Craig Virgin – long-distance runner, 1975 NCAA cross country champion, 1980 and 1981 world cross-country champion
Deron Williams – USA basketball team member for the 2012 London Olympics
Other
Perdita Felicien – first female in Illinois history to win a gold medal in an individual event at the World Championships
Belal Muhammad – (Law) professional mixed martial artist for the UFC
Billy Arnold – race car driver; winner of the 1930 Indianapolis 500
Fictional
Lt. Col. Henry Blake, portrayed by McLean Stevenson on M*A*S*H
Cam Tucker, portrayed by Eric Stonestreet on Modern Family
Miscellaneous
Fred Goetz, a.k.a. "Shotgun" George Ziegler – Prohibition-era gunman and associate of mobsters Gus Winkler and Fred Burke
Notable faculty
Presidents
Chancellors
Nobel laureates
John Bardeen, 1951–1991 – awarded the Nobel Prize in Physics in 1956 for co-inventing the transistor and again in 1972 for work on superconductivity (one of four people to win multiple Nobel Prizes and the only one to win twice in Physics)
Elias James (E.J.) Corey, 1951–1959 – Nobel laureate (Chemistry, 1990)
Leonid Hurwicz, 1950–1951, 2001 – Nobel laureate (Economics, 2007)
Paul Lauterbur, 1985–2007 – Nobel laureate (Physiology or Medicine, 2003)
Anthony James Leggett, 1983 – Nobel laureate (Physics, 2003)
Salvador Luria, 1950–1959 – Nobel laureate (Physiology or Medicine, 1969)
Rudolph Marcus, 1964–1968 – Nobel laureate (Chemistry, 1992)
Franco Modigliani, 1948–1952 – Nobel laureate (Economics, 1985)
Alvin E. Roth, 1974–1982 – Nobel laureate (Economics, 2012)
Pulitzer Prize winners
Leon Dash, faculty – Explanatory Journalism, 1995
Bill Gaines, faculty – Investigative Reporting, 1976 and 1988
Richard Powers, faculty – Fiction, 2019
Other
Elmer H. Antonsen, Ph.D. 1961, faculty 1967–1996, chair of the Department of Germanic Languages and Literatures, later chair of the Department of Linguistics
William Bagley, faculty 1908–1917 – an original proponent of educational essentialism
Tamer Başar – Swanlund Endowed Chair & CAS Professor of Department of Electrical and Computer Engineering; winner of Richard E. Bellman Control Heritage Award in 2006
Gordon Baym, Professor Emeritus in Physics, a theoretician in a wide range of fields including condensed matter physics, nuclear physics, and astrophysics.
Nina Baym, Professor of English 1963–2004, literary critic and literary historian
Richard Blahut – former chair of the Electrical and Computer Engineering Department at the University of Illinois Urbana-Champaign, best known for his Blahut–Arimoto algorithm used in rate–distortion theory; winner of IEEE Claude E. Shannon Award in 2005 and the recipient of IEEE Third Millennium Medal
Leonard Bloomfield, faculty 1910–1921 – linguist who led the development of structural linguistics
Eleanor Blum, Professor Emerita of Library Science at the University of Illinois Urbana-Champaign.
Zong-qi Cai – leads the Forum on Chinese Poetic Culture
Ira Carmen, 1968–2009 – first political scientist elected to the Human Genome Organization; co-founder of the social science subdiscipline of genetics and politics
Wallace Hume Carothers – organic chemist; inventor of nylon and the first synthetic rubber, neoprene
Ron Dewar – music educator, jazz saxophonist, leader of influential Memphis Nighthawks
Anne Haas Dyson, Professor in Curriculum and Instruction
Jan Erkert, chair of the Department of Dance; Fulbright scholar
Jean Bourgain, faculty – Fields Medal in Mathematics of International Mathematical Union, 1994
Joseph L. Doob, faculty 1935–1978 – developed a theory of mathematical martingales
Donald B. Gillies, 1928–1975, professor of mathematics, pioneer in computer science and game theory
Heini Halberstam, 1980–1996 – professor of mathematics, known for the Elliott–Halberstam conjecture
David Gottlieb, 1946–1982 – discovered chloramphenicol; Guggenheim Fellow, Biology-Plant Science, 1963
Donald J. Harris, 1966–1967 – asst. professor of economics, later prof. of economics at Stanford University; father of Vice President Kamala D. Harris
Lejaren Hiller, faculty 1952–1968 – chemist and composer; invented process for dyeing Orlon; pioneer in music composition by computer (1950s)
Nick Holonyak, Jr. – Lemelson-MIT Prize (2004), National Medal of Technology (2002), National Medal of Science (1990); credited for the invention of the LED and the first semiconductor laser to operate in the visible spectrum
Sri Mulyani Indrawati, M.A., Ph.D. 1992 – managing director of the World Bank Group (since 2010), former Finance Minister of Indonesia (2005–2010)
Donald William Kerst, 1938–1957 – developed the betatron
Petar V. Kokotovic – winner of Richard E. Bellman Control Heritage Award in 2002
Frederick Wilfrid Lancaster, Library and Information Science Professor from 1972 to 1992; later professor emeritus of Library and Information Science, a position he held until 2013
Jean-Pierre Leburton – Gregory E. Stillman Professor of Electrical and Computer Engineering and professor of Physics
Stephen E. Levinson – professor of Electrical and Computer Engineering
Stephen P. Long – environmental plant physiologist, Fellow of the Royal Society and member of the National Academy of Sciences studying how to improve photosynthesis to increase the yield of food and biofuel crops
Francis Wheeler Loomis, Head of Physics Department 1929–1957 – former Guggenheim Fellow; established school's physics department
Catherine J. Murphy – professor of chemistry
Lisa Nakamura, Director of the Asian American Studies Program – Author of "Digitizing Race: Visual Cultures of the Internet" (2008), "Cybertypes: Race, Ethnicity and Identity on the Internet" (2002), and co-editor of "Race in Cyberspace" (2002)
Marie Hochmuth Nichols, faculty 1939–1976 – influential rhetorical critic
Mangalore Anantha Pai, power engineer, Shanti Swarup Bhatnagar laureate
Don Patinkin (1922–1995) – Israeli-American economist and President of the Hebrew University of Jerusalem
Herbert Penzl, faculty 1938-1950 - Austrian-American linguist specialized in Germanic philology
Ernst Alfred Philippson, faculty 1947-1968 - German philologist, longtime editor of the Journal of English and Germanic Philology
Abram L. Sachar, 1923–1948 – founding president of Brandeis University
Theodore Sougiannis, distinguished professor of accountancy
Dora Dougherty Strother, 1949-1950 - aviation instructor, test pilot, Women Airforce Service Pilot, and one of the first women to pilot a B-29 bomber.
Fred W. Tanner, 1923–1956 – food microbiologist; charter member of the Institute of Food Technologists; founder of scientific journal Food Research (now the Journal of Food Science)
Alexandre Tombini, Governor of the Central Bank of Brazil
Brian Wansink, 1997–2005 – Julian Simon professor and author of Mindless Eating: Why We Eat More Than We Think
William Warfield, 1976–1990 – bass-baritone singer; chair of the Division of Voice in the College of Music
Elmo Scott Watson, 1916–1924 – journalism professor who specialized in the American West
Carl Woese – Crafoord Prize recipient (bioscience, 2003); professor of microbiology; foreign member of the Royal Society; defined the Archaea
Ladislav Zgusta, faculty 1971-1995 - chair of the Department of Linguistics; director of the Center for Advanced Study; historical linguist and lexicographer from Czechoslovakia
See also
List of people from Illinois
References
Lists of people by university or college in Illinois |
5078775 | https://en.wikipedia.org/wiki/Apple%20Corps%20v%20Apple%20Computer | Apple Corps v Apple Computer | Between 1978 and 2006 there were a number of legal disputes between Apple Corps (owned by The Beatles) and the computer manufacturer Apple Computer (now Apple Inc.) over competing trademark rights. The High Court of Justice in England handed down a judgment on 8 May 2006 in favour of Apple Computer. The companies reached a final settlement, as revealed on 5 February 2007.
History of trademark disputes
1978–1981
In 1978, Apple Corps, the Beatles-founded holding company and owner of their record label, Apple Records, filed a lawsuit against Apple Computer for trademark infringement. The suit was settled in 1981 with an undisclosed amount being paid to Apple Corps. This amount was later revealed to be $80,000. As a condition of the settlement, Apple Computer agreed not to enter the music business, and Apple Corps agreed not to enter the computer business.
1986–1989
In 1986, Apple Computer added MIDI and audio-recording capabilities to its computers, which included putting the advanced Ensoniq 5503 DOC sound chip from famous synthesizer maker Ensoniq into the Apple IIGS computer. In 1989, this led Apple Corps to sue again, claiming violation of the 1981 settlement agreement. The outcome of this litigation effectively ended all forays at the time by Apple Computer into the multimedia field in parallel with the Amiga, and any future advanced built-in musical hardware in the Macintosh line.
1991
In 1991, another settlement involving payment of around $26.5 million to Apple Corps was reached. This time, an Apple Computer employee named Jim Reekes had added a sampled system sound called Chimes to the Macintosh operating system (the sound was later renamed to sosumi, to be read phonetically as "so sue me"). Outlined in the settlement was each company's respective trademark rights to the term "Apple". Apple Corps held the right to use Apple on any "creative works whose principal content is music", while Apple Computer held the right to use Apple on "goods or services ... used to reproduce, run, play or otherwise deliver such content", but not on content distributed on physical media. In other words, Apple Computer agreed that it would not package, sell or distribute physical music materials.
2003–2006
In September 2003, Apple Corps sued Apple Computer again, this time for breach of contract, in using the Apple logo in the creation and operation of Apple Computer's iTunes Music Store, which Apple Corps contended was a violation of the previous agreement. Some observers believed the wording of the previous settlement favoured Apple Computer in this case. Other observers speculated that if Apple Corps was successful, Apple Computer would be forced to offer a much larger settlement, perhaps resulting in Apple Corps becoming a major shareholder in Apple Computer, or perhaps in Apple Computer splitting the iPod and related business into a separate entity.
The trial opened on 29 March 2006 in England, before a single judge of the High Court. In opening arguments, a lawyer for Apple Corps stated that in 2003, shortly before the launch of Apple Computer's on-line music store, Apple Corps rejected a $1 million offer from Apple Computer to use the Apple name on the iTunes store.
On 8 May 2006 the court ruled in favour of Apple Computer, with Mr Justice Mann holding that "no breach of the trademark agreement [had] been demonstrated".
The Judge focused on section 4.3 of that agreement:
The Judge held Apple Computer's use was covered under this clause.
In response, Neil Aspinall, manager of Apple Corps, indicated that the company did not accept the decision: "With great respect to the trial judge, we consider he has reached the wrong conclusion. [...] We will accordingly be filing an appeal and putting the case again to the Court of Appeal." The judgment ordered Apple Corps to pay Apple Computer's legal costs at an estimated GB£2 million, but pending the appeal the judge declined Apple Computer's request for an interim payment of £1.5 million.
The verdict coincidentally led to the Guy Goma incident on BBC News 24, in which a job applicant mistakenly appeared on air after he was confused with computing expert Guy Kewney.
2007
There was a hint that relations between the companies were improving at the January 2007 Macworld conference, when Apple Inc. CEO Steve Jobs featured Beatles content heavily in his keynote presentation and demonstration of the iPhone. During that year's All Things Digital conference, Jobs quoted the Beatles song "Two of Us" in reference to his relationship with co-panelist Microsoft chairman Bill Gates. Speculation abounded regarding the much anticipated arrival of the Beatles' music to the iTunes Store.
As revealed on 5 February 2007, Apple Inc. and Apple Corps reached a settlement of their trademark dispute under which Apple Inc. will own all of the trademarks related to "Apple" and will license certain of those trademarks back to Apple Corps for their continued use. The settlement ends the ongoing trademark lawsuit between the companies, with each party bearing its own legal costs, and Apple Inc. will continue using its name and logos on iTunes. The settlement includes terms that are confidential, although newspaper accounts at the time stated that Apple Computer was buying out Apple Corps' trademark rights for a total of $500 million.
Commenting on the settlement, Apple Inc. CEO Steve Jobs said, "We love the Beatles, and it has been painful being at odds with them over these trademarks. It feels great to resolve this in a positive manner, and in a way that should remove the potential of further disagreements in the future."
Commenting on the settlement on behalf of the shareholders of Apple Corps, Neil Aspinall, manager of Apple Corps said, "It is great to put this dispute behind us and move on. The years ahead are going to be very exciting times for us. We wish Apple Inc. every success and look forward to many years of peaceful co-operation with them."
Reports in April 2007 that Apple Corps had settled another long-running dispute with EMI (and that Neil Aspinall had retired and been replaced by Jeff Jones) further fueled media speculation that The Beatles' catalogue would appear on iTunes.
In early September 2007, an Apple press release for the new iPod touch, related iPod updates, and iPhone price cut was entitled "The Beat Goes On", the title of the Beatles' last press release before splitting up. Although Beatles content was still unavailable from the iTunes store, each Beatle's solo work could be accessed and downloaded on this service. Paul McCartney was quoted in Rolling Stone as saying that their catalogue would be released through digital music stores such as iTunes in the first quarter of 2008, but this did not happen until 2010.
See also
Apple Inc. litigation
A moron in a hurry, a legal test referenced by Apple's lawyers
Confusing similarity, a test in trademark law
References
Bibliography
Apple Corps
Apple Inc. litigation
High Court of Justice cases
2006 in British case law |
19834934 | https://en.wikipedia.org/wiki/Sugar%20Labs | Sugar Labs | Sugar Labs is a community-run software project whose mission is to produce, distribute, and support the use of Sugar, an open source desktop environment and learning platform. Sugar Labs was initially established as a member project of the Software Freedom Conservancy, an umbrella organization for free software (FLOSS) projects, but in 2021 it became an independent 501(c)(3) organization.
About every six months, the Sugar Labs community releases a new version of the Sugar software. The most recent stable release is available as a Fedora Linux spin. Through the ongoing generosity of Nexcopy's RecycleUSB program, Sugar Labs provides Sugar on a Stick to elementary schools.
The Sugar Labs community participates in events for teachers, students, and software developers interested in the Sugar software, such as the Montevideo Youth Summit and Turtle Art Day.
Sugar Labs had participated in Google Code-in, which served as an outlet for young programmers. Sugar Labs is a long-time participant in Google Summer of Code.
References
External links
Sugar Labs homepage
2008 establishments in Massachusetts
Free software companies
One Laptop per Child
Organizations established in 2008 |
53764660 | https://en.wikipedia.org/wiki/Crypto-shredding | Crypto-shredding | Crypto-shredding is the practice of 'deleting' data by deliberately deleting or overwriting the encryption keys.
This requires that the data have been encrypted. Data comes in these three states: data at rest, data in transit and data in use. In the CIA triad of confidentiality, integrity, and availability all three states must be adequately protected.
Disposing of data at rest, such as old backup tapes, data stored in the cloud, computers, phones, and multi-function printers, can be challenging when confidentiality of information is a concern; when encryption is in place, it allows for smooth disposal of the data. Confidentiality and privacy are big drivers of encryption.
Motive
The motive for deleting data can be: defective product, older product, no further use for data, no legal right to use or retain data any more, etc. Legal obligations can also come from rules like: the right to be forgotten, the General Data Protection Regulation, etc.
Use
In some cases everything is encrypted (e.g. harddisk, computer file, database, etc.) but in other cases only specific data (e.g. passport number, social security number, bank account number, person name, record in a database, etc.) is encrypted. In addition, the same specific data in one system can be encrypted with another key in another system.
The more specific pieces of data are encrypted (with different keys) the more specific data can be shredded.
Example: iOS devices use crypto-shredding when activating the "Erase all content and settings" by discarding all the keys in 'effaceable storage'. This renders all user data on the device cryptographically inaccessible.
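The per-record approach described above can be sketched as follows. This is an illustrative assumption, not a production design: the toy SHA-256 counter-mode keystream stands in for a vetted cipher such as AES-GCM, and the in-memory `Store` class stands in for real key storage in an HSM or TPM.

```python
import hashlib
import secrets

def keystream_xor(key, nonce, data):
    """Toy stream cipher: SHA-256 in counter mode, XORed with the data.
    Illustrative only; a real system would use a vetted cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class Store:
    """Each record gets its own key; 'deleting' a record shreds its key."""
    def __init__(self):
        self.keys = {}         # record id -> key (in practice: an HSM/TPM)
        self.ciphertexts = {}  # record id -> (nonce, ciphertext)

    def put(self, rid, plaintext):
        key, nonce = secrets.token_bytes(32), secrets.token_bytes(16)
        self.keys[rid] = key
        self.ciphertexts[rid] = (nonce, keystream_xor(key, nonce, plaintext))

    def get(self, rid):
        nonce, ct = self.ciphertexts[rid]
        return keystream_xor(self.keys[rid], nonce, ct)

    def shred(self, rid):
        del self.keys[rid]  # ciphertext remains, but is now unrecoverable
```

Because each record has its own key, shredding one key renders exactly one record inaccessible while the rest of the store remains readable.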
Best practices
Storing encryption keys securely is important for shredding to be effective. Shredding a symmetric or asymmetric encryption key has no effect if the key has already been compromised (copied). A Trusted Platform Module addresses this issue. A hardware security module is one of the safest ways to use and store encryption keys.
Bring your own encryption refers to a cloud computing security model to help cloud service customers to use their own encryption software and manage their own encryption keys.
Salt: Hashing can be inadequate for confidentiality, because the hash is always the same. For example: the hash of a specific social security number can be reverse-engineered with the help of rainbow tables. Salt addresses this problem.
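A minimal sketch of that point: the identifier value, salt size, and iteration count below are illustrative assumptions. With a random salt, the same input produces a different digest each time, so a precomputed rainbow table cannot match it; verification re-hashes with the stored salt.

```python
import hashlib
import os

def hash_identifier(value, salt=None):
    """Salted, slow hash (PBKDF2): equal inputs with different salts
    yield different digests, defeating precomputed rainbow tables."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per stored value
    digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 100_000)
    return salt, digest
```

The salt is stored alongside the digest; it need not be secret, only unique.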
Security considerations
Encryption strength can be weaker over time when computers get faster or flaws are found.
Brute-force attack: If the data is not adequately encrypted it is still possible to decrypt the information through brute force. Quantum computing has the potential to speed up a brute force attack in the future. However, quantum computing is less effective against symmetric encryption than public-key encryption. Assuming the use of symmetric encryption, the fastest possible attack is Grover's algorithm, which can be mitigated by using larger keys.
Data in use. For example: the (plaintext) encryption keys temporarily used in RAM can be threatened by cold boot attacks, hardware advanced persistent threats, rootkits/bootkits, computer hardware supply chain attacks, and physical threats to computers from insiders (employees).
Data remanence: For example: When data on a harddisk is encrypted after it has been stored there is a chance that there is still unencrypted data on the harddisk. Encrypting data does not automatically mean it will overwrite exactly the same location of the unencrypted data. Also bad sectors cannot be encrypted afterwards. It is better to have encryption in place before storing data.
Hibernation is a threat to the use of an encryption key. When an encryption key is loaded into RAM and the machine is hibernated at that time, all memory, including the encryption key, is stored on the harddisk (outside of the encryption key's safe storage location).
The mentioned security issues are not specific to crypto-shredding, but apply in general to encryption. In addition to crypto-shredding, data erasure, degaussing and physically shredding the physical device (disk) can mitigate the risk further.
References
Data security
Cryptography
Key management
Public-key cryptography
Security |
1996430 | https://en.wikipedia.org/wiki/Computer%20fraud | Computer fraud | Computer fraud is a cybercrime and the act of using a computer to take or alter electronic data, or to gain unlawful use of a computer or system. In the United States, computer fraud is specifically proscribed by the Computer Fraud and Abuse Act, which criminalizes computer-related acts under federal jurisdiction.
Types of computer fraud include:
Distributing hoax emails
Accessing unauthorized computers
Engaging in data mining via spyware and malware
Hacking into computer systems to illegally access personal information, such as credit cards or Social Security numbers
Sending computer viruses or worms with the intent to destroy or ruin another party's computer or system.
Phishing, social engineering, viruses, and DDoS attacks are fairly well-known tactics used to disrupt service or gain access to another's network, but this list is not inclusive.
Notable incidents
The Melissa Virus/Worm
The Melissa Virus appeared on thousands of email systems on 26 March 1999. It was disguised in each instance as an important message from a colleague or friend. The virus was designed to send an infected email to the first 50 email addresses on the users’ Microsoft Outlook address book. Each infected computer would infect 50 additional computers, which in turn would infect another 50 computers. The virus proliferated rapidly and exponentially, resulting in substantial interruption and impairment of public communications and services. Many system administrators had to disconnect their computer systems from the Internet. Companies such as Microsoft, Intel, Lockheed Martin and Lucent Technologies were forced to shut down their email gateways due to the vast number of emails the virus was generating.
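The exponential growth described above can be modelled with a simple, idealized calculation. This sketch assumes every one of the 50 addresses is a distinct, reachable, uninfected host, which real-world overlap and defenses quickly violate, so it is an upper bound, not a measurement.

```python
def infected_by_generation(fanout=50, generations=4):
    """Cumulative infections per generation for a mailer worm that
    sends itself to `fanout` addresses from each infected host."""
    totals, current, total = [], 1, 1  # start from one infected host
    for _ in range(generations):
        current *= fanout  # new infections produced this generation
        total += current
        totals.append(total)
    return totals
```

After only two generations the idealized total already exceeds 2,500 hosts, which illustrates why mail gateways were overwhelmed within days.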
After an investigation conducted by multiple branches of government and law enforcement, the Melissa Virus/Worm was attributed to David L. Smith, a 32-year-old New Jersey programmer, who was eventually charged with computer fraud. Smith was one of the first people ever to be prosecuted for the act of writing a virus. He was sentenced to 20 months in federal prison and was fined $5,000. In addition, he was also ordered to serve three years of supervised release after completion of his prison sentence. The investigation involved members of New Jersey State Police High Technology Crime Unit, the Federal Bureau of Investigation (FBI), the Justice Department’s Computer Crime and Intellectual Property Section, and the Defense Criminal Investigative Service.
See also
Information security
Information technology audit
References
External links
Information Security & Computer Fraud Cases & Investigations
Cornell Law: Computer and Internet Fraud
Internet fraud
Information technology audit |
170531 | https://en.wikipedia.org/wiki/Star%20Raiders | Star Raiders | Star Raiders is a first-person space combat simulator for the Atari 8-bit family of computers. It was written by Doug Neubauer, an Atari employee, and released as a cartridge by Atari in 1979. The game is considered the platform's killer app. It was later ported to the Atari 2600, Atari 5200, and Atari ST.
The game simulates 3D space combat between the player's ship and an invading fleet of alien "Zylon" vessels. Star Raiders was distinctive for its graphics, which, in addition to various map and long range scan views, provided forward and aft first-person views, with movement conveyed by a streaming 3D starfield as the player engaged enemy spacecraft.
While there had already been target-shooting games using the first-person perspective (including 1978's Cosmic Conflict), Star Raiders had considerably higher quality visuals and more complex gameplay. It inspired imitators throughout the 1980s as well as later-generation space combat simulation games including Elite, Wing Commander, and Star Wars: X-Wing.
In 2007, Star Raiders was included in a list of the 10 most important video games of all time, as compiled by Stanford University's History of Science and Technology Collections.
Gameplay
Like the text-based Star Trek games, in Star Raiders the player's ship maneuvers about a two-dimensional grid fighting a fleet of enemy spaceships. In Star Raiders, this part of the game takes the form of a "Galactic Chart" display dividing the game's large-scale world into a grid of sectors, some of which are empty, while others are occupied by enemy ships or a friendly "starbase". The Galactic Chart is the equivalent of the earlier Star Trek's Long Range Scan.
Flying about in the 3D view with the ship's normal engines is sufficient for travel within a sector; travel between sectors is via "hyperspace", accomplished through an elaborate and noisy "hyperwarp" sequence with graphics loosely reminiscent of the Star Wars and Star Trek films in which the stars seemed to stretch to radial lines. On the higher difficulty levels, hyperwarp has a skill element; the player has to keep a wandering cross hair roughly centered during the sequence in order to arrive precisely at the desired destination.
Combat, damage and resources
To the Star Trek formula, the game added real-time 3D battles as a space combat simulator. In the main first-person display, the player can look out of the ship and shoot "photons" at Zylon ships, which come in three different varieties reminiscent of ships from Star Wars, Star Trek, and Battlestar Galactica (whose villains were the similarly titled Cylons).
A small targeting display in the lower right corner gives a general indication of a distant enemy or starbase's position relative to the player's ship, and also indicates when weapons are locked on the enemy, at which point the player's weapons will fire two torpedoes simultaneously. There is also a "long-range scan" screen showing the surrounding region in a third-person overhead view centered on the player's ship, operating like a long-range radar display.
Enemy ships come in three types. The standard Fighters resemble the TIE fighter. The Patrol ships, which do not fire until fired upon, loosely resemble the front-on view of a Cylon Raider or Klingon Battlecruiser. The most powerful Zylon ship, the Basestar, has a pulsating orange glow and resembles a Cylon Basestar. It also has shields, which protect it from incoming fire, thus requiring the player to either hit it multiple times in rapid succession at close range or get it into a Target Lock, which results in two torpedoes being fired simultaneously and tracking the target until impact.
The game has four difficulty levels; on all but the lowest "Novice" level players must steer the ship into hyperspace and collisions with random meteoroids and enemy fire can cause damage to the player's ship. Such damage includes malfunctioning or nonfunctional shields, engines, weapons or information displays. Any collision when shields are down destroys the ship and ends the game. Running out of energy likewise ends the game.
The player has to manage finite energy reserves as well as damage to the ship; it can be repaired and restocked by rendezvous with a friendly starbase. The enemy can also destroy a starbase if allowed to surround its Galactic Chart sector for too long, so the starbases have to be defended. All this lends Star Raiders a degree of complexity and a sense of player immersion that was rare in action games of the era.
Scoring
In contrast to many games of the era, the player can actually win the game by destroying all enemy ships in the galaxy. However, there is no running score display; only upon winning, dying or quitting the game will the player receive a "rating", which is a quasi-military rank accompanied by a numerical class, with particularly bad play earning a rank of "Garbage Scow Captain" or "Galactic Cook". The rating depends on a formula involving the game play level, energy and time used, star bases destroyed (both by player or the enemy), the number of enemies destroyed, and whether the player succeeded in destroying all enemies, was destroyed, or aborted (quit or ran out of energy) the mission. Ratings range from "Rookie" to "Star Commander".
Development
Wanting to make an action-oriented Star Trek-type game, Doug Neubauer designed Star Raiders in about eight to ten months while working for Atari. He left the company while the game was still a prototype to return home to Oregon and join Hewlett-Packard, and reported that it took him six months to reach the highest player-level during development. Star Raiders was unusual at the time for Atari, as it made relatively few game cartridges for its computers, with most being adaptations of Atari 2600 titles.
Technical details
The main simulation continues running even when the user is interacting with other displays. For instance, one might be attacked while examining the Galactic Map. This was unusual for the time, if not unique.
The primary playfield/star field is drawn in the graphics mode that provides 160×96 bitmapped pixels utilizing four color registers at a time out of a palette of 128 colors provided by the CTIA chip in the early Atari computers. This is called ANTIC mode D, but accessed in Atari BASIC by use of the "GRAPHICS 7" command. The Atari's use of an indirect palette means that color changes associated with the presence or absence of energy shields, emergency alarms, and the screen flash representing destruction of the ship can be accomplished by simply changing the palette values in memory registers.
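The indirect-palette trick can be illustrated in miniature. This is a conceptual Python sketch, not actual Atari code: the color names and the tiny screen are made up, and the real hardware holds indices per pixel in ANTIC's framebuffer with colors in CTIA registers.

```python
# Indexed ("indirect") color: pixels store palette register numbers,
# not colors, so rewriting one register instantly recolors every
# pixel that references it -- no framebuffer rewrite needed.
palette = {0: "black", 1: "white", 2: "red", 3: "green"}
screen = [[0, 1, 2],
          [2, 1, 0]]  # 160x96 pixels in the real ANTIC mode D

def render(screen, palette):
    """Resolve each pixel's register number to an actual color."""
    return [[palette[idx] for idx in row] for row in screen]

normal = render(screen, palette)
palette[2] = "blue"  # e.g. shields raised: a single register write
shields_up = render(screen, palette)
```

This is why effects such as the shield tint, alarms, and the destruction flash cost almost nothing: each is a handful of writes to palette registers rather than a redraw of the bitmap.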
Enemy ships, shots, and most other moving objects use Atari's variant of hardware sprites, known as player-missile graphics, which have their own color registers independent of the current screen graphics mode. The radar display in the lower right of screen is drawn using the background graphics, and updated less frequently than the sprites.
The debris particles emitted when an enemy ship is destroyed are calculated as 3D points. Since the 6502 processor in the Atari 8-bit family does not have a native multiply or divide instruction, the game slows down considerably when several of these particles are active.
When rotating the view of the player's ship, the new positions of enemy ships, shots, and other moving objects are computed in 3D, applying a variation of the CORDIC algorithm using only addition, subtraction, and bitshift operations.
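A minimal Python sketch of that shift-and-add rotation follows; it is illustrative, not the game's actual 6502 routine. Multiplying by `2.0 ** -i` here corresponds to an arithmetic right shift on fixed-point hardware, and the arctangent table would be precomputed constants in ROM.

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate (x, y) by `angle` radians in CORDIC rotation mode,
    using only additions, subtractions, and power-of-two scaling
    (shifts in fixed point). Converges for |angle| < ~1.74 rad."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0  # cumulative stretch introduced by the micro-rotations
    for a in angles:
        gain *= math.cos(a)
    for i in range(iterations):
        d = 1.0 if angle >= 0 else -1.0  # steer toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * angles[i]
    return x * gain, y * gain  # undo the known, constant gain
```

Because the per-step rotation angles are fixed, the correction factor `gain` is a constant that can be folded in once at the end, keeping the inner loop multiplication-free.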
The Atari 8-bit family allows different graphics modes and color palettes to be used in different horizontal bands on the screen, by using a simple display list and a type of horizontal blank interrupt. While other games make more extensive use of these techniques, Star Raiders uses them in a relatively simple fashion to combine text displays and graphics; the cockpit display uses a custom character set to display futuristic-looking characters and symbols reminiscent of MICR.
Star Raiders' sounds of engines, shots, explosions, alarms, etc. are algorithmically synthesized directly using the POKEY sound chip's capabilities. Neubauer was involved in the design of POKEY.
The entire game, code and data, fits into 8K (8192 bytes) of ROM, and requires only 8K of RAM for its working data and display visuals; thus it can run on any Atari 8-bit computer.
Ports
Versions of Star Raiders were created for the Atari 2600, Atari 5200, and Atari ST series of computers.
The Atari 5200 version was done by programmer Joe Copson and released in autumn 1982. This version is nearly identical to the computer version, but takes advantage of the 5200's analog joystick by allowing for variable speed turning, and puts all the game functions in the player's hand via the controller's 12-button keypad. Other changes are graphical improvements to the Sector Scan mode by displaying small images of enemy ships and objects instead of pinpoints, alterations to some of the text responses to be more specific to the game-ending action, and automatically switching to Forward View when Hyperspace is engaged.
The Atari 2600 version was programmed by Carla Meninsky and released in 1982. It suffers somewhat due to the 2600's weaker graphics and sound capabilities. It shipped with a special keypad controller, the Video Touch Pad, to take the place of the computer keyboard. Although the controller was designed to accept overlays for compatibility with multiple games, Star Raiders was the only game to utilize it. In this version the Zylons are renamed "Krylons".
The Atari ST version was designed and programmed by Robert Zdybel with graphics and animation by Jerome Domurat and released by Atari Corporation in 1986. It is a very different game in many ways, with more enemy ship types, different weapons, slower action, and a map featuring a triangular grid instead of a square one, which makes it much easier for the Zylon ships to surround starbases.
Reception
While criticizing the violent gameplay, after seeing a demonstration Ted Nelson wrote: "The Atari machine is the most extraordinary computer graphics box ever made, and Star Raiders is its virtuoso demonstration game". Compute! in 1980 wrote that Star Raiders is "incredibly exciting to play and just about as much fun to watch!" It praised the game's use of color and sound to alert the player, and warned that "THIS GAME IS ADDICTIVE!". The magazine InfoWorld wrote: "This game is absolutely guaranteed to put calluses on your trigger-finger". The magazine reported that Star Raiders' complexity encouraged cooperative gameplay, and that "over twenty hours of grueling tests by a battery of ingenious children" had proven that it was free of bugs.
BYTE wrote in 1981 that it was the Atari's killer app: "What can you say about a game that takes your breath away? There are not enough superlatives to describe Star Raiders. Just as the VisiCalc software ... has enticed many people into buying Apple II computers, I'm sure that the Star Raiders cartridge ... has sold its share of Atari 400 and 800 computers". It concluded: "To all software vendors, this is the game you have to surpass to get our attention". Electronic Games agreed, reporting that it "is the game that, in the opinion of many, sells a lot of 400 computers systems", and "has established the standards prospective software marketed will be trying to surpass over the next year or so".
Softline in 1982 called Star Raiders "quite a game ... stands repeat play well and remains quite difficult". In 1983 the magazine's readers named it "The Most Popular Atari Program Ever", with 65% more weighted votes than second-place Jawbreaker, and in 1984 they named the game the most popular Atari program of all time. The Addison-Wesley Book of Atari Software 1984 gave it an overall A rating, praising the realistic graphics and sound.
The book concluded that "the game is simply great" and that despite imitations, "Star Raiders remains the classic". Antic in 1986 stated that "it was the first program that showed all of the Atari computer's audio and visual capabilities. It was just a game, yes, but it revolutionized the idea of what a personal computer could be made to do".
Jerry Pournelle of BYTE named the Atari ST version his game of the month for August 1986, describing it as "like the old 8-bit Star Raiders had died and gone to heaven. The action is fast, the graphics are gorgeous, and I've spent entirely too much time with it".
In 2004, the Atari 8-bit version of Star Raiders was inducted into GameSpot's list of the greatest games of all time. On March 12, 2007, The New York Times reported that Star Raiders was named to a list of the ten most important video games of all time, the so-called game canon. The Library of Congress took up a video game preservation proposal and began with the games from this list, including Star Raiders.
Legacy
Sequels
Star Raiders II (1986)
The Star Raiders II released in 1986 by Atari Corp. had no relationship to the original other than the name; it was a rebranded game originally developed as a licensed tie-in for the movie The Last Starfighter.
Unreleased Star Raiders II
In 2015, Kevin Savetz, host of the ANTIC Podcast, was contacted by former Atari, Inc. programmer Aric Wilmunder. Wilmunder mentioned he had been part of a team working on 8-bit computer games and decided to make a sequel to Star Raiders.
This version of Star Raiders II is faithful to the original gameplay, but designed to make use of new 32 kB cartridges. The torpedoes were replaced with a laser-like weapon that can be aimed semi-independently of the ship's motion. The enemies are now 3D wireframe ships instead of 2D sprites. The gridded galactic map was replaced with a free-form version. In this rendition, the player's home planet is in the upper left of the map, and the enemy ships are ultimately attempting to attack it. A number of planets can also be attacked on the surface in a view based on the Star Wars arcade game, which was being developed down the hall from Wilmunder's office.
The main part of the game was complete by early 1984, but Atari was in disarray and undergoing continual downsizing. Eventually, Wilmunder was laid off, but he kept the source code when he left. He later unsuccessfully tried to interest the new Atari Corporation in the project.
The game, in an untuned but functionally complete and playable state, was added to the Internet Archive along with basic documentation and Wilmunder's telling of the history of the game.
2011 remake
A re-imagining of Star Raiders, developed by Incinerator Studios and published by Atari SA, was released for the PlayStation 3, Xbox 360 and Microsoft Windows on May 11, 2011.
Clones and influenced games
Many games heavily inspired by Star Raiders appeared, such as Starmaster (Atari 2600), Space Spartans (Intellivision), Moonbeam Express (TI-99/4A), Codename MAT (ZX Spectrum and Amstrad CPC), and Star Luster (Famicom–Japan only). Neubauer's own Solaris, for the Atari 2600, is both similar to and in some ways more sophisticated than his earlier game, despite the difference in technology between the two systems.

Star Raiders inspired later space combat games like Elite, the Wing Commander series and BattleSphere.

Star Rangers, an homage to Star Raiders, was released in 2010 for the iPhone. It was written by former 8-bit game programmer Tom Hudson, who was at one time a technical editor for Atari hobbyist magazine ANALOG Computing. As of October 2014, possibly earlier, Star Rangers is no longer in the iOS App Store.
In popular culture
In 1983 DC Comics published a graphic novel inspired by the game; it was the first title of the DC Graphic Novel series. It was written by Elliot S! Maggin and illustrated by José Luis García-López. Early production copies of the Atari 2600 version of the game were accompanied by an Atari Force mini-comic (published by DC Comics). This particular issue was #3 in the series, preceded by mini-comics accompanying the Defender and Berserk games. Two final mini-comics followed with the games Phoenix and Galaxian.
Source code
A scan of the original assembly language source code became available in October 2015 via the Internet Archive. It was used to create a text-file version in GitHub.
References
External links
Interview with Doug Neubauer, October 1986 issue of ANALOG Computing
Reverse engineered source code with documentation
1979 video games
Atari 2600 games
Atari 5200 games
Atari 8-bit family games
Atari ST games
Science fiction video games
Space combat simulators
Video games developed in the United States
Commercial video games with freely available source code |
1570613 | https://en.wikipedia.org/wiki/Pharming | Pharming | Pharming is a cyberattack intended to redirect a website's traffic to another, fake site by installing a malicious program on the computer. Pharming can be conducted either by changing the hosts file on a victim's computer or by exploitation of a vulnerability in DNS server software. DNS servers are computers responsible for resolving Internet names into their real IP addresses. Compromised DNS servers are sometimes referred to as "poisoned". Pharming requires unprotected access to target a computer, such as altering a customer's home computer, rather than a corporate business server.
The term "pharming" is a neologism based on the words "farming" and "phishing". Phishing is a type of social-engineering attack to obtain access credentials, such as user names and passwords. In recent years, both pharming and phishing have been used to gain information for online identity theft. Pharming has become of major concern to businesses hosting ecommerce and online banking websites. Sophisticated measures known as anti-pharming are required to protect against this serious threat. Antivirus software and spyware removal software cannot protect against pharming.
Pharming vulnerability at home and work
While malicious domain-name resolution can result from compromises in the large numbers of trusted nodes involved in a name lookup, the most vulnerable points of compromise are near the leaves of the Internet. For instance, incorrect entries in a desktop computer's hosts file, which circumvents name lookup with its own local name-to-IP-address mapping, are a popular target for malware. Once rewritten, a legitimate request for a sensitive website can direct the user to a fraudulent copy. Personal computers such as desktops and laptops are often better targets for pharming because they receive poorer administration than most Internet servers.
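The hosts-file vector described above is easy to picture. A single appended line is enough to redirect an entire domain; the domain name and attacker address below are purely illustrative (the address is drawn from the TEST-NET-3 documentation range):

```
# /etc/hosts on Unix-like systems, or
# C:\Windows\System32\drivers\etc\hosts on Windows
127.0.0.1      localhost

# Line appended by malware: "examplebank.com" now resolves
# to an attacker-controlled server, and no DNS query is made.
203.0.113.55   examplebank.com www.examplebank.com
```

Because the resolver consults the hosts file before querying DNS, the browser shows the expected address bar while connecting to the attacker's server.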
More worrisome than host-file attacks is the compromise of a local network router. Since most routers specify a trusted DNS to clients as they join the network, misinformation here will spoil lookups for the entire LAN. Unlike host-file rewrites, local-router compromise is difficult to detect. Routers can pass bad DNS information in two ways: misconfiguration of existing settings or wholesale rewrite of embedded software (aka firmware). Many routers allow the administrator to specify a particular, trusted DNS in place of the one suggested by an upstream node (e.g., the ISP). An attacker could specify a DNS server under his control instead of a legitimate one. All subsequent resolutions would go through the bad server.
Alternatively, many routers have the ability to replace their firmware (i.e. the internal software that executes the device's more complex services). Like malware on desktop systems, a firmware replacement can be very difficult to detect. A stealthy implementation will appear to behave the same as the manufacturer's firmware; the administration page will look the same, settings will appear correct, etc. This approach, if well executed, could make it difficult for network administrators to discover the reconfiguration, if the device appears to be configured as the administrators intend but actually redirects DNS traffic in the background. Pharming is only one of many attacks that malicious firmware can mount; others include eavesdropping, active man in the middle attacks, and traffic logging. Like misconfiguration, the entire LAN is subject to these actions.
By themselves, these pharming approaches have only academic interest. However, the ubiquity of consumer grade wireless routers presents a massive vulnerability. Administrative access can be available wirelessly on most of these devices. Moreover, since these routers often work with their default settings, administrative passwords are commonly unchanged. Even when altered, many are guessed quickly through dictionary attacks, since most consumer grade routers don't introduce timing penalties for incorrect login attempts. Once administrative access is granted, all of the router's settings including the firmware itself may be altered. These attacks are difficult to trace because they occur outside the home or small office and outside the Internet.
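The danger of missing timing penalties can be made concrete with a back-of-the-envelope estimate (the figures below are purely illustrative, not measurements of any particular router):

```python
# Illustrative estimate: worst-case time to exhaust a password
# dictionary against a router login, with and without a
# per-attempt timing penalty.

def seconds_to_exhaust(dictionary_size, attempts_per_second):
    """Worst-case time to try every candidate password."""
    return dictionary_size / attempts_per_second

WORDLIST = 100_000  # a modest dictionary of common passwords

# No penalty: a script on the LAN can easily try ~50 logins/second.
fast = seconds_to_exhaust(WORDLIST, 50)

# A 5-second lockout after each failure caps the rate at 0.2/second.
slow = seconds_to_exhaust(WORDLIST, 0.2)

print(f"No delay:  ~{fast / 3600:.1f} hours")   # ~0.6 hours
print(f"5 s delay: ~{slow / 86400:.1f} days")   # ~5.8 days
```

Even a small fixed delay turns a lunch-break attack into a week-long one, which is why the absence of such penalties on consumer routers is significant.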
Instances of pharming
On 15 January 2005, the domain name for a large New York ISP, Panix, was hijacked to point to a website in Australia. No financial losses are known. The domain was restored on 17 January, and ICANN's review blamed Melbourne IT (now known as "Arq Group") "as a result of a failure of Melbourne IT to obtain express authorization (sic) from the registrant in accordance with ICANN's Inter-Registrar Transfer Policy."
In February 2007, a pharming attack affected at least 50 financial companies in the U.S., Europe, and Asia. Attackers created a similar page for each targeted financial company, which required effort and time. Victims clicked on a specific website containing malicious code, which forced their computers to download a Trojan horse that then collected login information for any of the targeted financial companies. The number of individuals affected is unknown, but the incident continued for three days.
In January 2008, Symantec reported a drive-by pharming incident, directed against a Mexican bank, in which the DNS settings on a customer's home router were changed after receipt of an e-mail that appeared to be from a legitimate Spanish-language greeting-card company.
Controversy over the use of the term
The term "pharming" has been controversial within the field. At a conference organized by the Anti-Phishing Working Group, Phillip Hallam-Baker denounced the term as "a marketing neologism designed to convince banks to buy a new set of security services".
See also
Phishing
DNS spoofing
IT risk
Mutual authentication
Trusteer
Notes
References
Sources
External links
After Phishing? Pharming!
Types of malware
Computer security exploits |
102225 | https://en.wikipedia.org/wiki/SuperH | SuperH | SuperH (or SH) is a 32-bit reduced instruction set computing (RISC) instruction set architecture (ISA) developed by Hitachi and currently produced by Renesas. It is implemented by microcontrollers and microprocessors for embedded systems.
At the time of introduction, SuperH was notable for having fixed-length 16-bit instructions in spite of its 32-bit architecture. This was a novel approach; at the time, RISC processors always used an instruction size that was the same as the internal data width, typically 32 bits. Using smaller instructions had consequences: the register file was smaller, and instructions were generally in two-operand format. But for the market the SuperH was aimed at, this was a small price to pay for the improved memory and processor cache efficiency.
Later versions of the design, starting with SH-5, included both 16- and 32-bit instructions, with the 16-bit versions mapping onto the 32-bit version inside the CPU. This allowed the machine code to continue using the shorter instructions to save memory, while not demanding the amount of instruction decoding logic needed if they were completely separate instructions. This concept is now known as a compressed instruction set and is also used by other companies, the most notable example being ARM for its Thumb instruction set.
Many of the original patents for the SuperH architecture have expired or are expiring, and the SH-2 CPU has been reimplemented as open source hardware under the name J2.
History
SH-1 and SH-2
The SuperH processor core family was first developed by Hitachi in the early 1990s. The design concept was for a single instruction set (ISA) that would be upward compatible across a series of CPU cores.
In the past, this sort of design problem would have been solved using microcode, with the low-end models in the series performing non-implemented instructions as a series of more basic instructions. For instance, an instruction to perform a 32 x 32 -> 64-bit multiply, a "long multiply", might be implemented in hardware on high-end models but instead be performed as a series of additions on low-end models.
One of the key realizations during the development of the RISC concept was that microcode had a finite decoding time, and as processors became faster, this represented an unacceptable performance overhead. To address this, Hitachi instead developed a single ISA for the entire line, with unsupported instructions causing traps on those implementations that did not include hardware support. For instance, the initial models in the line, the SH-1 and SH-2, differed only in their support for 64-bit multiplication: the SH-2 supported the long-multiply instructions, whereas the SH-1 would cause a trap if these were encountered.
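The trap-and-emulate idea above can be sketched in a few lines. A low-end core lacking a hardware long multiply could compute a 32 × 32 → 64-bit product as a series of shifts and additions; the function below is an illustrative model of that software fallback, not actual SH-1 trap-handler code:

```python
def long_multiply_32x32(a: int, b: int) -> int:
    """Emulate a 32 x 32 -> 64-bit unsigned multiply using only
    shifts and adds, as a trap handler on a multiplier-less core might."""
    a &= 0xFFFFFFFF
    b &= 0xFFFFFFFF
    product = 0
    for bit in range(32):
        if (b >> bit) & 1:          # if this bit of b is set...
            product += a << bit     # ...add a, shifted into position
    return product & 0xFFFFFFFFFFFFFFFF  # truncate to the 64-bit result

# The emulated result matches the hardware instruction's semantics:
print(hex(long_multiply_32x32(0xFFFFFFFF, 0xFFFFFFFF)))  # 0xfffffffe00000001
```

The same binary thus runs on both models; the low-end part simply pays the loop's cost in the trap handler instead of in silicon.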
The ISA uses 16-bit instructions for better code density than 32-bit instructions, which was a great benefit at the time, due to the high cost of main memory. The downside to this approach was that fewer bits were available to encode a register number or a constant value. In the SuperH ISA, there were only 16 registers, requiring four bits for the source and another four for the destination. The instruction itself was also four bits, leaving another four bits unaccounted for. Some instructions used these last four bits for offsets in array accesses, while others combined the second register slot and the last four bits to produce an 8-bit constant.
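The bit budget described above can be made concrete. The sketch below splits a 16-bit word into four 4-bit fields; it is a simplified illustration of the nibble-based scheme, not a format table from the SuperH manual:

```python
def decode_fields(instruction: int):
    """Split a 16-bit instruction word into four nibbles:
    opcode, destination register, source register, and a spare
    nibble usable as an offset (or merged with the source slot
    to form an 8-bit immediate in other formats)."""
    opcode = (instruction >> 12) & 0xF
    rn     = (instruction >> 8) & 0xF   # destination register (R0-R15)
    rm     = (instruction >> 4) & 0xF   # source register
    extra  = instruction & 0xF          # offset / low immediate nibble
    return opcode, rn, rm, extra

# 0x3B4C -> opcode 0x3, Rn = R11, Rm = R4, extra = 0xC
print(decode_fields(0x3B4C))  # (3, 11, 4, 12)
```

With every field a fixed nibble, the decoder reduces to simple shifts and masks, which is exactly what makes the dense 16-bit encoding cheap to implement.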
Two models were initially introduced. The SH-1 was the basic model, supporting a total of 56 instructions. The SH-2 added 64-bit multiplication and a few additional commands for branching and other duties, bringing the total to 62 supported instructions. The SH-1 and the SH-2 were used in the Sega Saturn, Sega 32X and Capcom CPS-3.
SH-3
A few years later, the SH-3 core was added to the family; new features included another interrupt concept, a memory management unit (MMU), and a modified cache concept. These features required an extended instruction set, adding six new instructions for a total of 68. The SH-3 was bi-endian, running in either big-endian or little-endian byte ordering.
The SH-3 core also added a DSP extension, then called SH-3-DSP. With extended data paths for efficient DSP processing, special accumulators and a dedicated MAC-type DSP engine, this core unified the DSP and the RISC processor world. A derivative of the DSP was also used with the original SH-2 core.
Between 1994 and 1996, 35.1 million SuperH devices were shipped worldwide.
SH-4
In 1997, Hitachi and STMicroelectronics (STM) started collaborating on the design of the SH-4 for the Dreamcast. SH-4 featured superscalar (2-way) instruction execution and a vector floating-point unit (particularly suited to 3D graphics). SH-4 based standard chips were introduced around 1998.
Licensing
In early 2001, Hitachi and STM formed the IP company SuperH, Inc., which was going to license the SH-4 core to other companies and was developing the SH-5 architecture, the first move of SuperH into the 64-bit area. The earlier SH-1 through 3 remained the property of Hitachi.
In 2003, Hitachi and Mitsubishi Electric formed a joint-venture called Renesas Technology, with Hitachi controlling 55% of it. In 2004, Renesas Technology bought STMicroelectronics's share of ownership in the SuperH Inc. and with it the licence to the SH cores. Renesas Technology later became Renesas Electronics, following their merger with NEC Electronics.
The SH-5 design supported two modes of operation. SHcompact mode is equivalent to the user-mode instructions of the SH-4 instruction set. SHmedia mode is very different, using 32-bit instructions with sixty-four 64-bit integer registers and SIMD instructions. In SHmedia mode the destination of a branch (jump) is loaded into a branch register separately from the actual branch instruction. This allows the processor to prefetch instructions for a branch without having to snoop the instruction stream. The combination of a compact 16-bit instruction encoding with a more powerful 32-bit instruction encoding is not unique to SH-5; ARM processors have a 16-bit Thumb mode (ARM licensed several patents from SuperH for Thumb) and MIPS processors have a MIPS-16 mode. However, SH-5 differs because its backward compatibility mode is the 16-bit encoding rather than the 32-bit encoding.
The last evolutionary step happened around 2003 where the cores from SH-2 up to SH-4 were getting unified into a superscalar SH-X core which formed a kind of instruction set superset of the previous architectures, and added support for symmetric multiprocessing.
Continued availability
Since 2010, the SuperH CPU cores, architecture and products are with Renesas Electronics and the architecture is consolidated around the SH-2, SH-2A, SH-3, SH-4 and SH-4A platforms. The system-on-chip products based on SH-3, SH-4 and SH-4A microprocessors were subsequently replaced by newer generations based on licensed CPU cores from Arm Ltd., with many of the existing models still marketed and sold until March 2025 through the Renesas Product Longevity Program.
As of 2021, the SH-2A based SH72xx microcontrollers continue to be marketed by Renesas with guaranteed availability until February 2029, along with newer products based on several other architectures including Arm, RX, and RH850.
J Core
The last of the SH-2 patents expired in 2014. At LinuxCon Japan 2015, j-core developers presented a cleanroom reimplementation of the SH-2 ISA with extensions (known as the "J2 core" due to the unexpired trademarks). Subsequently, a design walkthrough was presented at ELC 2016.
The open source BSD licensed VHDL code for the J2 core has been proven on Xilinx FPGAs and on ASICs manufactured on TSMC's 180 nm process, and is capable of booting µClinux. J2 is backwards ISA compatible with SH-2, implemented as a 5-stage pipeline with separate Instruction and Data memory interfaces, and a machine generated Instruction Decoder supporting the densely packed and complex (relative to other RISC machines) ISA. Additional instructions are easy to add. J2 implements instructions for dynamic shift (using the SH-3 and later instruction patterns), extended atomic operations (used for threading primitives) and locking/interfaces for symmetric multiprocessor support. There are plans to implement the SH-2A (as "J2+") and SH-4 (as "J4") instruction sets as the relevant patents expire in 2016–2017.
Several features of SuperH have been cited as motivations for designing new cores based on this architecture:
High code density compared to other 32-bit RISC ISAs such as ARM or MIPS, important for cache and memory bandwidth performance
Existing compiler and operating system support (Linux, Windows Embedded, QNX)
Extremely low ASIC fabrication costs now that the patents are expiring (around for a dual-core J2 core on TSMC's 180 nm process).
Patent and royalty free (BSD licensed) implementation
Full and vibrant community support
Availability of low cost hardware development platform for zero cost FPGA tools
CPU and SoC RTL generation and integration tools, producing FPGA and ASIC portable RTL and documentation
Clean, modern design with open source design, generation, simulation and verification environment
Models
The family of SuperH CPU cores includes:
SH-1 - used in microcontrollers for deeply embedded applications (CD-ROM drives, major appliances, etc.)
SH-2 - used in microcontrollers with higher performance requirements, in automotive applications such as engine control units, in networking applications, and in video game consoles such as the Sega Saturn. The SH-2 has also found a home in many automotive engine control unit applications, including those of Subaru, Mitsubishi, and Mazda.
SH-2A - The SH-2A core is an extension of the SH-2 core with a few extra instructions but, most importantly, a superscalar architecture (capable of executing more than one instruction in a single cycle) and two five-stage pipelines. It also incorporates 15 register banks to achieve an interrupt latency of 6 clock cycles. It is strong in motor-control applications as well as in multimedia, car audio, powertrain, automotive body control, and office and building automation.
SH-DSP - initially developed for the mobile phone market, used later in many consumer applications requiring DSP performance for JPEG compression etc.
SH-3 - used for mobile and handheld applications such as the Jornada, strong in Windows CE applications and market for many years in the car navigation market. The Cave CV1000, similar to the Sega NAOMI hardware's CPU, also made use of this CPU. The Korg Electribe EMX and ESX music production units also use the SH-3.
SH-3-DSP - used mainly in multimedia terminals and networking applications, also in printers and fax machines
SH-4 - used whenever high performance is required such as car multimedia terminals, video game consoles, or set-top boxes
SH-5 - used in high-end 64-bit multimedia applications
SH-X - mainstream core used in various flavours (with/without DSP or FPU unit) in engine control unit, car multimedia equipment, set-top boxes or mobile phones
SH-Mobile - SuperH Mobile Application Processor; designed to offload application processing from the baseband LSI
SH-2
The SH-2 is a 32-bit RISC architecture with a 16-bit fixed instruction length for high code density and features a hardware multiply–accumulate (MAC) block for DSP algorithms and has a five-stage pipeline.
The SH-2 has a cache on all ROM-less devices.
It provides 16 general purpose registers, a vector-base-register, global-base-register, and a procedure register.
Today the SH-2 family stretches from 32 KB of on-board flash up to ROM-less devices. It is used in a variety of different devices with differing peripherals such as CAN, Ethernet, motor-control timer unit, fast ADC and others.
SH-2A
The SH-2A is an upgrade to the SH-2 core that added some 32-bit instructions. It was announced in early 2006.
New features on the SH-2A core include:
Superscalar architecture: execution of 2 instructions simultaneously
Harvard architecture
Two 5-stage pipelines
Mixed 16-bit and 32-bit instructions
15 register banks for interrupt response in 6 cycles.
Optional FPU
The SH-2A family today spans a wide memory range from 16 KB upward and includes many ROM-less variations. The devices feature standard peripherals such as CAN, Ethernet, USB and more, as well as more application-specific peripherals such as motor control timers, TFT controllers and peripherals dedicated to automotive powertrain applications.
SH-4
The SH-4 is a 32-bit RISC CPU and was developed for primary use in multimedia applications, such as Sega's Dreamcast and NAOMI game systems. It includes a much more powerful floating-point unit and additional built-in functions, along with the standard 32-bit integer processing and 16-bit instruction size.
SH-4 features include:
FPU with four floating-point multipliers, supporting 32-bit single precision and 64-bit double precision floats
4D floating-point dot-product operation and matrix-vector multiplication
128-bit floating-point bus allowing 3.2 GB/sec transfer rate from the data cache
64-bit external data bus with 32-bit memory addressing, allowing a maximum of 4 GB addressable memory with a transfer rate of 800 MB/sec
Built-in interrupt, DMA, and power management controllers
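The dot-product and matrix-vector features in the list above correspond to the inner loop of a 3D geometry transform. The plain Python sketch below shows the math that the SH-4's vector FPU (via its FTRV transform instruction) accelerates in hardware; the code itself is only an illustrative model:

```python
def mat4_vec4(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector --
    the per-vertex transform at the heart of 3D graphics."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Translate the point (1, 2, 3) by (10, 20, 30) in homogeneous coordinates:
translate = [
    [1, 0, 0, 10],
    [0, 1, 0, 20],
    [0, 0, 1, 30],
    [0, 0, 0, 1],
]
print(mat4_vec4(translate, [1, 2, 3, 1]))  # [11, 22, 33, 1]
```

A game applies such a transform to every vertex in every frame, which is why performing the sixteen multiplies and twelve adds in dedicated hardware mattered for the Dreamcast's target workloads.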
There is no FPU in the custom SH4 made for Casio, the SH7305.
SH-5
The SH-5 is a 64-bit RISC CPU.
Almost no non-simulated SH-5 hardware was ever released and, unlike the still-live SH-4, support for SH-5 was dropped from GCC and Linux.
References
Citations
Bibliography
External links
Renesas SuperH, Products, Tools, Manuals, App.Notes, Information
J-core Open Processor
Linux SuperH development list
in-progress Debian port for SH4
Embedded microprocessors
Instruction set architectures
Japanese inventions
Renesas microcontrollers
Open-source hardware
32-bit microprocessors |
5492275 | https://en.wikipedia.org/wiki/Reunion%20%28genealogy%20software%29 | Reunion (genealogy software) | Reunion is genealogy software made by Leister Productions, Inc., a privately held firm established by Frank Leister in 1984 located in Mechanicsburg, Pennsylvania. The company operates as a genealogy (family tree) software developer exclusively for macOS and iOS. Reunion was initially a Macintosh application, programmed in Apple's HyperCard. Version 4 was available for Windows and Macintosh until the Windows version was sold to Sierra in 1997.
Reunion provides methods to create, manipulate and generate reports about a family history. It has the capability to produce charts depicting family relationships and the ability to produce Web pages for publishing a family history online. Reunion can also be used to gather family statistics. It allows integration of images and movies into Reunion family files.
Reunion version history
The announcement pages for the respective versions offer more details as to the exact changes.
Reunion 13 was announced in November 2020. Updates added macOS Big Sur support and native support for Apple M1 CPUs.
Reunion 12 was updated in May 2018. New features include a new Duplicate Check, further improvements to syncing with Reunion's mobile app "ReunionTouch" for iOS, a new Citations List, improvements to Sorting, and a number of other upgrades.
Reunion 11 was announced in April 2015. New features include better syncing with Reunion's mobile app, Book creator to automatically generate PDF books, improved editing, and "on-the-fly" relationships identification.
Reunion 10 was announced in May 2012. New features include web searching, mapping of places, a tree view, a nav bar and a sidebar, image dragging from a web browser, side-by-side matching and merging people, and graphic relationship charts.
Reunion 9 was announced in March 2007. This version became a universal binary Cocoa-based application, which runs under OS X. New features include Unicode support and a less "modal" design, allowing index and source windows to remain open for easier access.
Reunion 8 was announced in September 2002. This version became a Mac OS X native application, providing users of OS X and prior versions of the Macintosh operating system the ability to utilize the software. Charting was significantly enhanced with the move to Reunion 8.
Reunion 7 was announced in May 2000 and among the changes seen at this time was the integration of SuperChart into a single Reunion application and the ability to have multiple family files open at one time.
Reunion 6 was announced in November 1998 and saw the genealogy software change to include pictures into the family card view and introduced the Match & Merge tool that can be used to detect and remove duplicate records in the family file.
Reunion 5 was announced in September 1997 and saw the introduction of drag and drop capabilities when working with the family card. Editing also became easier with start of tabbed windows to allow for faster, more efficient data entry.
Reunion 4 was the last version available for Windows.
Languages
English
The following translations have been completed by Reunion users:
Dutch (version 8 & 9)
French (version 8 & 9)
German
Norwegian
Swedish
External Tools
Third-party utilities and add-ons for Reunion.
References
External links
Reunion Software - for macOS and iOS
ReunionTalk - Forum
Software Review on GenealogyTools.com
Reunion |
10063533 | https://en.wikipedia.org/wiki/Hellenic%20Police | Hellenic Police | The Hellenic Police (, Ellinikí Astynomía, abbreviated ) is the national police service and one of the three security forces of Greece. It is a large agency with responsibilities ranging from road traffic control to counter-terrorism. Police Lieutenant General Michail Karamalakis currently serves as Chief of the Hellenic Police. He replaces Aristeidis Andrikopoulos.
The Hellenic Police force was established in 1984 under Law 1481/1-10-1984 (Government Gazette 152 A) as the result of the fusion of the Gendarmerie (, Chorofylakí) and the Cities Police (, Astynomía Póleon) forces.
According to Law 2800/2000, the Hellenic Police is a security organ whose primary aims are:
Ensuring peace and order as well as citizens' unhindered social development, a mission that includes general policing duties and traffic safety.
Prevention and suppression of crime as well as protecting the state and its democratic form of government within the framework of the constitutional order, a mission which includes the implementation of public and state security policy.
The Hellenic Police is constituted along central and regional lines. The force takes direction from the Minister for Citizen Protection.
The force consists of police officers, civilians, border guards and Special Police Guards.
Structure
Overview
The Hellenic Police force is headed in a de jure sense by the Minister of Public Order and Citizen Protection, however, although the Minister sets the general policy direction of Greece's stance towards law and order as a whole, the Chief of Police is the day-to-day head of the force. Underneath the Chief of Police is the Deputy Chief of Police whose role is largely advisory, though in the event of the Chief of Police being unable to assume his duties the Deputy Chief will take over as the interim head. Regular meetings are also held with the Council of Planning and Crisis Management who are drawn from the heads of the main divisions of the police force and raise relevant issues with the Chief of Police him/herself. Underneath the Deputy Chief of Police is the Head of Staff, who, in addition to acting as 'Principal' of the Police Academy, heads the Security and Order Branch, Administrative Support Branch and Economical-Technical and Information Support Branch. Equal in rank to the Head of Staff are the General Inspectors of Southern and Northern Greece, who have under their jurisdiction the regional services of both these divisions. The Security and Order Branch is by far the most important, and includes the General Police Division, the Public Security Division and the State Security Division, among others.
Regional jurisdiction
Greece is divided into two sectors for policing, each headed by an Inspector General. Each sector contains several regions.
Northern Greece
East Macedonia and Thrace region
Central Macedonia region
West Macedonia region
Thessaly region
Epirus region
North Aegean region
Southern Greece
Central Greece region
Peloponnese region
West Greece region
Ionian Islands region
South Aegean region
Crete region
Special services
The Greek Police force has several special services divisions under the authority of the Chief of Police and working in conjunction with regional and other police sectors where necessary, these are as follows:
Cyber Crime Department ()
Special Violent Crime Squad ( - Dieufthinsi Antimetopisis Eidikon Egklimaton Vias)
Forensic Science Division ( - Dieufthinsi Egklimatologikon Ereunon)
Division of Internal Affairs ( - Dieufthinsi Esoterikon Hypotheseon)
International Police Cooperation Division ( - Dieufthinsi Diethnous Astynomikis Synergasias)
Informatics Division ( - Dieufthinsi Plirophorikis)
Special Suppressive Antiterrorist Unit ( - Eidiki Katastaltiki Antitromokratiki Monada)
Department of Explosive Devices Disposal ( - Tmima Exoudeterosis Ekriktikon Mechanismon)
Hellenic Police Air Force Service ( - Hypiresia Enaerion Meson Hellinikis Astynomias)
Zeta Group (Motorcycle Police) ( - Omada Zeta)
Teams of Bicycle-mounted Police ( - Omades Dicyclis Astynomeusis)
Force of Control Fast Confrontation ( - Dynami Elenchou Tachias Antimetopisis)
Special Guards ( - Eidikoi Frouroi)
Border Guards ( - Synoriophylakes)
Road traffic police ( - Trochaia)
Units for the Reinstatement of (Public) Order (Riot Police) ( - Monades Apokatastasis Taxis)
Unit of Police Dogs ( - Omada Astynomikon Skylon)
Personnel
Ranks of the Hellenic Police Force
History
19th century
Though there was what constituted a police force under the provisional Government of Greece during the Greek War of Independence, the first organized police force in Greece was the Greek Gendarmerie, established in 1833 after the enthronement of King Otho. It was at that time formally part of the army and under the authority of the Defence Ministry (later the entirety of the organization, including the Police Academy, was brought under its authority). A city police force was also established, but its role remained secondary to the army's (mainly dealing with illegal gambling, a severe problem at the time). Several foreign advisers (particularly from Bavaria, which emphasized elements of centralization and authoritarianism) were also brought in to provide training and tactical advice to the newly formed police force. The main task of the police force under the army during this period was firstly to combat theft, but also to contribute to the establishment of a strong executive government.
The army's links to the police, and the military-like structure and hierarchy of the police force, were maintained throughout the 19th century for a number of reasons, chief among them the socio-political unrest that characterized the period, including widespread poverty, governmental oppression, sporadic rebellions and political instability. As a result of this, as well as the input of the armed forces, the police force remained a largely conservative body throughout the period; there was also a certain amount of politicization during training, as police recruits were trained in military camps.
20th century
In 1906 the Greek police force underwent its first major restructuring at an administrative level, acquiring its own educational and training facilities independent of those of the army (though still remaining titularly part of the armed forces). Despite this, the Gendarmerie maintained a largely military-based structure owing to its involvement in the Macedonian Struggle, the Balkan Wars and the First World War; as a result it tended to neglect civilian matters and was partially unresponsive to the needs of Greek society at the time. However, together with the establishment of a civilian city police force for Athens in 1920 (which would eventually be expanded to the entire country), it set a precedent for further change, which came in 1935 as a result of rapid technological, demographic and economic developments and helped the force become more responsive to the civilian policing needs of the time.
However, modernization of the police force was stunted by successive periods of political instability. The dictatorship of Ioannis Metaxas, compounded by the Second World War and the Greek Civil War, retarded reform throughout the late 1930s and the early to mid-1940s. After the war, British experts were brought in to help reform the police along the lines of the British police; as a result, after 1946 the force ceased to be part of the Defence Ministry, though even then it did not abandon its military features and remained a prevalently military-based institution. The Civil War of the period also saw excesses on both sides (government forces and the guerrillas of the communist-led Democratic Army of Greece); torture and abuses of human rights were widespread, especially during the early period of the war, when parts of the country were in a state of near lawlessness. Despite this, after the war the police force did reach a respectable level of civilian policing through the mid-1960s, which was cut short by the rise to power of the military dictatorship of the Colonels from 1967 to 1974, under which the police, along with the dictatorship's newly established Greek Military Police, were largely employed as a means of quelling popular discontent.
After the fall of the Colonels, the Greek Military Police was eventually disbanded and Greece became a republic. Despite strong opposition from the Gendarmerie, in 1984 both the city police and the Gendarmerie were merged into a single unified Greek police force, which maintained elements of a military structure and hierarchy. Because of the long tradition of militaristic elements within the structure of the police, even the Council of State of Greece ruled that the police should be regarded as a military body and that its members are not civilians but members of the military, engaged in a wider role together with the Armed Forces to supplement the Army in defence of the homeland. In recent years, however, this has been relegated to policing duties such as border patrols and combating illegal immigration, and is not reflective of any de facto military duties beyond a defensive role in the event of an invasion. Today the Greek Police assist in training various emerging Eastern European and African police forces, and Greece has one of the lowest crime rates in the European Union.
Social service
Since 2012, the Hellenic Police has operated a website that provides useful information to children and parents using the Internet.
In 2013, the Cyber Crime Center of the Hellenic Police, under the auspices of the Ministry for Citizen Protection, organised a number of conferences across the country to inform children and parents about the dangers a child may face while using the Internet.
The Cyber Crime Center of the Hellenic Police, currently run by Police Brigadier General Emmanuel Sfakianakis (an agent trained by the FBI), has intervened in a large number of suicide attempts, saving more than 600 lives altogether.
A significant part of the training for officers of all posts concerns the protection and safeguarding of the well-being of children, while any form of child abuse is met with a "zero tolerance" policy.
Additionally, the Hellenic Police has actively supported the Children's Smile organization with a financial donation and the reassurance that the agency was, is and will remain "for life" an active supporter of the organization.
Current issues
Several issues affect the police in Greece today. Of particular importance is the rise in drug-related crime, sometimes attributed to increased immigration from Albania and other former Eastern Bloc countries. This has particularly affected Athens, and in particular Omonoia Square, which has become a central point for drug-related activity in Greece.
Illegal immigration is also a problem, as Greece remains both a destination and a transit point for illegal immigrants, particularly from Albania (and, increasingly, African and Asian countries). There has been an effort in recent years to step up security procedures along Greece's borders (though some allege the approach to this issue has been too heavy-handed). The issue of the recruitment of immigrants has also been raised by opposition PASOK MPs in Parliament several times.
Greece is also one of the few EU countries with a rising crime rate, though comparatively the crime rate is still very low by EU standards. Some also allege there is a division within the Greek police force between 'modern' and 'traditional' elements: they claim the traditional element is underpinned by the long history of links with the military, whereas the 'modern' element favours the police playing a greater social role in society (for example, in drug rehabilitation).
Controversies
Torture methods
The Greek police have been accused by several organizations of overt and generally unpunished brutality, in specific cases such as the aftermath of the 2008 Greek riots and the 2010–2012 Greek protests sparked by the Greek government-debt crisis. Amnesty International has issued a detailed report on police violence in Greece, concerning its practices in policing demonstrations, its treatment of illegal immigrants, and other matters, while Human Rights Watch has criticized the organization over its stance towards immigrants and allegations of torture of detainees, and Reporters Without Borders has accused the police of deliberately targeting journalists.
Furthermore, the police have been accused of allegedly planting evidence on detainees and mistreating arrested individuals. A 29-year-old Cypriot man, Avgoustinos Dimitriou, was awarded €300,000 in damages following his videotaped beating by plainclothes police officers during a 2006 demonstration in Thessaloniki.
In November 2019, Amnesty International again issued a report on police violence and the use of torture methods. In 2020, 26-year-old Vasilis Maggos from Volos was found dead one month after his arrest (during an environmental demonstration) and his beating by police officers, which had caused him serious organ damage.
In 2021, the Border Violence Monitoring Network published a report into the use of torture and inhuman treatment during pushbacks by Greek police. They assert that:
89% of pushbacks carried out by Greek authorities contained one or more forms of violence and abuse that we assert amounts to torture or inhuman treatment
52% of pushback groups subjected to torture or inhuman treatment by Greek authorities contained children and minors
Coalition of Radical Left controversy
In 2012, the Syriza political party disagreed with the measures taken by the state authorities and the police against illegal immigration.
In early November 2012, the Minister of Public Order, Nikos Dendias, accused various MPs of the Coalition of the Radical Left of "impersonating authority". According to the accusations, members of the party stopped a number of policemen while they were on duty in order to check their credentials. Moreover, they took photographs of plainclothes police officers and uploaded them to the party's website (left.gr). The accusations prompted an angry reply from the party's spokesman, who called them "dirty accusations".
Allegations of ties with the far Right
In a 1998 interview with the newspaper Eleftherotypia, Minister for Public Order Georgios Romaios (PASOK) alleged the existence of "fascist elements in the Greek police", and vowed to suppress them.
Before the surrender of Androutsopoulos, an article by the newspaper Ta Nea claimed that the neo-Nazi political party Golden Dawn had close relationships with some parts of the Greek police force.
The newspaper then published a photograph of a typewritten paragraph with no identifiable insignia as evidence of the secret investigation. In the article, the Minister for Public Order, Michalis Chrysochoidis, responded that he did not recollect such a probe. Chrysochoidis also denied accusations that far-right connections within the police force delayed the arrest of Periandros. He said that leftist groups, including the ultra-left anti-state resistance group 17 November, responsible for several murders, had similarly evaded the police for decades. In both cases, he attributed the failures to "stupidity and incompetence" on the part of the force.
Golden Dawn stated that rumours about the organisation having connections to the Greek police and the government are untrue, and that the police had intervened in Golden Dawn's rallies and had arrested members of the party several times while the New Democracy party was in power (for example, during a rally in Thessaloniki in June 2006, and at a rally for the anniversary of the Greek genocide, in Athens, also in 2006). Also, on January 2, 2005, anti-fascist and leftist groups invaded Golden Dawn's headquarters in Thessaloniki, under heavy police surveillance. Although riot police units were near the entrance of the building alongside the intruders, they allegedly did not attempt to stop their actions.
The "communicating vessels" between the police and neo-Nazis resurfaced on the occasion of a riot that broke out during a protest march on June 28, 2011, when squads of riot police rushed to protect agents provocateurs isolated by the angry crowd; two of them, A. Soukaras and A. Koumoutsos, both unionists of ETHEL (ΕΘΕΛ), were well known for their extreme opinions as well as their frequent presence in riots.
In July 2012, it was reported that Nils Muižnieks, Council of Europe Commissioner for Human Rights, had placed the alleged ties of Greek Police and Golden Dawn under scrutiny, following reports of the Greek state's continued failure to acknowledge the problem.
According to political analyst Paschos Mandravelis, "A lot of the party's backing comes from the police, young recruits who are a-political and know nothing about the Nazis or Hitler. For them, Golden Dawn supporters are their only allies on the frontline when there are clashes between riot police and leftists."
Following the May 6, 2012 Greek Parliamentary election, in which Golden Dawn entered the Greek parliament, it was revealed that more than one out of two police officers voted for the party in districts adjacent to Athens' Attica General Police Directorate (GADA). Since the election, Greek police officers have been implicated in violent incidents between Golden Dawn members and migrants. In September, one police officer was suspended for participating in a Golden Dawn raid against migrant-owned kiosks in an open market at Mesolongi; seven other officers were identified. Anti-fascist demonstrators were allegedly tortured in police custody that same month. In October, Greek police allegedly stood by while Golden Dawn members attacked a theater holding a production of the controversial play Corpus Christi.
Transportation
The most common police vehicles in Greece are the white-with-blue-stripes Citroën Xsara, Škoda Octavia, Mitsubishi Lancer Evolution X, Hyundai i30, Citroën C4, Citroën C4 Picasso, Suzuki SX4, Jeep Liberty, Peugeot 308, Volkswagen Golf, and Nissan Qashqai. Other vehicles that the Greek Police have used throughout the years include the following:
1984, 1985 Mitsubishi Galant
1985 Mitsubishi Lancer
1985 Daihatsu Charmant
1986, 1990, 1992 Nissan Sunny
1991 Renault 19
1991 Opel Vectra
1991 Volvo 460
1995 Citroën ZX
1995, 1997 Toyota RAV4
1995, 1996, 1999, 2000 Opel Astra
1996 Suzuki Baleno
1997, 1998 Nissan Primera
1998, 2000 Toyota Corolla
1998 Citroën Saxo
1998, 1999 Citroën Xantia
1998, 1999 Nissan Almera
1999, 2000 Nissan Terrano II
2000 Kia Sportage
The original livery featured white roofs and doors, with the rest of the bodyshell in dark blue.
The current livery was first introduced on the Citroën ZXs, although the blue stripe on the earlier models was not reflective; from this came another nickname for the Greek police, "stroumfakia" (smurfs).
Most Greek police vehicles are equipped with a customized Car PC, which offers GPS guidance and is connected directly with the Hellenic "Police On Line" network.
As of 2011, a number of police vehicles are being modified to be equipped with onboard surveillance cameras.
Police equipment
Handguns:
Beretta M9 (ITA)
Heckler & Koch USP (GER)
Smith & Wesson Model 910 (US)
Ruger GP100 (US)
Submachine guns:
Heckler & Koch MP5 (GER)
Uzi (ISR)
FN P90 (BEL)
Assault rifles:
FN FAL (BEL)
AK-47 (RUS)
AK-74M (RUS)
M4 carbine (US)
Kefeus (GR)
Police Academy
The Hellenic Police Academy was established in 1994 with the passing of Law 2226/1994 through Parliament. It is situated in Athens and is under the jurisdiction of the Minister of Public Order, though the Chief of Police can make recommendations and act as an advisor to the Minister on improvements and other issues pertaining to the Academy (for example, structural reform). The Minister and the Chief of Police make annual speeches at the Academy to prospective police officers. The school's staff is made up of university professors, specialist scientists (in areas such as forensics) and high-ranking police officers who have specialist field experience. Entrance to the academy is based on examinations and an interview, though requirements differ depending on which particular school of the academy the student wishes to join.
The Police Academy includes:
The School for Police Officers, for high school students who wish to become commissioned Police officers.
The School for Police Constables, for high school students who wish to become non-commissioned Police officers.
The School for Postgraduate Education and lifelong learning.
The National Security School, for high-ranking police personnel (also open to other categories of public servants such as Firemen).
Training
The Hellenic Police has a set of basic training requirements that apply to all positions within the agency.
These are the protection of the Constitution, the tackling of criminal activity and assistance in disaster situations. The emphasis during training on the support and protection of children is such that a number of highly successful individuals who were raised as orphans have stated that they could not say with certainty that they would have made it all the way to the top without the social service that the Hellenic Police provided to them during their childhood.
See also
Europol
Interpol
N.I.S. of Greece
Crime in Greece
Notes
After the War was Over, Mark Mazower (Reconstructing the family, nation and state in Greece).
External links
Hellenic Police website
National Central Bureaus of Interpol
1984 establishments in Greece
Emergency services in Greece
Human rights abuses in Greece |
39171547 | https://en.wikipedia.org/wiki/Hardware%20code%20page | Hardware code page | In computing, a hardware code page (HWCP) refers to a code page supported natively by a hardware device such as a display adapter or printer. The glyphs to present the characters are stored in the alphanumeric character generator's resident read-only memory (like ROM or flash) and are thus not user-changeable. They are available for use by the system without having to load any font definitions into the device first. Startup messages issued by a PC's System BIOS or displayed by an operating system before initializing its own code page switching logic and font management and before switching to graphics mode are displayed in a computer's default hardware code page.
Code page assignments
In North American IBM-compatible PCs, the hardware code page of the display adapter is typically code page 437. However, various portable machines as well as (Eastern) European, Arabic, Middle Eastern and Asian PCs used a number of other code pages as their hardware code page, including code page 100 ("Hebrew"), 151 ("Nafitha Arabic"), 667 ("Mazovia"), 737 ("Greek"), 850 ("Multilingual"), encodings like "Roman-8", "Kamenický", "KOI-8", "MIK", and others. Most display adapters support a single 8-bit hardware code page only. The bitmaps were often stored in an EPROM in a DIP socket. At most, the hardware code page to be activated was user-selectable via jumpers, configuration EEPROMs or CMOS setup. However, some of the display adapters designed for Eastern European, Arabic and Hebrew PCs supported multiple software-switchable hardware code pages, also named font pages, selectable via I/O ports or additional BIOS functions.
In contrast to this, printers frequently support several user-switchable character sets, often including various variants of the 7-bit ISO/IEC 646 character sets such as code page 367 ("ISO/IEC 646-US / ASCII"), sometimes also a couple of 8-bit code pages like code page 437, 850, 851, 852, 853, 855, 857, 860, 861, 863, 865, and 866. Printers for the Eastern European or Middle Eastern markets sometimes support other locale-specific hardware code pages to choose from. They can be selected via DIP switches or configuration menus on the printer, or via specific escape sequences.
Support in operating systems
When operating systems initialize their code page switching logic, they need to know the previously active hardware code page, but have no means to determine it by themselves. Therefore, for code page switching to work correctly, the hardware code page needs to be specified.
Under DOS and Windows 9x this is accomplished by specifying the hardware code page as a parameter (hwcp) to the device drivers DISPLAY.SYS and PRINTER.SYS in CONFIG.SYS:
DEVICE=…\DISPLAY.SYS CON=(type,hwcp,n|(n,m))
DEVICE=…\PRINTER.SYS PRN=(type,hwcp,n)
If multiple hardware code pages are supported in OEM issues, the first hardware code page (hwcp1) in the list specifies the default hardware code page:
DEVICE=…\DISPLAY.SYS CON=(type,(hwcp1,hwcp2,…),n|(n,m))
DEVICE=…\PRINTER.SYS PRN=(type,(hwcp1,hwcp2,…),n)
If no hardware code page(s) are specified, these drivers default either to a dummy code page number 999 or assume the hardware code page to be equal to the primary code page (the first code page listed in COUNTRY.SYS files for a particular country with the country code either specified in the CONFIG.SYS COUNTRY directive or assumed to be the operating system's internal default, usually 1 (US) in Western issues of DOS).
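As an illustration, a DOS setup for a UK system whose display adapter has hardware code page 437 but which should run with the primary code page 850 might look like the following (the drive, paths and the number of prepared code pages are examples, not fixed values):

```
REM CONFIG.SYS
COUNTRY=044,850,C:\DOS\COUNTRY.SYS
DEVICE=C:\DOS\DISPLAY.SYS CON=(EGA,437,1)

REM AUTOEXEC.BAT
NLSFUNC C:\DOS\COUNTRY.SYS
MODE CON CODEPAGE PREPARE=((850) C:\DOS\EGA.CPI)
CHCP 850
```

Here the 437 parameter tells DISPLAY.SYS which code page is already resident in the adapter's ROM, so that DOS can later switch back to it without loading a font.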
In many English-speaking countries, the primary code page is either 437 (e.g. in the US) or 850 (e.g. in the UK, Ireland and Canada), so that, without specifying a different code page, the system would often assume one of these to be the corresponding device's default hardware code page as well.
If a hardware code page does not match one of those with official code page assignments, an arbitrary number from the range 57344–61439 (E000h–EFFFh) for user-definable code pages or 65280–65533 (FF00h–FFFDh) for private use code pages could be specified per IBM CDRA to give the operating system a non-conflictive "handle" to select that code page.
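The ranges involved can be sketched with a small helper; the function name and return labels below are ours for illustration, not part of any DOS or CDRA API:

```python
def codepage_class(cp: int) -> str:
    """Classify a code page number per the IBM CDRA ranges described above."""
    if 57344 <= cp <= 61439:   # E000h-EFFFh: user-definable code pages
        return "user-definable"
    if 65280 <= cp <= 65533:   # FF00h-FFFDh: private use code pages
        return "private use"
    return "registered/other"  # officially assigned numbers such as 437 or 850

assert codepage_class(0xE000) == "user-definable"
assert codepage_class(65280) == "private use"
assert codepage_class(437) == "registered/other"
```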
Arabic and Hebrew MS-DOS do not use DISPLAY.SYS and PRINTER.SYS, but provide similar facilities using ARABIC.COM, HEBREW.COM, and SK.
OEM code pages
Hardware code pages are also OEM code pages. The designation "OEM", for "original equipment manufacturer", indicates that the character set could be changed by the manufacturer to meet different markets.
However, OEM code pages do not necessarily reside in ROM, but include so-called prepared code pages (also known as downloadable character sets or downloadable fonts): character sets loaded as raster fonts into the font RAM of suitable display adapters (like the Sirius 1/Victor 9000, NEC APC, HP 100LX/200LX/700LX, Persyst's BoB Color Adapter, Hercules' HGC+, InColor and Network Plus with RAMFONT, and IBM's MCGA, EGA, VGA, etc.) and printers as well. Hence, the group of OEM code pages is a superset of hardware code pages.
See also
PC-9800 series
Alt codes
Notes
References
External links
DOS code pages
Character encoding |
898290 | https://en.wikipedia.org/wiki/Bug%20tracking%20system | Bug tracking system | A bug tracking system or defect tracking system is a software application that keeps track of reported software bugs in software development projects. It may be regarded as a type of issue tracking system.
Many bug tracking systems, such as those used by most open-source software projects, allow end-users to enter bug reports directly. Other systems are used only internally in a company or organization doing software development. Typically bug tracking systems are integrated with other project management software.
A bug tracking system is usually a necessary component of a professional software development infrastructure, and consistent use of a bug or issue tracking system is considered one of the "hallmarks of a good software team".
Making
A major component of a bug tracking system is a database that records facts about known bugs. Facts may include the time a bug was reported, its severity, the erroneous program behavior, and details on how to reproduce the bug; as well as the identity of the person who reported it and any programmers who may be working on fixing it.
Typical bug tracking systems support the concept of the life cycle for a bug which is tracked through the status assigned to the bug. A bug tracking system should allow administrators to configure permissions based on status, move the bug to another status, or delete the bug. The system should also allow administrators to configure the bug statuses and to what extent a bug in a particular status can be moved. Some systems will e-mail interested parties, such as the submitter and assigned programmers, when new records are added or the status changes.
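The life-cycle idea can be sketched as a small state machine; the statuses and the transition table below are illustrative, not those of any particular tracker:

```python
# Illustrative workflow: which statuses a bug may legally move to next.
TRANSITIONS = {
    "new": {"assigned", "closed"},
    "assigned": {"in progress", "closed"},
    "in progress": {"resolved"},
    "resolved": {"closed", "reopened"},
    "reopened": {"assigned"},
    "closed": set(),
}

class Bug:
    """A bug record holding the facts a tracker's database typically stores."""

    def __init__(self, title: str, severity: str, reporter: str):
        self.title = title
        self.severity = severity
        self.reporter = reporter
        self.status = "new"
        self.history = ["new"]

    def move_to(self, status: str) -> None:
        """Move the bug to another status, enforcing the configured workflow."""
        if status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {status}")
        self.status = status
        self.history.append(status)

bug = Bug("Crash on save", severity="major", reporter="alice")
bug.move_to("assigned")
bug.move_to("in progress")
bug.move_to("resolved")
```

A real system would store such records in a database and attach per-status permissions; this sketch only shows the status-transition check itself.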
It is possible to perform automated diagnosis based on the content of the bug report.
For instance, one can do automated detection of bug duplicates or automatic bug fixing.
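One simple (and deliberately naive) way to flag likely duplicates is word-overlap similarity between report summaries; production systems use more sophisticated text-retrieval and machine-learning techniques, so the threshold and helper names below are only illustrative:

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two bug summaries, from 0.0 to 1.0."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def likely_duplicates(new_summary, existing, threshold=0.5):
    """Return existing summaries whose overlap with the new one is high."""
    return [s for s in existing if jaccard(new_summary, s) >= threshold]

reports = ["app crashes when saving file", "wrong colour in toolbar icon"]
dups = likely_duplicates("app crashes on saving a file", reports)
# dups == ["app crashes when saving file"]
```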
Usage
The main benefit of a bug-tracking system is to provide a clear centralized overview of development requests (including both bugs and improvements, the boundary is often fuzzy), and their state. The prioritized list of pending items (often called backlog) provides valuable input when defining the product road map, or maybe just "the next release".
In a corporate environment, a bug-tracking system may be used to generate reports on the productivity of programmers at fixing bugs. However, this may sometimes yield inaccurate results because different bugs may have different levels of severity and complexity. The severity of a bug may not be directly related to the complexity of fixing the bug. There may be different opinions among the managers and architects.
A local bug tracker (LBT) is usually a computer program used by a team of application support professionals (often a help desk) to keep track of issues communicated to software developers. Using an LBT allows support professionals to track bugs in their "own language" and not the "language of the developers." In addition, an LBT allows a team of support professionals to track specific information about users who have called to complain—this information may not always be needed in the actual development queue. Thus, there are two tracking systems when an LBT is in place.
Part of integrated project management systems
Bug and issue tracking systems are often implemented as a part of integrated project management systems.
This approach allows including bug tracking and fixing in a general product development process, fixing bugs in several product versions, automatic generation of a product knowledge base and release notes.
Distributed bug tracking
Some bug trackers are designed to be used with distributed revision control software. These distributed bug trackers allow bug reports to be conveniently read, added to the database or updated while a developer is offline. Fossil and Veracity both include distributed bug trackers.
Recently, commercial bug tracking systems have also begun to integrate with distributed version control. FogBugz, for example, enables this functionality via the source-control tool, Kiln.
Although wikis and bug tracking systems are conventionally viewed as distinct types of software, ikiwiki can also be used as a distributed bug tracker. It can manage documents and code as well, in an integrated distributed manner. However, its query functionality is not as advanced or as user-friendly as some other, non-distributed bug trackers such as Bugzilla. Similar statements can be made about org-mode, although it is not wiki software as such.
Bug tracking and test management
While traditional test management tools such as HP Quality Center and IBM Rational Quality Manager come with their own bug tracking systems, other tools integrate with popular bug tracking systems.
See also
Application lifecycle management
Comparison of issue-tracking systems – Including bug tracking systems
Comparison of project management software – Including bug tracking systems
References
External links
How to Report Bugs Effectively
List of distributed bug tracking software
How to Write a Bug Report that Will Make Your Developers Happy
Bug tracking system
Help desk
Project management software |
38362359 | https://en.wikipedia.org/wiki/Comm100 | Comm100 | Comm100 (Comm100 Network Corporation) is a provider of customer service and communication products. All its products are available as a SaaS (Software as a Service). The company serves over 200,000 businesses.
History
The company was founded on 3 July 2009 in Vancouver, British Columbia, Canada. The first product (Comm100 Live Chat) was introduced on 5 August 2009. The company reached the 100,000 registered business users milestone in April 2011. The company's products and services were offered for free until Christmas 2011. In May 2013, the company joined M3AAWG as a supporter. In 2013, Comm100 was a silver winner of the Best in Biz Awards International.
Products and services
The company provides a suite of business communication tools, from customer service to marketing. The two most popular products are:
Comm100 Live Chat: live support software that allows users to track and chat with their website visitors in real time.
Comm100 Email Marketing: bulk email software that helps users send opt-in email newsletters to subscribers.
References
Software companies of Canada
Business software companies
Cloud applications
Customer relationship management software
Software companies established in 2009
2009 establishments in British Columbia |
7645334 | https://en.wikipedia.org/wiki/Wordfast | Wordfast | The name Wordfast is used for any of a number of translation memory products developed by Wordfast LLC. The original Wordfast product, now called Wordfast Classic, was developed by Yves Champollion in 1999 as a cheaper alternative to Trados, a well-known translation memory program. The current Wordfast products run on a variety of platforms, but use largely compatible translation memory formats, and often also have similar workflows. The software is most popular with freelance translators, although some of the products are also suited for corporate environments.
Wordfast LLC is based in Delaware, United States, although most of the development takes place in Paris, France. Apart from these two locations, there is also a support centre in the Czech Republic. The company has around 50 employees.
History
Development on Wordfast version 1 (then called simply Wordfast) was begun in 1999 in Paris, France, by Yves Champollion. It was made up of a set of macros that ran inside of Microsoft Word, version 97 or higher. At that time, other translation memory programs also worked inside Microsoft Word, for example Trados.
Until late 2002, this MS Word-based tool (now known as Wordfast Classic) was freeware. Through word of mouth, Wordfast grew to become the second most widely used TM software among translators.
In 2006, the company Wordfast LLC was founded by Philip Shawe and Elizabeth Elting, also co-owners of the translation company TransPerfect. In July 2006, Mr. Champollion sold all interest in the Wordfast Server computer program to Wordfast LLC. Since that time, Wordfast has been the sole owner of all right, title and interest, including the copyright, in the Wordfast Server Code. Since then, Champollion, while holding the title Founder and Chief Architect, has also been CEO and president of the company Wordfast LLC.
In January 2009, Wordfast released Wordfast Pro, a standalone Java-based TM tool.
In May 2010, Wordfast released a free online tool known as Wordfast Anywhere. This tool allows translators to work on projects from virtually any web-enabled device including smartphones, PDAs and tablets. By July 2010, 5000 users had registered at Wordfast Anywhere, and by November 2010 the number was 10,000.
In April 2016, Wordfast released Wordfast Pro 4, a major upgrade to their standalone Java-based TM tool. This tool includes advanced translation management features including a WYSIWYG editor, the ability to create multilingual translation packages, real-time quality assurance, and a powerful translation editor filter. An update to Wordfast Pro 4 called "Wordfast Pro 5" was released in April 2017, with mostly under-the-hood performance improvements.
Products
Wordfast Classic (WFC)
Wordfast Classic (WFC) is written in Visual Basic, and runs inside Microsoft Word. It supports Word 97 or higher on any platform, although some features present in recent versions of WFC only work on higher versions of Microsoft Word.
When a document is translated using WFC, it is temporarily turned into a bilingual document by adding segment delimiting characters and retaining both source and target text within the same file. Afterwards, the source text and segment delimiters are removed from the file in a process called "cleaning up". For this reason, the bilingual file format is often referred to as an "uncleaned file". This workflow is similar to that of Trados 5, WordFisher, Logoport and Metatexis.
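As a sketch of the "clean-up" step, the snippet below strips segments of the Trados-style uncleaned format (source, a match percentage, then target, wrapped in delimiters); the exact delimiters and regex are illustrative rather than Wordfast's actual implementation:

```python
import re

# Illustrative segment pattern: {0>source<}match%{>target<0}
UNCLEANED = re.compile(r"\{0>.*?<\}\d+\{>(.*?)<0\}")

def clean_up(bilingual_text: str) -> str:
    """Drop the source text and delimiters, keeping only the target text."""
    return UNCLEANED.sub(r"\1", bilingual_text)

doc = "Intro. {0>Hello world.<}0{>Bonjour le monde.<0} Outro."
cleaned = clean_up(doc)  # "Intro. Bonjour le monde. Outro."
```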
WFC can handle the following formats: any format that Microsoft Word can read, including plain text files, Word documents (DOC/DOCX), Rich Text Format (RTF), tagged RTF and tagged XML. Earlier versions of WFC could translate Microsoft Excel (XLS/XLSX) and PowerPoint (PPT/PPTX) files directly, but this feature is not present in the most recent version. Users of the current version of WFC who need to translate Excel and PowerPoint files need to use a demo copy of Wordfast Pro 3 or 5 to create an intermediary file format. WFC does not offer direct support for OpenDocument formats because the current versions of Microsoft Word do not have import filters for OpenDocument files.
WFC is actively developed, with regular updates, and the most recent major version number is 8.
History
The first version of Wordfast Classic was called Wordfast version 1, and was developed by Yves Champollion. It was distributed to the public in 1999.
Version 2 was used by the translation agency Linguex, which acquired a 9-month exclusive usage right for their in-house staff and affiliated freelancers in late 1999. During this time Wordfast was expanded with features such as rule-based and glossary quality control, and network support. After the demise of Linguex, version 3 of Wordfast was released to the public, as a free tool with mandatory registration.
In mid-2001 the developers of Wordfast signed a joint-venture agreement with the translation group Logos for distribution of the program, under a newly created UK company called Champollion Wordfast Ltd. The joint venture ceased in August of that year after Logos failed to share its own source code with the Wordfast developer, despite having gained access to Wordfast's source code by intercepting the developers' e-mails. For a number of years after the joint venture ended, Logos continued to distribute an older version of Wordfast from its web site, www.wordfast.org, and claimed to own the rights to the "Wordfast" name and to the distribution of newer versions.
Initially, version 3 was freeware, with mandatory registration, using a serial number generated by the user's computer. In October 2002, Wordfast became a commercial product with three-year licenses at a price of EUR 170 for users from wealthy countries and EUR 50 (later EUR 85) for users from other countries.
Wordfast Anywhere (WFA)
Wordfast Anywhere (WFA) is a free browser-based version of Wordfast, with a workflow and user interface similar to that of Wordfast Classic. Although the service is free, certain restrictions apply, most notably an upload limit of 2 MB (although files can be zipped) and a limit of 10 documents simultaneously worked on.
WFA's privacy policy is that all uploaded documents remain confidential and are not shared. Users can optionally use machine translation and access a large read-only public translation memory.
In addition to being usable on mobile devices running Windows Mobile, Android and Palm OS, WFA is also available as an iPhone app. WFA has built-in optical character recognition of PDF files.
WFA can handle Word documents (DOC/DOCX), Microsoft Excel (XLS/XLSX), PowerPoint (PPT/PPTX), Rich Text Format (RTF), Text (TXT), HTML, InDesign(INX), FrameMaker (MIF), TIFF (TIF/TIFF) and both editable and OCR-able PDF. It does not offer support for OpenDocument formats. Although translation is done within the browser environment, users can also download unfinished translations in Wordfast Pro 5's TXLF bilingual format as well as in a bilingual review format that can be handled by many other modern CAT tools.
Users of Wordfast Classic and Wordfast Pro 3 and 5 can connect to translation memories and glossaries stored on Wordfast Anywhere directly from within their respective programs.
History
Wordfast Anywhere was released in May 2010, although development versions were available to the public as early as May 2009. Before the product was first released, it was not certain whether it would remain a free service after release.
OCR capability for PDF files was added to Wordfast Anywhere in April 2011. Wordfast Anywhere became available as an iPhone app in 2012.
Wordfast Pro 3 (WFP3)
Wordfast Pro 3 (WFP3) is a stand-alone, multiplatform (Windows, Mac, Linux) translation memory tool with features much like those of Wordfast Classic and other major CAT tools. WFP3 uses a bilingual intermediary format called TXML, which is a simple XML format. WFP3 extracts all translatable content from source files and stores it in a TXML file, which is then translated in WFP3. When the translation is finished, WFP3 merges the TXML file with the original source file to create a translated target file.
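The extract-translate-merge round trip can be illustrated with a toy sketch. The element names below are invented for illustration and do not reproduce the actual TXML schema:

```python
import re
import xml.etree.ElementTree as ET

def extract(source: str) -> ET.Element:
    """Pull translatable text runs out of a tagged source into a
    TXML-like tree (element names invented), one <segment> per run."""
    root = ET.Element("txml")
    for i, run in enumerate(re.findall(r">([^<>]+)<", source)):
        seg = ET.SubElement(root, "segment", id=str(i))
        ET.SubElement(seg, "source").text = run
        ET.SubElement(seg, "target").text = run  # overwritten by the translator
    return root

def merge(source: str, tree: ET.Element) -> str:
    """Write the (translated) targets back into the original source,
    leaving the surrounding markup untouched."""
    targets = iter([seg.findtext("target") for seg in tree.iter("segment")])
    return re.sub(r">([^<>]+)<", lambda m: ">" + next(targets) + "<", source)
```

After `extract`, the translator edits only the `target` elements; `merge` then rebuilds the target file with the markup intact.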
WFP3 can export unfinished translations to a bilingual review file, which is a Microsoft Word DOC file, and import updated content from the same format again.
WFP3 can handle Word documents (DOC/DOCX), Excel (XLS/XLSX), PowerPoint (PPT/PPTX), Visio (VSD/VDX/VSDX), Portable Object files (PO), Rich Text Format (RTF), Text (TXT), HTML (HTML/HTM), XML, ASP, JSP, Java, InDesign (INX/IDML), InCopy (INC), FrameMaker (MIF), Quark (TAG), Xliff (XLF/XLIFF), SDL Trados (SDLXLIFF/TTX) and editable PDF. It does not offer support for OpenDocument formats.
WFP3 uses the same translation memory and glossary format as Wordfast Classic. WFP3 and WFP5 can be installed on the same computer.
History
The program that is currently referred to as "Wordfast Pro 3" was initially called "Wordfast Pro 6.0", because the latest version of Wordfast Classic at that time was version 5. However, after Wordfast Classic progressed to version number 6 itself, Wordfast Pro's version numbering was restarted.
A pre-release trial version was available from late 2008, and the first release came in January 2009. Version 2 was released by mid-2009, and version 3 became available by early 2012. The current major version, 3.4, was released on 15 April 2014. WFP3 is still available from the Wordfast web site, and a license for "Wordfast Pro 5" includes a separate license for WFP3, but development on this product has stopped.
Wordfast Pro 5 (WFP5)
Wordfast Pro 5 (WFP5) is a stand-alone, multiplatform (Windows, Mac, Linux) translation memory tool with features much like those of other major CAT tools. WFP5 uses a bilingual intermediary format called TXLF, which is a fully compliant XLIFF variant. WFP5 extracts all translatable content from source files and stores it in a TXLF file, which is then translated in WFP5. When the translation is finished, WFP5 merges the TXLF file with the original source file to create a translated target file.
WFP5 can handle Word documents (DOC/DOCX), Excel (XLS/XLSX), PowerPoint (PPT/PPTX), Visio (VSD/VDX/VSDX), Portable Object files (PO), Rich Text Format (RTF), Text (TXT), HTML (HTML/HTM), XML, ASP, JSP, Java, JSON, InDesign (INX/IDML), InCopy (INC), FrameMaker (MIF), Quark (TAG), Xliff (XLF/XLIFF), TXLF, and SDL Trados (SDLXLIFF/TTX). It does not offer support for OpenDocument formats.
WFP5 offers pseudo-WYSIWYG capability. Its translation memory and glossary formats are not compatible with those of Wordfast Pro 3 and Wordfast Classic. WFP5 lacks some features that are present in Wordfast Pro 3, but also has many features that Wordfast Pro 3 does not.
History
In December 2013, the Wordfast company released a beta version of a completely redesigned version of Wordfast Pro, version 4, and announced that the new version would be released early in 2014; in the end, Wordfast Pro 4 was only released in mid-2015. Version 5 was released in April 2017.
PlusTools
This is a set of free tools that run inside Microsoft Word, designed to help Wordfast Classic translators perform specific advanced functions such as text extraction and alignment.
VLTM Project (Very Large Translation Memory)
Users can leverage content from a very large public TM, or set up a private workgroup where they can share TMs among translators they are collaborating with.
Wordfast Server (WFS)
Wordfast Server (WFS) is a secure TM server application that works in combination with either Wordfast Classic, Wordfast Pro, or Wordfast Anywhere to enable real-time TM sharing among translators located anywhere in the world.
WFS supports translation memories in TMX or Wordfast TM (txt) format, can handle up to 1,000 TMs of 1 billion translation units (TUs) each, and can serve up to 50,000 users simultaneously.
WFS can be used indefinitely and for free in demo mode for up to 2 concurrent users. Users of other translation tools, namely MemoQ and SDL Trados, can also connect to WFS linguistic assets.
Supported translation memory and glossary formats
The original Wordfast translation memory format was a simple tab-delimited text file that can be opened and edited in a text editor. Wordfast products can also import and export TMX files for memory exchange with other major commercial CAT tools. Wordfast's original glossary format was a simple tab-delimited text file. These formats are still used today by Wordfast Anywhere, Wordfast Classic, Wordfast Server, and Wordfast Pro 3 (TM only, not the glossary). Wordfast Pro 5 uses a database format for the TM and glossary. The transition to a database format was made to increase the TM and glossary size limitations (e.g. from 1 million to 5 million translation units) as well as improve concordance search speeds.
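Because the classic TM is plain tab-delimited text, reading it requires nothing more than splitting lines on tabs. A minimal sketch, assuming a simplified column layout (date, user, usage count, source language, source segment, target language, target segment) that approximates but does not reproduce the documented format:

```python
# Assumed columns (illustrative, not the exact documented layout):
# date  user  usage-count  src-lang  source  tgt-lang  target
TM_TEXT = (
    "20230101~120000\tMM\t3\tEN-US\tHello world.\tFR-FR\tBonjour le monde.\n"
    "20230102~090000\tMM\t1\tEN-US\tGood morning.\tFR-FR\tBonjour.\n"
)

def load_tm(text: str) -> dict:
    """Index translation units by their exact source segment."""
    tm = {}
    for line in text.splitlines():
        fields = line.split("\t")
        if len(fields) >= 7:       # skip malformed or comment lines
            tm[fields[4]] = fields[6]
    return tm
```

An exact-match lookup is then just a dictionary access; a real tool adds fuzzy matching on top of this.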
Wordfast products can also import TBX files for terminology exchange with other major commercial CAT tools.
Wordfast products can support multiple TMs and glossaries. Text-based TMs can store up to 1 million TUs and glossaries can store up to 250,000 entries each. Wordfast Pro 5 TMs can store up to 5 million TUs and glossaries over 1 million entries each.
Wordfast is able to use server-based TMs, and retrieve data from machine translation tools (including Google Translate and Microsoft Translator).
Documentation and support
Comprehensive user manuals can be downloaded from the Wordfast web site. Wordfast Pro and Wordfast Anywhere also offer online help pages. Users can access the Wordfast wiki for help getting started, tips and tricks, FAQs, etc. Video tutorials are available on Wordfast's dedicated YouTube page.
Wordfast offers users free technical support with the purchase of a license. Users can also access free peer support forums available in dozens of languages.
Pricing and licensing
WFC and WFP can be purchased individually for 400 euros per license or together as a suite for 500 euros per license. Special discounts are available for users in certain countries. Bulk discounts also apply for the purchase of 3 or more licenses. All Wordfast licenses include, from date of purchase:
free email support;
free upgrades to new releases of the software for three years;
the right to relicense the software to keep it running for three years.
After the 3-year license period, users can renew their license for another 3 years for 50% of the standard list price of a license at the time of renewal.
Copyright matter
According to an October 2017 court filing in New York State Supreme Court, after the sale of Wordfast to Wordfast LLC, Wordfast allowed the Wordfast Server code, together with the Wordfast trademark, to be used by the translation company TransPerfect under a non-exclusive license. The lawsuit challenges a custodian appointed by a Delaware judge to sell TransPerfect, accusing him of copyright infringement, namely pirating the code, to enhance the valuation of the firm.
See also
Computer-assisted translation
References
External links
Official Wordfast Site (High-bandwidth)
Official Wordfast Site (Low-bandwidth)
Wordfast Anywhere Site
Yves Champollion
User groups
Wordfast Classic Yahoogroup
Wordfast Pro Yahoogroup
Wordfast Anywhere Yahoogroup
Translation software
Computer-assisted translation software programmed in Java |
48445080 | https://en.wikipedia.org/wiki/Kingdom%20%28video%20game%29 | Kingdom (video game) | Kingdom is a kingdom-building strategy/resource management game developed by Thomas van den Berg and Marco Bancale with support from publisher Raw Fury. The title was released on 21 October 2015 for Microsoft Windows, OS X, and Linux systems. A port for Xbox One was released on 8 August 2016, the ports for iOS and Android were released on 31 January 2017, and the ports for the Nintendo Switch and PlayStation 4 were released on 14 September 2017 and 16 January 2018 respectively. A reworked version of the game, titled Kingdom: New Lands, was released in August 2016, and a sequel, Kingdom: Two Crowns, was released in 2018.
The game is played out on a pixel art-based two-dimensional landscape; the player controls a king or queen who rides back and forth, collecting coins and spending them on various resources, such as hiring soldiers and weaponsmiths, building defenses against creatures that can attack and steal the king or queen's crown (which ends the game), and otherwise expanding their kingdom. The player otherwise has little direct control of the game, and thus must use the coins they collect in judicious ways.
Kingdom received generally positive reviews appreciating the game's art and music, and its approach that required the players to figure out what to do based on these elements, but felt that the tedious nature of some tasks in the game affected its end-game and replayability.
Gameplay
Kingdom is presented to the player on a pixel art, two-dimensional screen, with the goal of building up a kingdom while surviving various foes that will attempt to capture the player-character's crown, effectively ending their rule. The player starts the game with a randomly generated king or queen on horseback. The player can move the character to the left or right, including at a gallop, but has no other direct action. As the character passes landmarks, these will produce a few coins, which the player can pick up by riding over them and spend on various resources; these are marked with open coin slots when the character passes near them, and to purchase an upgrade, the player must be able to provide all the required coins at that time.
Initially, such resources will include hiring wandering travelers (vagrants) to become part of the kingdom and creating smiths to craft weapons and tools. As the player gathers more resources, new options to spend coins will open up. For example, once the player has a citizen of the kingdom with a tool, they can build defensive walls. Many such resources include upgrades that can be purchased with coins. To get more coins, the player roams their kingdom, collecting them from their subjects; static landmarks can grant more coins, as well as working people who fund the player after they complete certain tasks independently (such as farming or hunting game).
While exploration of the kingdom can lead to finding more coins and potential resources, the areas away from the main kingdom center can become more dangerous for both the player-character and any followers; further, the game has a day-and-night cycle in which more harmful creatures roam the kingdom at night, with subsequent nights becoming more and more dangerous. Some nights have blood moons, which cause much larger hordes of creatures to appear. These creatures will work to destroy resources that the player has created, steal gold from the player-character and, if the player-character has no gold, steal their crown, which ends the game and forces the player to restart. The player is thus encouraged to balance the construction of defenses, and of guards to man them, against the need to expand and improve the kingdom. There is an ultimate goal to achieve victory, though the player must discover it for themselves.
Development
Kingdom was developed by the two-man team of Thomas van den Berg and Marco Bancale, who go by the aliases noio and Licorice, respectively. The game is an expanded, standalone version of a Flash game by noio. The development of the game was supported by Raw Fury, a Sweden-based publisher launched by former members of Paradox Interactive and EA DICE. In May 2019, Raw Fury acquired the rights to the series from van den Berg, allowing the publisher to continue development.
In mid-2016, Raw Fury announced Kingdom: New Lands, an expansion to the original game. It added new content addressing some of the repetitive aspects critics found in the game's original release, such as new lands to be explored and a seasonal climate system that affects gameplay. The expansion was released on 9 August 2016, with owners of the original game able to update for free.
Within one day of its Windows, OS X, and Linux release, Raw Fury announced that it had sold enough copies to pay for the game's development. As a result, the company affirmed that it would be releasing the title for the Xbox One, and that all future improvements and expansions to the game would remain free to players on the aforementioned platforms. The team also later announced plans for a port to the Nintendo Switch. Mobile versions of the game, including the "New Lands" additions, for iOS and NVIDIA Shield devices were released circa March 2017.
Reception
Kingdom has received generally favorable reviews. Dan Stapleton of IGN gave the title a 7.7 out of 10, and felt the lack of instruction combined with the tension of the various random attacks created a great experience. Robert Purchese of Eurogamer gave the game a "Recommended" rating, believing it achieved the right balance of being fair to the player while the player learns the game's mechanics. James Davenport of PC Gamer, giving the game a 70 out of 100, was more critical of the lack of instruction, noting that a half-hour's worth of gameplay investment could be wiped away because the player was unaware of how certain mechanics work, and that the game would test a player's patience. Reviewers praised the game's retro pixel art look and its chiptune soundtrack, which created the appropriate atmosphere for the title and provided visual and audible cues to in-game events for the player to pick up on.
Reviewers commented that once the player understood the mechanics, the end-game and subsequent replays could become tedious. Stapleton found that the lack of information or means to manage the kingdom at the end-game made it more frustrating than difficult, and that once he had found a strategy to deal with the attacks, the game became too easy. Davenport commented that while the late game can be exciting on subsequent playthroughs, the early game of setting up the initial kingdom can become monotonous.
Kingdom was nominated for the Excellence in Design award for the 2016 Independent Games Festival.
The Kingdom games had sold over 4 million copies by May 2019.
References
External links
2015 video games
Android (operating system) games
IOS games
Linux games
MacOS games
Nintendo Switch games
PlayStation 4 games
Strategy video games
Business simulation games
Retro-style video games
Video games developed in Iceland
Video games developed in the Netherlands
Windows games
Xbox One games
Video games using procedural generation
Single-player video games |
36551045 | https://en.wikipedia.org/wiki/Amavis | Amavis | Amavis is an open-source content filter for electronic mail, implementing mail message transfer, decoding, some processing and checking, and interfacing with external content filters to provide protection against spam and viruses and other malware. It can be considered an interface between a mailer (MTA, Mail Transfer Agent) and one or more content filters.
Amavis can be used to:
detect viruses, spam, banned content types or syntax errors in mail messages
block, tag, redirect (using sub-addressing), or forward mail depending on its content, origin or size
quarantine (and release), or archive mail messages to files, to mailboxes, or to a relational database
sanitize passed messages using an external sanitizer
generate DKIM signatures
verify DKIM signatures and provide DKIM-based whitelisting
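The DKIM support listed above ultimately rests on computing a body hash as defined in RFC 6376. A minimal sketch of the "simple" body canonicalization and the bh= tag value (this is an illustration in Python, not Amavis's actual Perl code):

```python
import base64
import hashlib

def simple_body_canon(body: bytes) -> bytes:
    """RFC 6376 'simple' body canonicalization: reduce any trailing
    empty lines to a single CRLF (an empty body becomes a lone CRLF)."""
    while body.endswith(b"\r\n\r\n"):
        body = body[:-2]
    if not body.endswith(b"\r\n"):
        body += b"\r\n"
    return body

def body_hash(body: bytes) -> str:
    """The bh= tag value: base64 of SHA-256 over the canonicalized body."""
    digest = hashlib.sha256(simple_body_canon(body)).digest()
    return base64.b64encode(digest).decode("ascii")
```

A verifier recomputes this hash over the received body and compares it to the bh= value in the DKIM-Signature header field before checking the signature itself.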
Notable features:
provides SNMP statistics and status monitoring using an extensive MIB with more than 300 variables
provides structured event log in JSON format
IPv6 protocol is supported in interfacing, and IPv6 address forms in mail header section
properly honors per-recipient settings even in multi-recipient messages, while scanning a message only once.
supports international email (RFC 6530, SMTPUTF8, EAI, IDN)
A common mail filtering installation with Amavis consists of a Postfix as an MTA, SpamAssassin as a spam classifier, and ClamAV as an anti-virus protection, all running under a Unix-like operating system. Many other virus scanners (about 30) and some other spam scanners (CRM114, DSPAM, Bogofilter) are supported, too, as well as some other MTAs.
Interfacing topology
Three topologies for interfacing with an MTA are supported. The amavisd process can be sandwiched between two instances of an MTA, yielding a classical after-queue mail filtering setup; amavisd can be used as an SMTP proxy filter in a before-queue filtering setup; or the amavisd process can be consulted to provide mail classification but not to forward a mail message by itself, in which case the consulting client remains in charge of mail forwarding. This last approach is used in a Milter setup (with some limitations), or with a historical client program, amavisd-submit.
Since version 2.7.0 a before-queue setup is preferred, as it allows for a mail message transfer to be rejected during an SMTP session with a sending client. In an after-queue setup filtering takes place after a mail message has already been received and enqueued by an MTA, in which case a mail filter can no longer reject a message, but can only deliver it (possibly tagged), or discard it, or generate a non-delivery notification, which can cause unwanted backscatter in case of bouncing a message with a fake sender address.
A disadvantage of a before-queue setup is that it requires resources (CPU, memory) proportional to a current (peak) mail transfer rate, unlike an after-queue setup, where some delay is acceptable and resource usage corresponds to average mail transfer rate. With introduction of an option smtpd_proxy_options=speed_adjust in Postfix 2.7.0 the resource requirements for a before-queue content filter have been much reduced.
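The two Postfix hooks discussed above can be sketched as configuration fragments. The host and port values shown are the conventional ones for an amavisd installation and may differ per site; a real after-queue setup also needs a re-injection listener (commonly on port 10025) defined in master.cf:

```
# main.cf -- after-queue filtering: Postfix enqueues the message first,
# then hands it to amavisd and receives it back on a re-injection port
content_filter = smtp-amavis:[127.0.0.1]:10024

# main.cf -- before-queue alternative: amavisd acts as an SMTP proxy
# during the sending client's session, so mail can still be rejected;
# speed_adjust (Postfix >= 2.7.0) reduces the filter's resource needs
#smtpd_proxy_filter = 127.0.0.1:10024
#smtpd_proxy_options = speed_adjust
```

Only one of the two mechanisms is used at a time; the commented lines show the before-queue variant.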
In some countries the legislation does not permit mail filtering to discard a mail message once it has been accepted by an MTA, so this rules out an after-queue filtering setup with discarding or quarantining of messages, but leaves a possibility of delivering (possibly tagged) messages, or rejecting them in a before-queue setup (SMTP proxy or milter).
Interfacing protocols
Amavis can receive mail messages from an MTA over one or more sockets of the protocol families PF_INET (IPv4), PF_INET6 (IPv6) or PF_LOCAL (Unix domain socket), via the SMTP or LMTP protocols, or via a simple private protocol, AM.PDP, which can be used with a helper program such as amavisd-milter to interface with milters. On the output side the SMTP or LMTP protocols can be used to pass a message to a back-end MTA instance or to an LDA, or a message can be passed to a spawned process over a Unix pipe. When SMTP or LMTP is used, a session can optionally be encrypted using the STARTTLS (RFC 3207) extension to the protocol. SMTP command pipelining (RFC 2920) is supported in both client and server code.
Interfacing with SpamAssassin
When spam scanning is enabled, the amavisd daemon process is conceptually very similar to the spamd process of the SpamAssassin project. In both cases forked child processes call SpamAssassin Perl modules directly, hence their performance is similar.
The main difference is in protocols used: Amavis typically speaks a standard SMTP protocol to an MTA, while in the spamc/spamd case an MTA typically spawns a spamc program passing a message to it over a Unix pipe, then the spamc process transfers the message to a spamd daemon using a private protocol, and spamd then calls SpamAssassin Perl modules.
Design priorities
Design priorities of amavisd-new (from here on just called Amavis) are: reliability, security, adherence to standards, performance, and functionality.
Reliability
With the intention that no mail message can be lost due to unexpected events like I/O failures, resource depletion and unexpected program terminations, the amavisd program meticulously checks the completion status of every system call and I/O operation. Unexpected events are logged if at all possible, and handled with several layers of event handling. Amavis never takes responsibility for a mail message's delivery away from an MTA: the final success status is reported to an MTA only after the message has been passed on to the back-end MTA instance and its reception confirmed. In case of any fatal failure during processing or transferring of a message, the message being processed simply stays in a queue of the front-end MTA instance, to be re-tried later. This approach also covers potential unexpected host failures and crashes of the amavisd process or one of its components.
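The handoff principle described above can be reduced to a toy sketch (function names invented for illustration): success is acknowledged upstream only after the back-end confirms reception, so any failure leaves the message queued at the front end:

```python
def filter_message(message: str, deliver_to_backend) -> str:
    """Report 'ok' to the front-end MTA only after the back-end has
    confirmed reception; on any failure the message stays queued
    upstream and will be retried, so nothing is lost or acknowledged
    prematurely."""
    try:
        checked = message            # content checks would happen here
        deliver_to_backend(checked)  # may raise on I/O or protocol failure
    except Exception:
        return "tempfail"  # front-end keeps the message in its queue
    return "ok"            # only now does the front-end dequeue it
```

The same contract holds across process crashes: an unacknowledged message simply remains in the front-end queue.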
The use of program resources like memory size, file descriptors, disk usage and creation of subprocesses is controlled. Large mail messages are not kept in memory, so the available memory size does not impose a limit on the size of mail messages that can be processed, and memory resources are not wasted unnecessarily.
Security
A great deal of attention is given to security aspects, required by handling potentially malicious, nonstandard or just garbled data in mail messages coming from untrusted sources.
The process which handles mail messages runs with reduced privileges under a dedicated user ID. Optionally it can run chroot-ed. The risk of buffer overflows and memory allocation bugs is largely avoided by implementing all protocol handling and mail processing in Perl, which handles dynamic memory management transparently. Care is taken that the content of processed messages does not inadvertently propagate to the system. Perl provides an additional security safety net with its marking of tainted data originating from the wild, and Amavis is careful to put this Perl feature to good use by avoiding automatic untainting of data (use re "taint") and only untainting it explicitly at strategic points, late in a data flow.
Amavis can use several external programs to enhance its functionality. These are de-archivers, de-compressors, virus scanners and spam scanners. As these programs are often implemented in languages like C or C++, there is a potential risk that a mail message passed to one of these programs can cause its failure or even open a security hole. The risk is limited by running these programs as an unprivileged user ID, and possibly chroot-ed. Nevertheless, external programs like unmaintained de-archivers should be avoided. The use of these external programs is configurable, and they can be disabled selectively or as a group (like all decoders or all virus scanners).
Performance
Despite being implemented in an interpreted programming language, Perl, Amavis itself is not slow. The good performance of the functionality implemented by Amavis itself (not counting external components) is achieved by dealing with data in large chunks (e.g. not line-by-line), by avoiding unnecessary data copying, by optimizing frequently traversed code paths, by using suitable data structures and algorithms, and by some low-level optimizations. Bottlenecks are detected during development by profiling code and by benchmarking. A detailed timing report in the log can help recognize bottlenecks in a particular installation.
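The first of these optimizations, processing data in large fixed-size chunks rather than line by line, can be sketched as follows (the buffer size is arbitrary):

```python
import io

def copy_in_chunks(src, dst, bufsize: int = 65536) -> int:
    """Stream data from src to dst in large fixed-size chunks instead
    of line by line, keeping memory use bounded regardless of the
    size of the message being processed."""
    total = 0
    while True:
        chunk = src.read(bufsize)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total
```

Reading 64 KiB at a time amortizes per-call overhead over many lines, while never holding more than one buffer in memory.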
Certain external modules or programs like SpamAssassin or some command-line virus scanners can be very slow; when they are used they account for the vast majority of elapsed time and processing resources, making the resources used by Amavis itself proportionally quite small.
Components like external mail decoders, virus scanners and spam scanners can each be selectively disabled if they are not needed. What remains is functionality implemented by Amavis itself, like transferring mail message from and to an MTA using an SMTP or LMTP protocol, checking mail header section validity, checking for banned mail content types, verifying and generating DKIM signatures.
As a consequence, mail processing tasks like DKIM signing and verification (with other mail checking disabled) can be exceptionally fast and can rival implementations in compiled languages. Even full checks using a fast virus scanner but with spam scanning disabled can be surprisingly fast.
Adherence to standards
Implementation of protocols and message structures closely follows a set of applicable standards such as RFC 5322, RFC 5321, RFC 2033, RFC 3207, RFC 2045, RFC 2046, RFC 2047, RFC 3461, RFC 3462, RFC 3463, RFC 3464, RFC 4155, RFC 5965, RFC 6376, RFC 5451, RFC 6008, and RFC 4291. In several cases some functionality was re-implemented in the Amavis code even though a public (CPAN) Perl module exists, but lacks attention to detail in following a standard or lacks sufficient checking and handling of errors.
License
Amavis is licensed under a GPLv2 license. This applies to the current code, as well as to historical branches. An exception to this are some of the supporting programs (like monitoring and statistics reporting), which are covered by a New BSD License.
The project
The project started in 1997 as a Unix shell script to detect and block e-mail messages containing a virus. It was intended to block viruses at the MTA (mail transfer agent) or LDA (local delivery) stage, running on a Unix-like platform, complementing other virus protection mechanisms running on end-user personal computers.
Next the tool was re-implemented as a Perl program, which later evolved into a daemonized process. A dozen developers took turns during the first five years of the project, developing several variants while keeping a common goal, the project name and some of the development infrastructure.
Since December 2008 (until 2018-10-09) the only active branch was officially amavisd-new, which was being developed and maintained by Mark Martinec since March 2002. This was agreed between the developers at the time in a private correspondence: Christian Bricart, Lars Hecking, Hilko Bengen, Rainer Link and Mark Martinec. The project name Amavis is largely interchangeable with the name of the amavisd-new branch.
Much functionality has been added through the years, like adding protection against spam and other unwanted content, besides the original virus protection. The focus is kept on reliability, security, adherence to standards and performance.
The domain amavis.org, in use by the project, was registered in 1998 by Christian Bricart, one of the early developers, who still maintains the domain name registration. The domain is now entirely dedicated to the only active branch. The project mailing list was moved from SourceForge to amavis.org in March 2011, and is hosted by Ralf Hildebrandt and Patrick Ben Koetter. The project web page and the main distribution site were located at the Jožef Stefan Institute, Ljubljana, Slovenia (until the handover in 2018), where most of the development took place between 2002 and 2018.
Change of project leader
On 9 October 2018 Mark Martinec announced on the general support and discussion mailing list his retirement from the project, and that Patrick Ben Koetter would continue as the new project leader.
Patrick subsequently announced the migration of the source code to a public GitLab repository and his plans for the next steps in the project's development.
Branches and the project name
Through the history of the project the name of the project or its branches varied somewhat. Initially the spelling of the project name was AMaViS (A Mail Virus Scanner), introduced by Christian Bricart. With a rewrite to Perl the name of the program was Amavis-perl. Daemonized versions were initially distributed under a name amavisd-snapshot and then as amavisd. A modular rewrite by Hilko Bengen was called Amavis-ng.
In March 2002 the amavisd-new branch was introduced by Mark Martinec, initially as a patch against amavisd-snapshot-20020300. This later evolved into a self-contained project, which is now the only surviving and actively maintained branch. Nowadays a project name is preferably spelled Amavis (while the name of the program itself is amavisd). The name Amavis is now mostly interchangeable with amavisd-new.
History of the project
shell program
1997 (original code by Mogens Kjær - Carlsberg Laboratory, modified by Jürgen Quade) initial, not released officially
1998-01-17 AMaViS 0.1 (Christian Bricart) AMaViS, first official release
1998-01-28 AMaViS 0.1.1
1998-12-08 AMaViS 0.2.0-pre1
1999-02-25 AMaViS 0.2.0-pre2
1999-03-29 AMaViS 0.2.0-pre3
1999-03-31 AMaViS 0.2.0-pre4
1999-07-19 AMaViS 0.2.0-pre5
1999-07-20 AMaViS 0.2.0-pre6
2000-10-31 AMaViS 0.2.1 (Christian Bricart, Rainer Link, Chris Mason)
Perl program
2000-01 Amavis-perl (Chris Mason)
2000-08 Amavis-perl-8
2000-12 Amavis-perl-10
2001-04 Amavis-perl-11 (split to amavisd)
2003-03-07 Amavis-0.3.12 (Lars Hecking)
Perl daemon: amavisd
2001-01 daemonization (Geoff Winkless)
2001-04 amavisd-snapshot-20010407 (Lars Hecking)
2001-07 amavisd-snapshot-20010714
2002-03 amavisd-snapshot-20020300 (split to amavisd-new)
2003-03-03 amavisd-0.1
Perl, modular re-design
(Hilko Bengen)
2002-03 amavis-ng-0.1
2003-03 amavis-ng-0.1.6.2
amavisd-new
(Mark Martinec)
2002-03-30 amavisd-new, pre-forked, Net::Server
2002-05-17
2002-06-30 packages, SQL lookups
2002-11-16 integrated - one file
2002-12-27
2003-03-14 LDAP lookups
2003-06-16
2003-08-25 p5
2003-11-10 p6 @*_maps
2004-01-05 p7
2004-03-09 p8
2004-04-02 p9
2004-06-29 p10
2004-07-01 2.0 policy banks, IPv6 address formats
2004-08-15 2.1.0 amavisd-nanny monitoring utility
2004-09-06 2.1.2
2004-11-02 2.2.0
2004-12-22 2.2.1
2005-04-24 2.3.0 @decoders, per-recipient banning rules
2005-05-09 2.3.1
2005-06-29 2.3.2
2005-08-22 2.3.3
2006-04-02 2.4.0 DSN in SMTP, %*_by_ccat
2006-05-08 2.4.1
2006-06-27 2.4.2 pen pals, SQL logging and quarantine
2006-09-30 2.4.3
2006-11-20 2.4.4
2007-01-30 2.4.5
2007-04-23 2.5.0 blocking content categories, rewritten SMTP client
2007-05-31 2.5.1 amavisd-requeue
2007-06-27 2.5.2
2007-12-12 2.5.3
2008-03-12 2.5.4
2008-04-23 2.6.0 DKIM, bounce killer, TLS
2008-06-29 2.6.1
2008-12-12 Amavis is amavisd-new
2008-12-15 2.6.2
2009-04-22 2.6.3 support for CRM114 and DSPAM, truncation
2009-06-25 2.6.4 monitoring over SNMP
2010-04-25 2.7.0-pre4
2011-02-03 2.7.0-pre14
2011-03-07 mailing list moved from SourceForge to amavis.org
2011-04-07 2.6.5
2011-05-19 2.6.6
2011-06-01 2.7.0 pre-queue improvements, speedup
2012-04-29 2.7.1
2012-06-30 2.7.2
2012-06-30 2.8.0 use ØMQ instead of BDB, performance optimizations
2013-04-27 2.8.1-rc1
2013-06-28 2.8.1 can use Redis for pen pals storage
2013-09-04 2.8.2-rc1 (2.8.2 not released)
2014-05-09 2.9.0 structured log in JSON format, IP address auto-reputation
2014-06-27 2.9.1
2014-10-22 2.10.0 Internationalized Email (RFC 6530, SMTPUTF8, EAI, IDN)
2014-10-22 2.10.1
2016-04-26 2.11.0
2018-10-09 2.11.1 minor updates, just prior to migration to a GitLab repository
See also
List of antivirus software
SpamAssassin, a popular open source spam classifier
References
External links
Free email software
Free software programmed in Perl
Perl software
Spam filtering |
18675553 | https://en.wikipedia.org/wiki/GGH%20encryption%20scheme | GGH encryption scheme | The Goldreich–Goldwasser–Halevi (GGH) lattice-based cryptosystem is an asymmetric cryptosystem based on lattices. There is also a GGH signature scheme.
The Goldreich–Goldwasser–Halevi (GGH) cryptosystem makes use of the fact that the closest vector problem can be a hard problem. The system was published in 1997 by Oded Goldreich, Shafi Goldwasser, and Shai Halevi, and uses a trapdoor one-way function that relies on the difficulty of lattice reduction. The idea behind this trapdoor function is that, given any basis for a lattice, it is easy to generate a vector close to a lattice point, for example by taking a lattice point and adding a small error vector; but returning from this erroneous vector to the original lattice point requires a special basis.
The GGH encryption scheme was cryptanalyzed in 1999 by Phong Q. Nguyen.
Operation
GGH involves a private key and a public key.
The private key is a basis B of a lattice L with good properties (such as short, nearly orthogonal vectors) and a unimodular matrix U.
The public key is another basis of the lattice L, of the form B' = U·B.
For some chosen M, the message space consists of the vectors (m_1, ..., m_n) in the range -M < m_i < M.
Encryption
Given a message m = (m_1, ..., m_n), an error vector e, and a public key B', compute

  v = Σ m_i b'_i

In matrix notation this is

  v = m·B'

Remember that m consists of integer values and each b'_i is a lattice point, so v is also a lattice point. The ciphertext is then

  c = v + e = m·B' + e
Decryption
To decrypt the ciphertext one computes

  c·B⁻¹ = (m·B' + e)·B⁻¹ = m·U·B·B⁻¹ + e·B⁻¹ = m·U + e·B⁻¹

The Babai rounding technique is used to remove the term e·B⁻¹ as long as it is small enough. Finally one computes

  m = (m·U)·U⁻¹

to recover the message.
Example
Let L be a lattice with the basis B and its inverse B⁻¹:

  B = [ 7 0 ]      B⁻¹ = [ 1/7  0  ]
      [ 0 3 ]            [  0  1/3 ]

With

  U = [ 2 3 ]      U⁻¹ = [  5 -3 ]
      [ 3 5 ]            [ -3  2 ]

this gives

  B' = U·B = [ 14  9 ]
             [ 21 15 ]

Let the message be m = (3, -7) and the error vector e = (1, -1). Then the ciphertext is

  c = m·B' + e = (-104, -79)

To decrypt one must compute

  c·B⁻¹ = (-104/7, -79/3) ≈ (-14.86, -26.33)

This is rounded to (-15, -26) and the message is recovered with

  m = (-15, -26)·U⁻¹ = (3, -7)
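The worked example can be reproduced in a few lines of plain Python. This is a toy sketch using the same matrices, message, and error vector as above; the scheme itself is insecure and should never be used in practice:

```python
# Toy GGH encrypt/decrypt using the 2x2 example above.
# Illustration only -- the scheme is broken (Nguyen, Crypto '99).

def matvec(v, M):
    """Row vector v times matrix M (lists of lists)."""
    return [sum(v[i] * M[i][j] for i in range(len(v)))
            for j in range(len(M[0]))]

B = [[7, 0], [0, 3]]              # private basis: short, orthogonal rows
B_inv = [[1 / 7, 0], [0, 1 / 3]]
U = [[2, 3], [3, 5]]              # unimodular matrix, det(U) = 1
U_inv = [[5, -3], [-3, 2]]
B_pub = [matvec(row, B) for row in U]   # public basis B' = U*B

def encrypt(m, e):
    # c = m*B' + e
    return [vi + ei for vi, ei in zip(matvec(m, B_pub), e)]

def decrypt(c):
    # c*B^-1 = m*U + e*B^-1; Babai rounding removes the small error term
    x = [round(t) for t in matvec(c, B_inv)]
    return matvec(x, U_inv)       # m = (m*U)*U^-1

m, e = [3, -7], [1, -1]
c = encrypt(m, e)
print(c, decrypt(c))   # [-104, -79] [3, -7]
```

Decryption succeeds here because the error vector is small relative to the private basis; with a larger error, the rounding step would land on the wrong lattice point.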
Security of the scheme
In 1999, at the Crypto conference, Nguyen showed that the GGH encryption scheme has a flaw in its design. He showed that every ciphertext reveals information about the plaintext, and that the problem of decryption could be turned into a special closest vector problem that is much easier to solve than the general CVP.
References
Bibliography
Lattice-based cryptography
Public-key encryption schemes |
50326609 | https://en.wikipedia.org/wiki/Push%20Me%20Pull%20You | Push Me Pull You | Push Me Pull You is a 2016 video game by House House. It was released on PlayStation 4, Microsoft Windows, OS X, and Linux platforms.
Gameplay
In a game of Push Me Pull You, two teams of sausage-like bodies with heads at both ends face off in a circle for territorial control of a ball. The game has two- and four-player modes. There is a hidden option to swap the human heads for those of dogs.
Development
Push Me Pull You was developed by four friends from Melbourne as the indie developer House House. They have said that the game is about friendship and wrestling. The game was named for the two-headed llama pushmi-pullyu from Doctor Dolittle, and was originally planned for release in 2014. It was exhibited at Game Developers Conference 2014 and the 2015 PlayStation Experience. The game was announced for the PlayStation 4 in November 2015. Push Me Pull You was finally scheduled for release for PlayStation 4 on May 3, 2016, and for Microsoft Windows, OS X, and Linux later in the year.
Reception
According to review aggregator Metacritic, the game received "generally favorable reviews" on PlayStation 4, with a Metascore of 75/100 based on 4 reviews. Sam Machkovech of Ars Technica said that the game was the best among its contemporary wave of "couch sports" games: non-combat multiplayer with elements similar to old sports video games. He added that the game's aesthetic was reminiscent of Adult Swim. Jamin Warren of Kill Screen wrote that the game's core gameplay has a "Koonsian quality of both innocent and grotesque". Alice O'Connor of Rock, Paper, Shotgun called the game "David Cronenberg's Wrestleball". Andrew Tarantola from Engadget compared Push Me Pull You to a combination of Greco-Roman wrestling, capture the flag, Human Centipede, and soccer. Writing for Polygon, Megan Farokhmanesh and Allegra Frank praised the game as "one of the most fast-paced, unique, and entertaining additions" to the local multiplayer genre.
References
External links
2016 video games
Indie video games
Linux games
Fantasy sports video games
Windows games
MacOS games
PlayStation 4 games
Cancelled Xbox One games
Cancelled Wii U games
Video games developed in Australia |
503360 | https://en.wikipedia.org/wiki/University%20of%20Hertfordshire | University of Hertfordshire | The University of Hertfordshire (UOH) is a public university in Hertfordshire, United Kingdom. The university is based largely in Hatfield, Hertfordshire. Its antecedent institution, Hatfield Technical College, was founded in 1948 and was identified as one of 25 Colleges of Technology in the United Kingdom in 1959. In 1992, Hatfield Polytechnic was granted university status by the British government and subsequently renamed University of Hertfordshire. It is one of the post-1992 universities.
Hertfordshire is mainly based at two campuses, College Lane and de Havilland. As of 2021, it has over 25,130 students, including more than 5,200 international students who together represent 100 countries. The university is one of Hertfordshire's largest employers, with over 2,700 staff, 812 of whom are academic staff, and has a turnover of more than £235 million. The university has 9 schools: Hertfordshire Business School, Creative Arts, Education, Health and Social Work, Humanities (which oversees its CATS programme), Hertfordshire Law School, Life and Medical Sciences, Physics, Engineering and Computer Science, and Hertfordshire Higher Education Consortium. Hertfordshire is a member of University Alliance, Universities UK and the European University Association.
History
Origins
The original campus for the university was at Roe Green in Hatfield, where it was founded as a technical college with a particular focus on training engineers for the aerospace industry that was then prevalent in Hatfield. The Gape family of St Michael's Manor in St Albans owned the land at Roe Green from the late 17th century. In the 1920s they sold it to Hill, a farmer, who then sold it to Alan Butler, chairman of the de Havilland Aircraft Company, who lived at nearby Beech Farm. In 1944 he donated land at Roe Green to be used for educational purposes, and building commenced in 1948. The first principal, W.A.J. Chapman, started on 1 January 1949, and by spring 1952, 33 full-time and 66 part-time teachers had been appointed. Hatfield Technical College opened with 1,738 students in September 1952 and was officially opened in December by the Duke of Edinburgh. It was the first large technical college to be established in England after the war. Students attended the college on part-time or full-time courses.
In 1958 it was renamed Hatfield College of Technology and by 1960 offered four-year sandwich diplomas in technology. In 1961 it was designated a regional college in England and Wales by the Ministry of Education. The governors purchased a digital computer at a cost of £29,201 in 1962 so that a computer science degree could be established. The Council for National Academic Awards was formed in 1965 and Hatfield College was recognised for 13 honours degree courses.
Sir Norman Lindop became Principal of the College of Technology in 1966. A year later L.E. Haines was made Chair of Governors, but he died shortly afterwards and was replaced by F. Bramston Austin. A year after that, Bayfordbury was acquired for the college.
20th century
In 1969 Hatfield College of Technology became Hatfield Polytechnic, offering honours degree courses in engineering and technology. In 1970 an observatory was built on the Bayfordbury campus. Wall Hall and Balls Park Teacher Training Colleges merged in 1976 to become Hertfordshire College of Higher Education, and in the same year Hatfield Polytechnic took over Balls Park. By 1977 more than ten per cent of the 4,000 students came from more than forty different countries. The Students' Union Social Centre opened in 1977.
In 1982 John Illston succeeded Sir Norman Lindop as director. A sports hall was built on the Hatfield campus in 1984, by which time the number of students had risen to more than 5,000 and the number of staff to 824.
Neil Buxton became its director in 1987. The following year, Sir Ron Dearing and Buxton signed an agreement that gave the polytechnic accreditation from the Council for National Academic Awards. Hatfield was one of only 21 polytechnics, colleges and Scottish Central institutions to be accredited at the time. Hatfield was also, in that year, one of eight polytechnics accredited for research degrees. In 1989 it was given corporate status.
After John Major announced in 1991 that polytechnics were to be abolished, Hatfield Polytechnic announced its intention to apply for university status. In 1992 it became the University of Hertfordshire and Sir Brian Corby became its first Chancellor. It was the first university to run a public bus company, Uno. The Hertfordshire College of Health Care and Nursing Studies and the Barnet College of Nursing and Midwifery merged with the university in 1993.
In 1994 St Albans Cathedral was chosen to hold the university's graduation ceremonies. The same year saw the first publication of league tables, in which Hertfordshire was named the top new university. In 1995 its law school moved to St Albans. Sir Ian MacLaurin was appointed chancellor in 1996, and in 1997 the Learning Resource Centre opened.
21st century
In 2000, Olivia de Havilland, cousin of Sir Geoffrey de Havilland, visited the university to mark the inauguration of a project to build a new campus named after her cousin. The university's 50th anniversary was celebrated in 2002, by which time it had 21,695 students. In 2003 Tim Wilson succeeded Neil Buxton as vice-chancellor and the de Havilland campus opened.
Hertfordshire Sports Village also opened in 2003. In 2005 the university launched the Bedfordshire and Hertfordshire Postgraduate Medical School and School of Pharmacy to enhance medical education, training and research in the region. In 2006 the university opened its School of Film, Music and Media. The university opened the MacLaurin building in 2007, named in honour of its former chancellor Lord MacLaurin, followed by a new law building in 2011. During this period, Hertfordshire became a lead academic sponsor of Elstree University Technical College, a university technical college which opened in September 2013. Hertfordshire is also the academic sponsor of Watford University Technical College.
In 2010, Tim Wilson announced his intention to retire as vice-chancellor after more than 19 years at the university.
In 2011, Quintin McKellar replaced Tim Wilson as vice-chancellor of the university. In the same year, the Hatfield Beacon was restored and repositioned at the new Law School site. In the following year, the Kaspar project received a £180,000 donation from an international grant-making foundation, which was used to further the university's research into the use of robotics to support the social development of children with autism.
In 2015, Hertfordshire adopted a policy of naming its buildings after people or organisations with a significant local or regional impact, including Kate Bellingham, British engineer and television presenter, and Alistair Spalding, chief executive and artistic director of Sadler's Wells Theatre. All of the halls are named after influential alumni who the university feels represent the attributes of Hertfordshire graduates; in these two cases, the halls were named in recognition of Bellingham's and Spalding's attributes of intellectual depth and adaptability, and professionalism, employability and enterprise. In the same year, the University of Hertfordshire was announced as one of the first recipients of the Race Equality Charter, an initiative that recognises excellence in advancing racial equality in higher education. The charter was launched by the Equality Challenge Unit at the start of the 2015 academic year.
In 2020, the University of Hertfordshire Observatory celebrated its 50th anniversary and revealed an eight-year-long exposure photograph, breaking the record for the longest-exposure photograph. The artist, Regina Valkenborgh, was a master's student in August 2012 when she set up the pinhole camera, attached to one of the telescope domes at the observatory. The camera was then forgotten, and rediscovered in September 2020 by the observatory's principal technical officer. The photograph recorded the path of the sun across the sky over the 2,953 days it was exposed.
Organisation and administration
The University of Hertfordshire was established as an independent Higher Education Corporation in 1989 under the terms of the Education Reform Act 1988. The institution is an exempt charity. The board of governors has responsibility for running the university, while the academic board is responsible for academic quality and standards, academic policies, research and scholarship. The vice-chancellor oversees its day-to-day running. The current chancellor is Robert Gascoyne-Cecil and the current vice-chancellor is Quintin McKellar. In October 2019, the deputy vice-chancellor, Professor Ian Campbell, left to become vice-chancellor of Liverpool John Moores University.
The following people have been vice-chancellors of the university.
Neil Buxton (1987-2003)
Tim Wilson (2003–2010)
Quintin McKellar (2011–present)
The university runs on a three-term calendar in which the academic year is divided into three terms: Autumn (September–December), Spring (January–April), and Summer (April–May). Full-time undergraduate students take three to four courses each year, studying for approximately eleven weeks before each academic break. The school year typically begins in late September and ends in mid-May.
Schools
The university offers over 800 undergraduate, postgraduate, CPD, online distance learning and short courses in its 9 schools of study, within which there are around 50 academic departments and 24 research centres.
Hertfordshire Business School
Creative Arts
Education
Health and Social Work
Hertfordshire Higher Education Consortium
Hertfordshire Law School
Humanities (which oversees its CATS programme)
Life and Medical Sciences
Physics, Engineering and Computer Science
Charity
Being a Higher Education Corporation created by the 1988 Education Reform Act as amended by the 1992 Act, the University of Hertfordshire is an exempt charity as defined under the various Charities Acts.
The University of Hertfordshire has due regard to the Charity Commission's guidance on the reporting of public benefit, and particularly its supplementary guidance on the advancement of education, in accordance with the requirements of HEFCE, the Higher Education Funding Council for England, as the principal regulator of English higher education institutions under the Charities Act 2006.
The university has entered an agreement with the Office for Fair Access (OFFA) to demonstrate that access to programmes of full-time undergraduate education should not be limited on grounds of individual financial circumstances.
Affiliations and memberships
Hertfordshire is a member of Association of Commonwealth Universities which is the representative body of 535 universities from 37 Commonwealth countries. It is the world's first and oldest international university network, established in 1913. It is also a member of University Alliance, a network of British universities which was formed in 2006, adopting the name in 2007. University Alliance is a group of 'business engaged' universities that claim to drive innovation and enterprise growth through research and teaching. Its MBA programme is affiliated with Association of MBAs, the only global MBA-specific Accreditation and Membership Organisation.
Campus
The university is primarily based on two campuses, College Lane and de Havilland. It owns a BioPark facility, a science park managed by Exemplas on behalf of the university, which provides 6,000 square metres of laboratory and office space to life science and health technology businesses. As of 2014, it had 27 permanent and virtual tenants.
Its sports facilities include a pool and a climbing wall. It also has the Weston Auditorium for arts events, two art galleries, and one of the most highly regarded teaching observatories in the United Kingdom.
With over 25,130 students, including more than 5,200 international students who together represent 100 countries, Hertfordshire has a global alumni community of over 165,000.
College Lane Campus
The main site of the university remains the College Lane campus, which houses the original Hatfield Technical College building. Notable among the buildings on this campus is the university's Learning Resource Centre, a combined library and computer centre. There is also a substantial collection of halls of residence and student houses, and the University of Hertfordshire Students' Union is headquartered at the College Lane campus. The College Lane campus is also the location of Hertfordshire International College, part of the Navitas group, which provides a direct pathway to the university for international students. The Hertfordshire Intensive Care & Emergency Simulation Centre is also located at College Lane, as is a recently opened, purpose-built science building that primarily offers teaching laboratories, a range of research laboratories and a café.
de Havilland Campus
The £120-million de Havilland campus, built by Carillion, opened in September 2003. It is situated within a 15-minute walk of College Lane, on a former British Aerospace site. This campus also houses its own Learning Resource Centre, a combined library and computer centre, and Hertfordshire Sports Village, which includes a gym, swimming pool and squash courts. The large Weston Auditorium, adjacent to the Learning Resource Centre, has a capacity of 450 and hosts talks by university lecturers and guest speakers, as well as music, film and dance events. The campus also contains 11 halls of residence, named after local Hertfordshire towns and villages such as Ashwell and Welwyn. The campus is mostly themed around law and business, housing both the business school and the law school. A full-scale mock courtroom is available for use by students studying a law degree. Along with the University of Northampton, it provides a two-year accelerated law degree.
Bayfordbury Campus
A third, 50-hectare site at Bayfordbury houses the university's astronomical and atmospheric physics remote-sensing observatory, the Regional Science Learning Centre, and field stations for the biology and geography programmes.
Situated a short distance from the main campus in Hatfield, Bayfordbury Observatory is one of the largest astronomical teaching observatories in the United Kingdom. The observatory has formed part of the astronomy-related degree programmes since it opened in 1970.
The seven optical telescopes at the Bayfordbury campus are used to observe detailed images of objects in space, and the five newest telescopes can also be operated remotely. The 4.5-metre radio telescope and the 3-dish, 115-metre baseline interferometer allow a completely different view of the universe; these are connected to 21 cm line receivers to detect neutral hydrogen in the galaxy and extragalactic radio sources.
Meridian House
Home to some Schools within the Health and Human Faculty, this building is located on the edge of Hatfield town centre, off College Lane campus. Meridian House is the location of eight clinical skills laboratories for nursing and midwifery programmes of the university. Skills facilities and ambulances for paramedic training are also situated at Meridian House, aside from counselling programme and staff offices.
Gallery
University symbols
Academic dress
The University of Hertfordshire prescribes academic dress for its members. In accordance with tradition, Hertfordshire's academic dress consists of a gown, a cap and a hood. The black gown and square cap familiar to all readers of the Beano had evolved into their present form in England by the end of the Reformation. The hood, which is now the distinctive mark of a university-level qualification, is medieval in origin, and was originally functional.
Ceremonial mace
The ceremonial mace was produced in 1999 by craftsman Martyn Pugh, a Freeman of the Worshipful Company of Goldsmiths, member of the British Jewellers Association and a Founder Member of the Association of British Designer Silversmiths. Its design symbolises the university's origins, expertise and associations. Its shape is inspired by the shape of an aeroplane wing symbolising the university's origin in the aviation industry. The head of the mace is engraved with zodiac symbols representing the university's contribution to astronomy and also contains the DNA double helix representing the biological sciences and microprocessor chips representing information and communications technology.
Coat of arms
The university's coat of arms was granted in 1992. The shield is charged with an oak tree taken from the coat of arms of the former Hatfield Rural District, the constellation Perseus (containing the binary star Algol) and a representation of the letter "H" recalling the emblem of the former Hatfield Polytechnic. The crest, a Phoenix rising from an astral crown, represents the university's evolution from a technical college training apprentices for the aviation industry. The two harts supporting the shield represent the county of Hertfordshire, with the covered cups referring to A.S. Butler, who donated the land upon which the original campus was built. A scroll bears the motto Seek Knowledge Throughout Life.
University logo
The standard university logo comprises the university name and the UH symbol in a horizontal panel. There is an exclusion zone equivalent to the height of the H in the logo above, below and to the right of the logo. The university have created an endorsed version of the logo to be used where legibility is an issue with the standard logo. It comprises just the university name in a horizontal panel. Although the university brands its logo in various colours, the standard colours are black and white.
Academic profile
Reputation
The university's School of Pharmacy has been awarded full Royal Pharmaceutical Society of Great Britain accreditation. The University of Hertfordshire is recognised as one of the top twenty universities in the world to study animation.
According to the Destination of Leavers from Higher Education survey 2012–13, 93.2 per cent of its full-time, first-degree UK graduates were in work or further study within six months of graduating. Four of the university's schools achieved scores of 98 per cent: physics, astronomy and mathematics; health and social work; law; and education. The survey, conducted by the UK's Higher Education Statistics Agency (HESA), revealed in its UK Performance Indicators for Employment 2013/14 that the University of Hertfordshire had climbed 30 places in the past year and was ranked 35th out of 152 universities in the UK.
In September 2015, the Complete University Guide showed that the university had the lowest recorded 'student-relevant' crime in the East of England, the fourth year running that it had the lowest rate of recorded crime in the region. For its commitment to gender equality, the university was re-granted Athena SWAN Bronze institutional status.
The University of Hertfordshire won the Guardian University Award for Student Experience in 2015.
Rankings
The University of Hertfordshire was ranked 601–800 among world universities in the Times Higher Education World University Rankings 2019, and 101–150 in the Young University Rankings 2018. In subject-specific rankings, it was placed 301–400 in the world for Arts and Humanities in 2019, and 150–200 for European Teaching in 2018.
The Complete University Guide ranked UH courses in Food Science, Social Work, Optometry Ophthalmology & Orthoptics and Medical Technology as the top 20 in the UK in 2019.
In the THE 100 Under 50 universities 2015, a global ranking of the top 100 world universities under 50 years old, University of Hertfordshire was placed 71st.
It was awarded the Entrepreneurial University of the Year by Times Higher Education in 2010. In 2011, it was ranked 41st by The Complete University Guide among UK universities, its highest regional ranking in recent years.
In the Times Higher Education ranking of most international universities in January 2015, Hertfordshire ranked 84th in the top 100 in the world. In 2016, it was placed at 122nd in the top 200 international universities in the world, by Times Higher Education. The rankings are based on excellence across teaching, research, citations, industry income and international outlook.
In the US News Best Global Universities Ranking in 2018, Hertfordshire ranked 698th among universities in the world.
According to the Times Higher Education Student Experience Survey 2018, the University of Hertfordshire has a ranking of 69, with a score of 75.5 for overall student satisfaction.
QAA and OIA
The last Quality Assurance Agency institutional audit for the university was in March 2009. The outcome was that 'confidence can reasonably be placed in the soundness of the institution's present and likely future management of the academic standards of the awards that it offers'.
According to complaint statistics from the Office of the Independent Adjudicator (OIA), the university issued 69 completion of procedures letters in relation to student complaints in 2013. This is below the band median of 81, possibly suggesting greater student satisfaction when compared to universities of a similar size.
The OIA received 22 complaints in 2013. This is above the band median of 18.5, possibly suggesting that more students were dissatisfied with the outcome of the internal complaints procedure, compared to universities of a similar size.
The university has also never been named in an OIA annual report for a shortfall in practice, or a failure to comply with a recommendation set by the Adjudicator.
Research
The university has three research institutes: the Health and Human Sciences Research Institute; the Science and Technology Research Institute; and the Social Sciences, Arts and Humanities Research Institute. It also has an expanding research profile with key strengths in nursing, psychology, history, philosophy, physics and computer science.
HR Excellence in Research
In recognition of development activities related to research careers and the position of researchers at the university, the European Commission awarded University of Hertfordshire the right to use the HR Excellence in Research logo in spring 2010.
Research Excellence Framework
Over 55 per cent of the university's research was rated 'world leading' or 'internationally excellent' in the UK Government's 2014 Research Excellence Framework (REF), announced on 18 December 2014. Fifty-seven per cent of the university's research submissions achieved a 4- or 3-star rating, an increase of 11% compared to the results of the 2008 assessment. In the 2014 REF, the university's History submission achieved the top score for impact, indicating that all of its History impact submissions were deemed 'outstanding'.
Kaspar
Kaspar, a social robot, was designed by the University of Hertfordshire's Adaptive Systems Research Group (ASRG). The Kaspar project began in 2005, drawing upon previous research to develop a social robot for engaging autistic children in a variety of play scenarios. The aim was to research whether interacting and communicating with Kaspar would help children with autism interact and communicate more easily with people. This is important because there is mounting evidence that early intervention for children with autism may change the child's developmental trajectory. Kaspar is a research tool with programmed responses, adapted to be used by an autistic child in a safe, non-judgemental environment. The Kaspar research has shown that robots may provide a safe and predictable tool for children with autism, enabling the children to learn social interaction and communication skills and addressing specific therapeutic and educational objectives (for example, being able to engage in direct eye contact or shared eye gaze) in an enjoyable play context.
Rocket powered car
As part of the final-year Aerospace Project, students and staff from the University of Hertfordshire designed, built and tested a full-sized rocket-powered car under the mentorship of Ray Wilkinson, a senior professor in the Department of Aerospace and Mechanical Engineering. With support from the BBC's Bang Goes the Theory and host Dallas Campbell, a Vauxhall VX220 sports car was fitted with a large hybrid rocket motor designed to produce over half a tonne of thrust, and was tested at Duxford Aerodrome. The project attracted considerable attention for its success and was showcased locally to build interest in STEM.
Facilities
In 1992, it established University of Hertfordshire Press, whose first publication was a book celebrating the institution's change in status from polytechnic to university.
Art collection
The University of Hertfordshire holds over 450 artworks in its art collection. The ethos of the UH Art Collection is to present modern and contemporary art in places where people study, work and visit. This reflects the University of Hertfordshire's determination to provide not only an attractive educational setting but also one which will inform, enlighten and enhance the life of its students, staff and the local community. The UH Art Collection was established in 1952 as part of Hertfordshire County Council's commitment to the post-war programme. The collection has a diverse portfolio including photography, textiles, ceramics, sculpture and mixed media, with works by Ben Nicholson, Barbara Hepworth, Andy Goldsworthy, Alan Davie, and Diane MacLean.
Park and Ride
Hertfordshire operates a regular Park and Ride shuttle bus service, which connects 800 parking spaces at Angerland Common with its College Lane and de Havilland campus facilities. The scheme started in 2006, initially providing a 700-car facility at Angerland Common, off South Way, Hatfield, in a bid to keep cars off surrounding roads.
Since 2006, the university has planned to open a second venue, with 150 spaces, at the south-side car park at Stanborough Lakes in Welwyn Garden City.
Uno bus
Uno (formerly UniversityBus) is a bus service operated by the University of Hertfordshire, serving members of the general public, and also its own students and staff, at a discounted rate. In 1992, the University of Hertfordshire wanted to create and provide bus service to and from the university. Uno, previously known as UniversityBus, was created to provide student transport to the university from local areas; improve east-west travel across the county of Hertfordshire; and, to create new links between Hertfordshire and North London.
Student life
The main source of nightlife is the Forum, which houses three entertainment spaces, a restaurant, a café, multiple bars and on-site parking. Hertfordshire Students' Union (HSU) is the students' union of the University of Hertfordshire. The Students' Union Social Centre was opened in 1977. Hatfield Technical College's management encouraged the establishment of a Student Representative Council (SRC) in 1982, to create a sense of unity and expand the social activities of its day students. The SRC was affiliated to the National Union of Students but initially restricted itself largely to social activities. After 1988 it began to campaign on issues such as improvements to the canteen, lifting the ban on religious or political activity within the then Hatfield Polytechnic, and the creation of a formal students' union. The sectarian ban was finally lifted in 1992 and a union granted in 1995. However, the canteen continued to be an issue throughout the 2000s. The Students' Union represents all students in the university by organising campus activities and running clubs and societies, from sports to entertainment.
Trident Media Radio
Trident Media Radio (formerly known as Crush Radio, Campus Radio Hatfield, CRUSH and Crush 1278) is the student radio station, run by students of the university along with amateurs from the surrounding areas.
Crush was the first campus radio station, founded in 1960 under the name CRH (Campus Radio Hatfield). After starting as a pirate radio station, CRH became a society of the University of Hertfordshire and was renamed Crush 1278, after the 1278AM frequency on which it broadcast. As Crush became more accessible via the internet, the name was changed again to Crush Radio. In 2009 the society merged with the other media societies of the Students' Union into a single media society, though Crush kept its own website. It stopped broadcasting on 1278AM after the move in September 2009 but restarted in February 2011, and it broadcasts online via the TuneIn platform.
Sport
Rowing
The University of Hertfordshire Rowing Club is affiliated to British Rowing (boat code UHE) and Dave Bell became a British champion after winning the men's double sculls title at the 2010 British Rowing Championships.
Partner institutions
The university holds a number of formal links with top-ranking institutions from around the world to share teaching and research and facilitate staff and student exchanges.
Chulalongkorn University, Thailand
James Cook University, Australia
McGill University, Canada
Nanyang Technological University, Singapore
Stony Brook University, US
University of Oklahoma, US
Yonsei University, Korea
Aside from its international partners, the university also has a strong regional agenda and a number of partner institutions in the region: Elstree Screen Arts Academy, a university technical college located in Borehamwood; and The Watford UTC, a university technical college for the Watford area which specialises in event management and computer science.
Notable alumni
The university has notable alumni and staff in a number of disciplines. Hertfordshire has more than 5,200 international students and a global network of more than 160,000 alumni.
Arts, science and academia
Jean Bacon – Professor of Distributed Systems, Computer Laboratory, University of Cambridge
Tony Banham – Founder of the Hong Kong War Diary project
Ciarán O'Keeffe – Psychologist specialising in parapsychology and forensic psychology
Diane Maclean – Sculptor and environmental artist
Ben Mosley – Expressive artist
Sean Hedges-Quinn – British sculptor and animator
Government, politics and society
Helen Lederer – Comedian, writer and actress who emerged as part of the alternative comedy boom at the beginning of the 1980s
Abdulaziz bin Abdullah – Deputy minister of foreign affairs in Saudi Arabia
John Cryer – English Labour Party politician
Richard Howitt – Member of the European Parliament for the Labour Party for the East of England
Akif Çağatay Kılıç – Current Minister of Youth and Sports of Turkey
Darell Leiking – Former Minister of International Trade and Industry of Malaysia (MITI) and current MP in the Malaysian Parliament
Mark Oaten – British former politician who was a senior member of the Liberal Democrat Party
Fiona Onasanya – Labour Member of Parliament
Lawrie Quinn – Labour politician in England
Claire Ward – British Labour Party politician
Sarah West – First woman to be appointed to command a major warship in the Royal Navy
Prince Raj – Member of Indian Parliament
Gwen O'Mahony – Former MLA in the 39th Parliament of British Columbia
Business and finance
Chris Gubbey – Auto executive for General Motors
Martin Leach – British businessman
Luke Scheybeler – British designer and entrepreneur
Media and entertainment
Kate Bellingham – British engineer and BBC presenter
Yulia Brodskaya – Artist and illustrator known for her handmade elegant and detailed paper illustrations
Sanjeev Bhaskar – British comedian, actor and broadcaster
Matthew Buckley – British actor
Stevyn Colgan – British writer, artist and speaker
Sonia Deol – British radio and television presenter, currently at GlobalBC in Vancouver, Canada (previously BBC Asian Network)
Des de Moor – member of The Irresistible Force with Morris Gould aka Mixmaster Morris
Jaine Fenn – British science fiction author
Guvna B – Urban contemporary gospel rap artist and composer
Bob Johnson – British guitarist formerly in the electric folk band Steeleye Span
Chris Knowles – Musician and DJ; member of Hagar the Womb and the Liberator DJ collective. He DJs under the moniker Chris Liberator.
Lisa Lazarus – British model and actress
Upen Patel – British male model and film actor
Flux Pavilion – British dubstep musician (real name Josh Steele)
Sports and athletics
Ajaz Akhtar – Former British cricketer
Steve Borthwick – Former English rugby union footballer who played lock for Saracens and Bath
Noah Cato – Rugby union player
Iain Dowie – Football manager
Owen Farrell – England, Saracens rugby union player
Gavin Fisher – Former chief designer of the Williams Formula One team.
Alex Goode – Professional British rugby union player
Aaron Liffchak – Rugby union footballer
Michael Owen – Rugby union player: former Wales and British & Irish Lions captain
Sachin Patel – Former British cricketer
Tom Ryder – Rugby union player
Alex Skeel – English football coach and domestic violence survivor
Notes
References
External links
University of Hertfordshire official website
Educational institutions established in 1952
1952 establishments in England
University Alliance
Exempt charities
Education in Hertfordshire
Universities UK
Non-interference (security)
Noninterference is a strict multilevel security policy model, first described by Goguen and Meseguer in 1982 and amplified further in 1984.
Introduction
In simple terms, a computer is modeled as a machine with inputs and outputs. Inputs and outputs are classified as either low (low sensitivity, not highly classified) or high (sensitive, not to be viewed by uncleared individuals). A computer has the noninterference property if and only if any sequence of low inputs will produce the same low outputs, regardless of what the high level inputs are.
That is, if a low (uncleared) user is working on the machine, it will respond in exactly the same manner (on the low outputs) whether or not a high (cleared) user is working with sensitive data. The low user will not be able to acquire any information about the activities (if any) of the high user.
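This behavioural property can be sketched in code (a hypothetical toy model, not part of the original formulation: the memory layout and program names below are invented for illustration). A memory configuration is split into a low part, visible to uncleared users, and a high part; a deterministic program is noninterfering when any two memories that agree on their low parts produce results that also agree on their low parts.

```python
# Hypothetical sketch: a memory configuration is a dict with a "low" part
# (visible to uncleared users) and a "high" part (classified).

def low_equivalent(m1, m2):
    # Two memories are low-equivalent when their low parts agree.
    return m1["low"] == m2["low"]

def secure_program(m):
    # The low output depends only on the low input: noninterfering.
    return {"low": m["low"] * 2, "high": m["high"] + 1}

def leaky_program(m):
    # The low output depends on the high input: interference.
    return {"low": m["low"] + m["high"], "high": m["high"]}

def satisfies_noninterference(program, memories):
    # Check, over a finite set of configurations, that low-equivalent
    # inputs always yield low-equivalent outputs.
    return all(
        low_equivalent(program(m1), program(m2))
        for m1 in memories
        for m2 in memories
        if low_equivalent(m1, m2)
    )

mems = [{"low": 1, "high": h} for h in (0, 7, 42)]
print(satisfies_noninterference(secure_program, mems))  # True
print(satisfies_noninterference(leaky_program, mems))   # False
```

A low observer running `leaky_program` could infer the high value from the low output, which is exactly the information flow the policy forbids.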
Formal expression
Let $M$ be a memory configuration, and let $M_L$ and $M_H$ be the projections of the memory $M$ to the low and high parts, respectively. Let $=_L$ be the relation that compares the low parts of two memory configurations, i.e., $M =_L M'$ iff $M_L = M'_L$. Let $P(M) \rightarrow M'$ denote the execution of the program $P$ starting with memory configuration $M$ and terminating with the memory configuration $M'$.
The definition of noninterference for a deterministic program $P$ is the following:

$\forall M_1, M_2 :\; M_1 =_L M_2 \;\wedge\; P(M_1) \rightarrow M_1' \;\wedge\; P(M_2) \rightarrow M_2' \;\Rightarrow\; M_1' =_L M_2'$
Limitations
Strictness
This is a very strict policy: a computer system with covert channels may comply with, say, the Bell–LaPadula model, yet still fail to comply with noninterference. The reverse could also be true (under reasonable conditions, such as the system having labelled files), except for the "no classified information at startup" exception noted below. However, noninterference has been shown to be stronger than nondeducibility.
This strictness comes with a price. It is very difficult to make a computer system with this property. There may be only one or two commercially available products that have been verified to comply with this policy, and these would essentially be as simple as switches and one-way information filters (although these could be arranged to provide useful behaviour).
No classified information at startup
If the computer has (at time=0) any high (i.e., classified) information within it, or low users create high information subsequent to time=0 (so-called "write-up", which is allowed by many computer security policies), then the computer can legally leak all that high information to the low user and still be said to comply with the noninterference policy. The low user will not be able to learn anything about high user activities, but can learn about any high information that was created through means other than the actions of high users (von Oheimb 2004).
Computer systems that comply with the Bell–LaPadula model do not suffer from this problem since they explicitly forbid "read-up". Consequently, a computer system that complies with noninterference will not necessarily comply with the Bell–LaPadula model. Thus, the two models are incomparable: the Bell–LaPadula model is stricter regarding read-up, and the noninterference model is stricter with respect to covert channels.
No summarisation
Some legitimate multilevel security activities treat individual data records (e.g., personal details) as sensitive, but allow statistical functions of the data (e.g., the mean, the total number) to be released more widely. This cannot be achieved with a noninterference machine.
Generalizations
The noninterference property requires that the system not reveal any information about the high inputs through the outputs observable at the low level. However, one can argue that achieving noninterference is often not possible for a large class of practical systems, and moreover may not be desirable: programs need to reveal information that depends on secret inputs; for example, the output must differ when a user enters a correct credential versus an incorrect one. Shannon entropy, guessing entropy, and min-entropy are prevalent notions of quantitative information leakage that generalize noninterference.
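The min-entropy notion can be made concrete with a small sketch (the function and channel choices below are my own illustrations, not taken from the literature this article cites). For a deterministic program and a uniformly distributed secret, the min-entropy leakage works out to log2 of the number of distinct observable outputs, so a credential check that accepts or rejects a single guess leaks exactly one bit.

```python
import math

# Toy sketch: for a deterministic channel and a uniformly distributed
# secret X, min-entropy leakage = H_min(X) - H_min(X|Y), which reduces
# to log2 of the number of distinct observable outputs.

def min_entropy_leakage(channel, secrets):
    outputs = {channel(s) for s in secrets}
    return math.log2(len(outputs))

secrets = range(16)  # a 4-bit secret, uniformly distributed

# A credential check reveals only accept/reject: 2 outputs -> 1 bit leaked.
print(min_entropy_leakage(lambda s: s == 7, secrets))  # 1.0
# Echoing the secret reveals everything: 16 outputs -> 4 bits leaked.
print(min_entropy_leakage(lambda s: s, secrets))       # 4.0
# A constant output is noninterfering: 1 output -> 0 bits leaked.
print(min_entropy_leakage(lambda s: 0, secrets))       # 0.0
```

Note how the noninterfering constant channel leaks zero bits, recovering the strict property as the degenerate case of the quantitative one.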
References
Further reading
Computer security models
Whitfield Diffie
Bailey Whitfield 'Whit' Diffie (born June 5, 1944), ForMemRS, is an American cryptographer and one of the pioneers of public-key cryptography along with Martin Hellman and Ralph Merkle. Diffie and Hellman's 1976 paper New Directions in Cryptography introduced a radically new method of distributing cryptographic keys that helped solve key distribution—a fundamental problem in cryptography. Their technique became known as Diffie–Hellman key exchange. The article stimulated the almost immediate public development of a new class of encryption algorithms, the asymmetric key algorithms.
After a long career at Sun Microsystems, where he became a Sun Fellow, Diffie served for two and a half years as Vice President for Information Security and Cryptography at the Internet Corporation for Assigned Names and Numbers (2010–2012). He has also served as a visiting scholar (2009–2010) and affiliate (2010–2012) at the Freeman Spogli Institute's Center for International Security and Cooperation at Stanford University, where he is currently a consulting scholar.
Education and early life
Diffie was born in Washington, D.C., the son of Justine Louise (Whitfield), a writer and scholar, and Bailey Wallys Diffie, who taught Iberian history and culture at City College of New York. His interest in cryptography began at "age 10 when his father, a professor, brought home the entire crypto shelf of the City College Library in New York."
At Jamaica High School in Queens, New York, Diffie "performed competently" but "never did apply himself to the degree his father hoped." Although he graduated with a local diploma, he did not take the statewide Regents examinations that would have awarded him an academic diploma because he had previously secured admission to Massachusetts Institute of Technology on the basis of "stratospheric scores on standardized tests." While he received a B.S. in mathematics from the institution in 1965, he remained unengaged and seriously considered transferring to the University of California, Berkeley (which he perceived as a more hospitable academic environment) during the first two years of his undergraduate studies. At MIT, he began to program computers (in an effort to cultivate a practical skill set) while continuing to perceive the devices "as very low class... I thought of myself as a pure mathematician and was very interested in partial differential equations and topology and things like that."
Career and research
From 1965 to 1969, he remained in Greater Boston as a research assistant for the MITRE Corporation in Bedford, Massachusetts. As MITRE was a defense contractor, this position enabled Diffie (a pacifist who opposed the Vietnam War) to avoid the draft. During this period, he helped to develop MATHLAB (an early symbolic manipulation system that served as the basis for Macsyma) and other non-military applications.
In November 1969, Diffie became a research programmer at the Stanford Artificial Intelligence Laboratory, where he worked on LISP 1.6 (widely distributed to PDP-10 systems running the TOPS-10 operating system) and correctness problems while cultivating interests in cryptography and computer security under the aegis of John McCarthy.
Diffie left SAIL to pursue independent research in cryptography in May 1973. As the most current research in the field during the epoch fell under the classified oversight of the National Security Agency, Diffie "went around doing one of the things I am good at, which is digging up rare manuscripts in libraries, driving around, visiting friends at universities." He was assisted by his new girlfriend and future wife, Mary Fischer.
In the summer of 1974, Diffie and Fischer met with a friend at the Thomas J. Watson Research Center (headquarters of IBM Research) in Yorktown Heights, New York, which housed one of the only nongovernmental cryptographic research groups in the United States. While group director Alan Konheim "couldn't tell [Diffie] very much because of a secrecy order," he advised him to meet with Martin Hellman, a young electrical engineering professor at Stanford University who was also pursuing a cryptographic research program. A planned half-hour meeting between Diffie and Hellman extended over many hours as they shared ideas and information.
Hellman then hired Diffie as a grant-funded part-time research programmer for the 1975 spring term. Under his sponsorship, he also enrolled as a doctoral student in electrical engineering at Stanford in June 1975; however, Diffie was once again unable to acclimate to "homework assignments [and] the structure" and eventually dropped out after failing to complete a required physical examination: "I didn't feel like doing it, I didn't get around to it." Although it is unclear when he dropped out, Diffie remained employed in Hellman's lab as a research assistant through June 1978.
In 1975–76, Diffie and Hellman criticized the NBS proposed Data Encryption Standard, largely because its 56-bit key length was too short to prevent brute-force attack. An audio recording survives of their review of DES at Stanford in 1976 with Dennis Branstad of NBS and representatives of the National Security Agency. Their concern was well-founded: subsequent history has shown not only that NSA actively intervened with IBM and NBS to shorten the key size, but also that the short key size enabled exactly the kind of massively parallel key crackers that Hellman and Diffie sketched out. When these were ultimately built outside the classified world (EFF DES cracker), they made it clear that DES was insecure and obsolete.
From 1978 to 1991, Diffie was Manager of Secure Systems Research for Northern Telecom in Mountain View, California, where he designed the key management architecture for the PDSO security system for X.25 networks.
In 1991, he joined Sun Microsystems Laboratories in Menlo Park, California as a Distinguished Engineer, working primarily on public policy aspects of cryptography. Diffie remained with Sun, serving as its Chief Security Officer and as a Vice President until November 2009. He was also a Sun Fellow.
Diffie was a visiting professor at the Information Security Group based at Royal Holloway, University of London.
In May 2010, Diffie joined the Internet Corporation for Assigned Names and Numbers (ICANN) as Vice President for Information Security and Cryptography, a position he left in October 2012.
Diffie is a member of the technical advisory boards of BlackRidge Technology, and Cryptomathic where he collaborates with researchers such as Vincent Rijmen, Ivan Damgård and Peter Landrock.
In 2018, he joined Zhejiang University, China, as a visiting professor, where Cryptic Labs ran a two-month course.
Public key cryptography
In the early 1970s, Diffie worked with Martin Hellman to develop the fundamental ideas of dual-key, or public key, cryptography. They published their results in 1976—solving one of the fundamental problems of cryptography, key distribution—and essentially broke the monopoly that had previously existed where government entities controlled cryptographic technology and the terms on which other individuals could have access to it. "From the moment Diffie and Hellman published their findings..., the National Security Agency's crypto monopoly was effectively terminated. ... Every company, every citizen now had routine access to the sorts of cryptographic technology that not many years ago ranked alongside the atom bomb as a source of power."<ref name=nytm19940712>
{{cite news|last=Levy|first=Stephen |title=Battle of the Clipper Chip |newspaper=New York Times Magazine |date=1994-07-12 |pages=44–51, plus cover photo of Diffie |quote=Whitfield Diffie's amazing breakthrough could guarantee computer privacy. But the Government, fearing crime and terror, wants to co-opt his magic key and listen in. ... High-tech has created a huge privacy gap. But miraculously, a fix has emerged: cheap, easy-to-use-, virtually unbreakable encryption. Cryptography is the silver bullet by which we can hope to reclaim our privacy. ... a remarkable discovery made almost 20 years ago, a breakthrough that combined with the obscure field of cryptography into the mainstream of communications policy. It began with Whitfield Diffie, a young computer scientist and cryptographer. He did not work for the government. ... He had been bitten by the cryptography bug at age 10 when his father, a professor, brought home the entire crypto shelf of the City College Library in New York. ... [Diffie] was always concerned about individuals, an individual's privacy as opposed to Government secrecy. ... Diffie, now 50, is still committed to those beliefs. ... [Diffie] and Martin E. Hellman, an electrical engineering professor at Stanford University, created a crypto revolution. ... Diffie was dissatisfied with the security [on computer systems] ... in the 1960s [because] a system manager had access to all passwords. ... A perfect system would eliminate the need for a trusted third party. ... led Diffie to think about a more general problem in cryptography: key management. ... When Diffie moved to Stanford University in 1969, he foresaw the rise of home computer terminals [and pondered] how to use them to make transactions. ... in the mid-1970s, Diffie and Hellman achieved a stunning breakthrough that changed cryptography forever. They split the cryptographic key. In their system, every user has two keys, a public one and a private one, that are unique to their owner. 
Whatever is scrambled by one key can be unscrambled by the other. ... It was an amazing solution, but even more remarkable was that this split-key system solved both of Diffie's problems, the desire to shield communications from eavesdroppers and also to provide a secure electronic identification for contracts and financial transactions done by computer. It provided the identification by the use of 'digital signatures' that verify the sender much the same way that a real signature validates a check or contract. ... From the moment Diffie and Hellman published their findings in 1976, the National Security Agency's crypto monopoly was effectively terminated. ... Every company, every citizen now had routine access to the sorts of cryptographic technology that not many years ago ranked alongside the atom bomb as a source of power.'''}}</ref>
The solution has become known as Diffie–Hellman key exchange.
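The exchange can be sketched in a few lines (a toy model; the deliberately tiny prime below is an assumption chosen for readability, and real deployments use groups of 2048 bits or more):

```python
import random

# A toy Diffie–Hellman exchange over a tiny public group (p, g).
p = 23  # public prime modulus
g = 5   # public generator

a = random.randrange(1, p - 1)  # Alice's private key, never transmitted
b = random.randrange(1, p - 1)  # Bob's private key, never transmitted

A = pow(g, a, p)  # Alice's public value, sent over the open channel
B = pow(g, b, p)  # Bob's public value, sent over the open channel

# Each party raises the other's public value to its own private exponent;
# both arrive at g^(a*b) mod p without ever sending a secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees only p, g, A and B; recovering the shared secret from those would require solving the discrete logarithm problem, which is believed intractable for properly sized groups.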
Publications
Privacy on the Line with Susan Landau in 1998. An updated and expanded edition was published in 2007.
New directions in cryptography in 1976 with Martin Hellman.
Awards and honors
Together with Martin Hellman, Diffie won the 2015 Turing Award, widely considered the most prestigious award in the field of computer science. The citation for the award was: "For fundamental contributions to modern cryptography. Diffie and Hellman's groundbreaking 1976 paper, 'New Directions in Cryptography', introduced the ideas of public-key cryptography and digital signatures, which are the foundation for most regularly-used security protocols on the internet today."
Diffie received an honorary doctorate from the Swiss Federal Institute of Technology in 1992. He is also a fellow of the Marconi Foundation and visiting fellow of the Isaac Newton Institute. He has received various awards from other organisations. In July 2008, he was also awarded a Degree of Doctor of Science (Honoris Causa) by Royal Holloway, University of London.
He was also awarded the IEEE Donald G. Fink Prize Paper Award in 1981 (together with Martin E. Hellman), The Franklin Institute's Louis E. Levy Medal in 1997, a Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society in 1998, and the IEEE Richard W. Hamming Medal in 2010. In 2011, Diffie was inducted into the National Inventors Hall of Fame and named a Fellow of the Computer History Museum "for his work, with Martin Hellman and Ralph Merkle, on public key cryptography." Diffie was elected a Foreign Member of the Royal Society (ForMemRS) in 2017. Diffie was also elected a member of the National Academy of Engineering in 2017 for the invention of public key cryptography and for broader contributions to privacy.
Personal life
Diffie self-identifies as an iconoclast. He has stated that he "was always concerned about individuals, an individual's privacy as opposed to government secrecy."
References
Further reading
Steven Levy, Crypto: How the Code Rebels Beat the Government — Saving Privacy in the Digital Age, 2001.
Oral history interview with Martin Hellman Oral history interview 2004, Palo Alto, California. Charles Babbage Institute, University of Minnesota, Minneapolis. Hellman describes his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s. He also relates his subsequent work in cryptography with Steve Pohlig (the Pohlig–Hellman algorithm) and others. Hellman addresses the National Security Agency's (NSA) early efforts to contain and discourage academic work in the field, the Department of Commerce's encryption export restrictions, and key escrow (the so-called Clipper chip). He also touches on the commercialization of cryptography with RSA Data Security and VeriSign.
Wired Magazine biography of Whitfield Diffie
Crypto dream team Diffie & Hellman wins 2015 "Nobel Prize of Computing", Network World.
External links
Cranky Geeks Episode 133
Interview with Whitfield Diffie on Chaosradio Express International
Cranky Geeks Episode 71
Risking Communications Security: Potential Hazards of the Protect America Act
RSA Conference 2010 USA: The Cryptographers Panel 1/6, video with Diffie participating on the Cryptographer's Panel, April 21, 2009, Moscone Center, San Francisco
Nordsense: security advisor, 2017–present
1944 births
Living people
American cryptographers
Modern cryptographers
Public-key cryptographers
Nortel employees
Sun Microsystems people
Massachusetts Institute of Technology School of Science alumni
Stanford University School of Engineering alumni
International Association for Cryptologic Research fellows
Turing Award laureates
Foreign Members of the Royal Society
Computer security academics
Recipients of the Order of the Cross of Terra Mariana, 3rd Class
Jamaica High School (New York City) alumni
Science fiction fans
Cincom Systems
Cincom Systems, Inc., is a privately held multinational computer technology corporation founded in 1968 by Tom Nies, Tom Richley, and Claude Bogardus.
The company's best-known product today is Total (trademarked TOTAL).
Decades after Cincom's founder left IBM, IBM itself wrote that "Cincom was the original database company."
Historic significance
Cincom Systems was founded in 1968, when the product focus in the computer industry was far more on hardware than software, and mass merchandising in the industry was nonexistent. The company’s first product, Total, was the first commercial database management system that was not bundled with manufacturer hardware and proprietary software.
Thomas Nies
By the late 1960s, Tom Nies, a salesman and project manager at IBM, had noticed that software was becoming a more important component of computer systems and decided to work for a business that sold software. The only software businesses in existence at that time were a small number of service bureaus, none of which was located in Cincinnati, where Nies resided. In 1968, Nies joined Claude Bogardus and Tom Richley to found Cincom Systems, which initially only wrote programs for individual companies. Within its first year, the company realized that it was solving the same data management problems for its various clients. Nies proposed the solution of developing a core database management system that could be sold to multiple customers. Total was the result of this development effort.
On August 20, 1984, President Ronald Reagan called Cincom and Tom Nies "the epitome of entrepreneurial spirit of American business."
The Total solution
At a time when each application program "owned" the data it used, a company often had multiple copies of similar information:
"... people would get five different reports and the inventory balances would say five different things. What were our sales during April? Well, you'd get five different numbers, depending on how you total things up."
The problem was known, and CODASYL's Database Task Group Report wrote about it, as did General Electric and IBM. Cincom's TOTAL "segregated out the programming logic from the application of the database."
Despite IBM being "where the money was," there was still the problem of compatibility between large systems running OS/360 and small systems running DOS/360, so they "implemented 70 to 80 percent of the application programming logic in such a way that it insulated the user from" whichever operating system was used; some customers used both.
Thomas Nies and Cincom
From 1968 through the present, Cincom founder Thomas M. Nies has been the longest actively serving CEO in the computer industry, and Cincom Systems was described in 2001 as "a venerable software firm, included in the Smithsonian national museum along with Microsoft as a software pioneer."
Corporate history
1968 to 1969
Convinced that software was a potential profit center, rather than a drain on profits as it was then viewed by IBM management, Thomas M. Nies left IBM in late 1968 and brought along Tom Richley and Claude Bogardus. This executive trio covered sales and marketing (Nies), product development (Richley), and research and development (Bogardus). By March 1969, the company became a full-service organization by adding principals Judy Foegle Carlson (administration), George Fanady (custom systems), Doug Hughes (systems engineering), and Jan Litton (product installation).
The name Cincom was a contraction of the words "Cincinnati" and "computer."
Initially they simply wrote programs for local companies. At some point they realized that the data management aspects of many programs had enough in common to justify a product. From this effort came Total, an improvement and generalization of IBM's DBOMP.
While IBM was still in the "selling iron" business, Cincom became the first U.S. software firm to promote the concept of a database management system (DBMS). Cincom delivered the first commercial database management system that was not bundled with a computer manufacturer's hardware and proprietary software.
1970s and 1980s
Cincom introduced several new products during the 1970s, including:
ENVIRON/1 (1971), a control system for teleprocessing networks.
SOCRATES (1972), a data retrieval system for receiving reports from the TOTAL database system.
T-ASK (1975), an Interactive Query Language for Harris computers
MANTIS (1978), an application generator. It has developed enough of a following to still be the focus of attention in 2017.
Manufacturing Resource Planning System (1979), a packaged ERP field data system for manufacturers that is the ancestor of today's CONTROL system.
Starting in 1971, Cincom opened offices in Canada, England, Belgium, France, Italy, Australia, Japan,
Brazil and Hong Kong.
New products introduced in the 1980s included:
EPOCH-FMS (1980), a directory-driven financial management system.
Series 80 Data Control System (1980), an interactive online data dictionary.
TOTAL Information System (1982), a directory-driven database management system.
ULTRA (1983), an interactive database management system for DEC's VAX hardware and VMS operating system. This offering was part of a strategic move to recognize DEC, and quickly resulted in one out of five customer product purchases being for VAX systems.
PC CONTACT (1984), a fully integrated, single-step communications facility that interactively linked an IBM mainframe computer with the user's IBM personal computer.
MANAGE User Series (1984), an integrated, decision-support system that combined extensive personal computing capabilities with the power and control of the mainframe.
SUPRA for SQL (structured query language) (1989).
CASE Environment (1989), a series of integrated components that assisted users who were facing cross-platform development demand from multiple areas within their computers.
Comprehensive Planning & Control System (CPCS) (1989), a resource and project guidance system that centralized management of resources and activities.
By 1980, TOTAL product sales reached $250 million.
1990s
New products during the 1990s, included:
AD/Advantage (1991), an application development system that automated development and maintenance activities throughout all phases of the application life cycle. AD/Advantage is a component of MANTIS.
XpertRule (1993), a knowledge specification and generation system.
TOTAL FrameWork (1995), a set of object-oriented frameworks, services and integrated development environments (IDEs) for the assembly and maintenance of Smalltalk, Java, C++ and Visual Basic business applications.
Cincom Acquire (1995), an integrated selling system for companies that deliver complex products and services.
AuroraDS (1995), an enterprise-wide solution that allowed organizations to automate document creation, production, output and management in a client/server environment.
SPECTRA (1997), a system that provided customer administration and resource efficiency for telecommunications, utilities and service industries.
gOOi (1997), a solution that turns traditional server-based applications into graphical integrated desktop (client) applications.
Cincom Encompass (1998), a suite of integrated components for next-generation call centers.
Cincom Smalltalk (1999), a suite that includes VisualWorks and the ObjectStudio Enterprise development environment.
Cincom iC Solutions (1999), a technology that combines sales and marketing automation with knowledge-based support for product and service configuration.
2000 to Present
New products include:
Cincom Knowledge Builder (2001), a business-rules management system that streamlines sales and service processes by providing advice and guidance at the point of customer interaction.
Cincom TIGER (2002), a tool that integrates all data sources within an organization.
ENVIRON (2003), an enabling technology that helps manufacturers integrate their business systems, improve their business processes and eliminate waste throughout their organizations.
Cincom Synchrony (2004), a customer-experience management system for multi-channel contact centers.
Cincom Eloquence (2006), a document-composition solution that provides business-line professionals with the ability to generate dynamic-structured and free-form documents.
Cincom CPQ, configure-price-quote software that can integrate with Microsoft Dynamics, Salesforce and other CRM systems to create a complete multi-channel selling tool that simplifies sales processes and product configurations.
2007: Cincom generated over $100 million in revenue for the 21st straight year, a feat unmatched by any private software publisher; Microsoft, a public company, is the only other software publisher in the world to reach this milestone.
References and footnotes
External links
Cincom Systems Inc v. Novelis Corp – the latter's predecessor licensed from Cincom, lost license by changing company name
Computer companies of the United States
Companies based in Cincinnati
American companies established in 1968
Computer companies established in 1968
Software companies established in 1968
1968 establishments in Ohio
Proprietary software
Software companies of the United States
Privately held companies based in Ohio
Smalltalk programming language family |
39121949 | https://en.wikipedia.org/wiki/USC%20Trojans%20women%27s%20basketball | USC Trojans women's basketball | The USC Trojans women's basketball team, or the Women of Troy, is the collegiate women's basketball team that represents the University of Southern California in the Pac-12 Conference. The team rose to prominence in 1976, when scholarships became available to female basketball players; USC was the first Division I team to offer these scholarships.
History
The Women of Troy made their first appearance in the Final Four in the 1981 AIAW Tournament. Following the successful 1982 season, in which USC reached the Elite Eight of the first NCAA Tournament, the Trojans went on to win national championships in 1983 and 1984. The 1983 championship team included three All-Americans: Paula McGee, Cheryl Miller, and Rhonda Windham. That team went 31–2 in the regular season and postseason combined, edging Louisiana Tech by a mere two points, 69–67, in the final. The 1984 championship team went 29–4 and defeated the University of Tennessee by a healthy eleven points, 72–61. USC reached the national championship game again in 1986 but lost to the University of Texas, 97–81, and has not appeared in the championship game since.
In 1987 and 1994 the Trojans won the Pac-10 Championship. The Trojans began their longest postseason drought in 1998; it ended when the team reached the tournament bracket in 2005, and they did not make the postseason again until 2011. In 2006 USC opened the Galen Center, the new home of the Women of Troy. It can seat over 10,000 fans, and in 2007 it sold out for a game between the Trojans and the UCLA Bruins, the first time in history that an NCAA women's basketball game was sold out. Every year since 1986, at least one member of the Trojans team has been honored in the Pac-10 awards. To date, eleven players who played for USC have won Olympic medals.
Given USC's early and iconic development of women's basketball, the legacy was featured in an HBO documentary entitled "Women of Troy," which premiered on March 10, 2020.
Notable players
Michelle Campbell, played 1993–1997, then played for the Washington Mystics of the WNBA in 2000.
Cynthia Cooper, played 1982–1986. Cooper helped lead the team to its only national championships (1983, 1984) and in 1988 won an Olympic gold medal with the U.S. national basketball team in Seoul. She also played with the Houston Comets in the WNBA, where the team won titles in 1997, 1998, 1999, and 2000. Signed as head coach at Prairie View A&M University in 2005, then UNC Wilmington in 2010, followed by Texas Southern in 2012. She became the USC head coach for the 2013–14 season.
Jacki Gemelos, played 2009–2012. She played on various WNBA teams as well as the Greek women's national basketball team. She is currently an assistant coach for the New York Liberty.
Lisa Leslie, played 1990–1994. She set many records in points and rebounds, and in 1994, she was National Player of the Year. She got a contract with the WNBA in 1997, becoming one of the new league's first players, where she joined the Los Angeles Sparks. In 2001, she was the first WNBA player to win the regular season MVP, the All-Star Game MVP and the playoff MVP in the same season. Lisa also led the Los Angeles Sparks to two back-to-back WNBA Championships (2001, 2002). Lisa won 4 Olympic gold medals and was the first woman in the WNBA to make a slam-dunk during an official game. In 2009 she retired and is now a team owner of the Los Angeles Sparks.
Nicky McCrimmon, played 1992–1994, then for the Los Angeles Sparks in 2000, and Houston Comets in 2005.
Pamela McGee, played 1980–1984. She was a part of the NCAA championship team and earned an Olympic Gold for the United States in 1984. She also played in the WNBA.
Dr. Paula McGee, played 1980–1984. She was a part of the NCAA championship team and is currently an academic and a public theologian.
Cheryl Miller, played 1982–1986. She led the Women of Troy to two National Championships (1983, 1984) and won the NCAA tournament MVP both years. She also coached for the Women of Troy for 2 seasons (1993–1995). In her 2 seasons she had a combined 44–14 record and went to the NCAA tournament both seasons, making a Regional Final once. She then went on to coach in the WNBA for the Phoenix Mercury (1997–2000). She was inducted to the Women's Basketball Hall of Fame in 1999.
Shay Murphy, played 2003–2007. She was a member of the Phoenix Mercury in 2014, when the squad won the WNBA championship.
Tina Thompson, played 1993–1997. Thompson led USC to the NCAA tournament three times (1994, 1995, 1997) and to one Elite Eight (1994). In 1994 she was named Pac-10 Freshman of the Year and a Freshman All-American by Basketball Times. In 1997 she was selected first overall by the Houston Comets, becoming the first draft pick in WNBA history. She helped lead the Comets to four WNBA championships, in 1997, 1998, 1999 and 2000. Thompson played for the Houston Comets from 1997 to 2008, the Los Angeles Sparks from 2009 to 2011, and the Seattle Storm from 2012 to 2013.
Adrian Williams, played 1995–1999, then for the Minnesota Lynx, 2006–2007.
Head coaches
Linda Sharp (1977–1989) led the Women of Troy to two NCAA national championships and three Final Four appearances. She finished her tenure with a 271–99 record and was inducted into the Women's Basketball Hall of Fame in 2001.
Marianne Stanley (1989–1993) led the Women of Troy to the NCAA Tournament three years in a row and recruited future WNBA stars Lisa Leslie, Tina Thompson and Nicky McCrimmon. She was inducted into the Women's Basketball Hall of Fame in 2002.
Cheryl Miller (1993–1995) coached only 2 seasons for the Women of Troy. In her 2 seasons she had a combined 44–14 record and went to the NCAA tournament both seasons, making a Regional Final once. Cheryl Miller is also a former player of the Women of Troy where she led the Women of Troy to two National Championships (1983, 1984) and won the NCAA tournament MVP both years. She then went on to coach in the WNBA for the Phoenix Mercury (1997–2000). She was inducted to the Women's Basketball Hall of Fame in 1999.
Fred Williams (1995–1997) was an assistant coach prior to serving as head coach. He went on to coach in the WNBA after his final season at USC.
Chris Gobrecht (1997–2004) played for the Women of Troy from 1974 to 1976.
Mark Trakh (2004–2009) & (2017–2021) had two stints with the Women of Troy. His squads reached the NCAA tournament twice.
Michael Cooper (2009–2013) resigned as head coach for the Women of Troy. In his four seasons he compiled a record of 61–37 (.622).
Cynthia Cooper-Dyke (2013–2017) a former Women of Troy player, who helped lead the team to its only National Championships (1983, 1984) and in 1988 won an Olympic gold medal with the U.S. national basketball team in Seoul. She also played with the Houston Comets in the WNBA, where she led the team to a record four consecutive WNBA championships (1997–2000). She took the head coaching job for the USC Women of Troy for the 2013–2014 season and remained until 2017.
Lindsay Gottlieb (2021–) joined the Women of Troy after two seasons with the Cleveland Cavaliers as an assistant coach. Prior to her time in the NBA, Gottlieb was the head coach at California for 8 seasons, leading the team to 7 NCAA tournaments, including one final four appearance.
Arenas
Los Angeles Memorial Sports Arena was the Women of Troy's arena from 1977 until 2006. The Los Angeles Memorial Sports Arena was opened in 1959.
Galen Center, which is 255,000 square feet with a 45,000-square-foot pavilion, has three practice courts and offices. The seating capacity is 10,258, and there are 22 private suites. Total construction cost was an estimated $147 million. Construction of the Galen Center began in 2004, funded in part by a $50 million donation from Louis Galen, a successful banker and long-time Trojan fan. The Galen Center opened in 2006.
Roster
Year by year results
Conference tournament winners noted with #
NCAA Tournament results
Awards and achievements
Retired numbers
Career leaders
References
External links
Official Twitter: https://twitter.com/USCWBB
Fan Forum: https://www.uscbasketball.com/forum/the-lyon-center |
14068049 | https://en.wikipedia.org/wiki/OpenForum%20Europe | OpenForum Europe | OpenForum Europe (OFE) is a European not-for-profit organisation that advocates for open source software, open standards and interoperability in technology. Its key objective is to contribute to achieving an open and competitive digital ecosystem in Europe.
History
Founded in 2002, OFE seeks to encourage the use of open source software and open standards among businesses, consumers and governments.
Graham Taylor, OFE's then Chief Executive Officer, spoke at the 4th EU Ministerial eGovernment conference in Lisbon in September 2007, stressing the benefits of open source software in government.
The concerns regarding interoperability expressed by OFE found an echo with the EU's Competition Commissioner, Neelie Kroes. In support of open standards, as advocated by OFE, Kroes declared in 2008: "I know a smart business decision when I see one: choosing open standards is a very smart business decision indeed." Kroes, who later became Vice President of the European Commission responsible for the Digital Agenda, maintained her support for open standards; in her new role she provided the keynote at the OFE Annual Summit in 2010 and again in 2012.
In 2016, Sachiko Muto succeeded Graham Taylor as the Chief Executive Officer.
Activities
OFE is a registered interest group with the European Commission and the European Parliament. OFE advises European policy makers and legislators on the merits of openness in computing and provides technical analysis and explanation. OFE promotes open source software, as well as openness more generally as part of a vision to facilitate open, competitive choice for technology users.
OFE works closely with the European Commission, European Parliament, national and local governments both directly and via its national partners. It fully supports the European Commission's Digital Single Market strategy, which aims to create a flourishing digital economy in Europe by 2020.
Software patents
Open software is the primary focus of OFE. OFE participates actively in public consultations that concern the industry and often serves as an interlocutor between legislators and the wider open computing community.
OFE cooperates with the Open Source Consortium, Free Software Foundation Europe, the Linux Professional Institute (LPI) as well as Foundation for a Free Information Infrastructure (FFII) and the ODF Alliance.
OFE is actively involved in the discussion about the development of Open Document Format (ODF), an ISO standard for electronic documents.
Democratic society
Today OFE’s focus includes the impact of the regulatory choices concerning standards and platforms on society.
OpenForum Europe has helped to launch petitions to call on government bodies such as the European Parliament to use open standards whenever possible.
Support and network
OFE is supported by major technology suppliers and works closely with the European Commission and National Governments both direct and via National Associates.
OFE's corporate partners include IBM, Oracle, Red Hat, GitHub and Deloitte. Additionally, OFE has national partners from across Europe, representing SMEs.
OFE has a partnership with the Free Software Foundation Europe (FSFE) and collaborates with the Foundation for a Free Information Infrastructure (FFII), two of the leading free and open source software campaign groups in Europe. OFE also provides input for the European Committee for Interoperable Systems (ECIS).
References
External links
OpenForum Europe
Free and open-source software organizations
Information technology organizations based in Europe
Intellectual property activism
Organizations established in 2002 |
18642288 | https://en.wikipedia.org/wiki/AMD%20CodeAnalyst | AMD CodeAnalyst | AMD CodeAnalyst is a GUI-based code profiler for x86 and x86-64-based machines. CodeAnalyst has similar look and feel on both Linux and Microsoft Windows platforms. CodeAnalyst uses specific hardware profiling techniques which are designed to work with AMD processors, as well as a timer-based profiling technique which does not require specific hardware support; this allows a subset of profiling features to work on non-AMD processors, such as Intel processors.
As of March 2013, CodeAnalyst has been replaced by AMD CodeXL.
Code optimization
CodeAnalyst is built on OProfile for the Linux platform and is available as a free download. The GUI assists in various kinds of code profiling, including time-based profiling, hardware event-based profiling, instruction-based profiling and others. This produces statistics, such as the time spent in each subroutine, which can be drilled down to the source-code or instruction level. The time taken by instructions can be indicative of stalls in the pipeline during instruction execution. Optimization could be as simple as reordering instructions for maximum utilization of a data cache line, or altering/removing branches and loops so that the maximum number of execution units (load/store units, ALU, FP execution unit, etc.) are utilized in parallel.
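Timer-based (statistical) profiling of the kind described here simply samples the running program at fixed intervals and tallies where execution was caught. As an illustrative toy, unrelated to CodeAnalyst's actual implementation, the idea can be sketched in Python:

```python
import collections
import sys
import threading
import time

def sample_profiler(target, interval=0.001):
    """Run `target` in a worker thread and periodically sample its stack,
    counting which function is on top at each tick (statistical profiling)."""
    counts = collections.Counter()
    done = threading.Event()

    def run():
        try:
            target()
        finally:
            done.set()

    worker = threading.Thread(target=run)
    worker.start()
    while not done.is_set():
        # Snapshot the worker thread's current frame and record the function name.
        frame = sys._current_frames().get(worker.ident)
        if frame is not None:
            counts[frame.f_code.co_name] += 1
        time.sleep(interval)
    worker.join()
    return counts

def busy():
    # A CPU-bound loop that should dominate the samples.
    s = 0
    for i in range(3_000_000):
        s += i * i
    return s

if __name__ == "__main__":
    samples = sample_profiler(busy)
    print(samples.most_common(3))
```

Hardware event-based and instruction-based profiling refine the same idea by tying samples to processor events (cache misses, retired instructions) rather than to a wall-clock timer.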
Support for PERF was added in CodeAnalyst 3.4, allowing users to choose between OProfile and PERF as the profiling backend.
Instruction-Based Sampling
CodeAnalyst supports IBS (Instruction-Based Sampling) that was introduced in Family 10h AMD processors (Barcelona). With IBS support, CodeAnalyst can more precisely identify instructions that cause pipeline stalls and cache misses.
Open-source
The Linux version of CodeAnalyst is available under GNU General Public License 2.0.
CodeAnalyst also uses other open-source components, including the Qt framework, libdwarf, libelf, and the Binary File Descriptor library.
See also
AMD uProf
Intel VTune
AMD CodeXL
List of performance analysis tools
References
External links
https://web.archive.org/web/20120204112454/http://developer.amd.com/tools/CodeAnalyst/Pages/default.aspx
Advanced Micro Devices software
Free software programmed in C++
Free system software
Profilers
Software that uses Qt |
39878 | https://en.wikipedia.org/wiki/Domain%20name | Domain name | A domain name is an identification string that defines a realm of administrative autonomy, authority or control within the Internet. Domain names are used in various networking contexts and for application-specific naming and addressing purposes. In general, a domain name identifies a network domain, or it represents an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, a server computer hosting a website, or the web site itself or any other service communicated via the Internet. As of 2017, 330.6 million domain names had been registered.
Domain names are formed by the rules and procedures of the Domain Name System (DNS). Any name registered in the DNS is a domain name. Domain names are organized in subordinate levels (subdomains) of the DNS root domain, which is nameless. The first-level set of domain names are the top-level domains (TLDs), including the generic top-level domains (gTLDs), such as the prominent domains com, info, net, edu, and org, and the country code top-level domains (ccTLDs). Below these top-level domains in the DNS hierarchy are the second-level and third-level domain names that are typically open for reservation by end-users who wish to connect local area networks to the Internet, create other publicly accessible Internet resources or run web sites.
The registration of these domain names is usually administered by domain name registrars who sell their services to the public.
A fully qualified domain name (FQDN) is a domain name that is completely specified with all labels in the hierarchy of the DNS, having no parts omitted. Traditionally an FQDN ends in a dot (.) to denote the top of the DNS tree. Labels in the Domain Name System are case-insensitive, and may therefore be written in any desired capitalization method, but most commonly domain names are written in lowercase in technical contexts.
Purpose
Domain names serve to identify Internet resources, such as computers, networks, and services, with a text-based label that is easier to memorize than the numerical addresses used in the Internet protocols. A domain name may represent entire collections of such resources or individual instances. Individual Internet host computers use domain names as host identifiers, also called hostnames. The term hostname is also used for the leaf labels in the domain name system, usually without further subordinate domain name space. Hostnames appear as a component in Uniform Resource Locators (URLs) for Internet resources such as websites (e.g., en.wikipedia.org).
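As a quick illustration of hostnames appearing as a URL component, Python's standard library can separate the hostname out of a URL:

```python
from urllib.parse import urlsplit

# urlsplit breaks a URL into its components; the hostname is one of them.
url = "https://en.wikipedia.org/wiki/Domain_name"
parts = urlsplit(url)
print(parts.hostname)  # en.wikipedia.org
print(parts.scheme)    # https
```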
Domain names are also used as simple identification labels to indicate ownership or control of a resource. Such examples are the realm identifiers used in the Session Initiation Protocol (SIP), the Domain Keys used to verify DNS domains in e-mail systems, and in many other Uniform Resource Identifiers (URIs).
An important function of domain names is to provide easily recognizable and memorizable names to numerically addressed Internet resources. This abstraction allows any resource to be moved to a different physical location in the address topology of the network, globally or locally in an intranet. Such a move usually requires changing the IP address of a resource and the corresponding translation of this IP address to and from its domain name.
Domain names are used to establish a unique identity. Organizations can choose a domain name that corresponds to their name, helping Internet users to reach them easily.
A generic domain is a name that defines a general category, rather than a specific or personal instance, for example, the name of an industry, rather than a company name. Some examples of generic names are books.com, music.com, and travel.info. Companies have created brands based on generic names, and such generic domain names may be valuable.
Domain names are often simply referred to as domains and domain name registrants are frequently referred to as domain owners, although domain name registration with a registrar does not confer any legal ownership of the domain name, only an exclusive right of use for a particular duration of time. The use of domain names in commerce may subject them to trademark law.
History
The practice of using a simple memorable abstraction of a host's numerical address on a computer network dates back to the ARPANET era, before the advent of today's commercial Internet. In the early network, each computer on the network retrieved the hosts file (HOSTS.TXT) from a computer at SRI (now SRI International), which mapped computer hostnames to numerical addresses. The rapid growth of the network made it impossible to maintain a centrally organized hostname registry and in 1983 the Domain Name System was introduced on the ARPANET and published by the Internet Engineering Task Force as RFC 882 and RFC 883.
The following table shows the first 20 domains with the dates of their registration:
Domain name space
Today, the Internet Corporation for Assigned Names and Numbers (ICANN) manages the top-level development and architecture of the Internet domain name space. It authorizes domain name registrars, through which domain names may be registered and reassigned.
The domain name space consists of a tree of domain names. Each node in the tree holds information associated with the domain name. The tree sub-divides into zones beginning at the DNS root zone.
Domain name syntax
A domain name consists of one or more parts, technically called labels, that are conventionally concatenated, and delimited by dots, such as example.com.
The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain com.
The hierarchy of domains descends from the right to the left label in the name; each label to the left specifies a subdivision, or subdomain of the domain to the right. For example: the label example specifies a node example.com as a subdomain of the com domain, and www is a label to create www.example.com, a subdomain of example.com. Each label may contain from 1 to 63 octets. The empty label is reserved for the root node and when fully qualified is expressed as the empty label terminated by a dot. The full domain name may not exceed a total length of 253 ASCII characters in its textual representation. Thus, when using a single character per label, the limit is 127 levels: 127 characters plus 126 dots have a total length of 253. In practice, some domain registries may have shorter limits.
A hostname is a domain name that has at least one associated IP address. For example, the domain names www.example.com and example.com are also hostnames, whereas the com domain is not. However, other top-level domains, particularly country code top-level domains, may indeed have an IP address, and if so, they are also hostnames.
Hostnames impose restrictions on the characters allowed in the corresponding domain name. A valid hostname is also a valid domain name, but a valid domain name may not necessarily be valid as a hostname.
Top-level domains
When the Domain Name System was devised in the 1980s, the domain name space was divided into two main groups of domains. The country code top-level domains (ccTLD) were primarily based on the two-character territory codes of ISO-3166 country abbreviations. In addition, a group of seven generic top-level domains (gTLD) was implemented which represented a set of categories of names and multi-organizations. These were the domains gov, edu, com, mil, org, net, and int. These two types of top-level domains (TLDs) are the highest level of domain names of the Internet. Top-level domains form the DNS root zone of the hierarchical Domain Name System. Every domain name ends with a top-level domain label.
During the growth of the Internet, it became desirable to create additional generic top-level domains. As of October 2009, 21 generic top-level domains and 250 two-letter country-code top-level domains existed. In addition, the ARPA domain serves technical purposes in the infrastructure of the Domain Name System.
During the 32nd International Public ICANN Meeting in Paris in 2008, ICANN started a new process of TLD naming policy to take a "significant step forward on the introduction of new generic top-level domains." This program envisions the availability of many new or already proposed domains, as well as a new application and implementation process. Observers believed that the new rules could result in hundreds of new top-level domains being registered. In 2012, the program commenced, and received 1930 applications. By 2016, the milestone of 1,000 live gTLDs was reached.
The Internet Assigned Numbers Authority (IANA) maintains an annotated list of top-level domains in the DNS root zone database.
For special purposes, such as network testing, documentation, and other applications, IANA also reserves a set of special-use domain names. This list contains domain names such as example, local, localhost, and test. Other top-level domain names containing trade marks are registered for corporate use. Cases include brands such as BMW, Google, and Canon.
Second-level and lower level domains
Below the top-level domains in the domain name hierarchy are the second-level domain (SLD) names. These are the names directly to the left of .com, .net, and the other top-level domains. As an example, in the domain example.co.uk, co is the second-level domain.
Next are third-level domains, which are written immediately to the left of a second-level domain. There can be fourth- and fifth-level domains, and so on, with virtually no limitation. An example of an operational domain name with four levels of domain labels is sos.state.oh.us. Each label is separated by a full stop (dot). 'sos' is said to be a sub-domain of 'state.oh.us', and 'state' a sub-domain of 'oh.us', etc. In general, subdomains are domains subordinate to their parent domain. An example of very deep levels of subdomain ordering are the IPv6 reverse resolution DNS zones, e.g., 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa, which is the reverse DNS resolution domain name for the IP address of a loopback interface, or the localhost name.
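Such reverse-resolution names need not be written out by hand; Python's ipaddress module, for instance, derives them directly from an address:

```python
import ipaddress

# The reverse-resolution (PTR) name for the IPv6 loopback address:
# every nibble of the address becomes one single-character label.
print(ipaddress.ip_address("::1").reverse_pointer)

# The IPv4 equivalent uses the in-addr.arpa zone with the octets reversed.
print(ipaddress.ip_address("127.0.0.1").reverse_pointer)  # 1.0.0.127.in-addr.arpa
```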
Second-level (or lower-level, depending on the established parent hierarchy) domain names are often created based on the name of a company (e.g., bbc.co.uk), product or service (e.g. hotmail.com). Below these levels, the next domain name component has been used to designate a particular host server. Therefore, ftp.example.com might be an FTP server, www.example.com would be a World Wide Web server, and mail.example.com could be an email server, each intended to perform only the implied function. Modern technology allows multiple physical servers with either different (cf. load balancing) or even identical addresses (cf. anycast) to serve a single hostname or domain name, or multiple domain names to be served by a single computer. The latter is very popular in Web hosting service centers, where service providers host the websites of many organizations on just a few servers.
The hierarchical DNS labels or components of domain names are separated in a fully qualified name by the full stop (dot, .).
Internationalized domain names
The character set allowed in the Domain Name System is based on ASCII and does not allow the representation of names and words of many languages in their native scripts or alphabets. ICANN approved the Internationalized domain name (IDNA) system, which maps Unicode strings used in application user interfaces into the valid DNS character set by an encoding called Punycode. For example, københavn.eu is mapped to xn--kbenhavn-54a.eu. Many registries have adopted IDNA.
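For instance, Python ships a built-in "idna" codec (implementing the older IDNA 2003 rules; the third-party idna package implements IDNA 2008) that performs this Punycode mapping:

```python
# Encode a Unicode domain name into its ASCII-compatible (Punycode) form
# and decode it back, using Python's built-in IDNA 2003 codec.
ascii_form = "københavn.eu".encode("idna")
print(ascii_form)                  # b'xn--kbenhavn-54a.eu'
print(ascii_form.decode("idna"))   # københavn.eu
```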
Domain name registration
History
The first commercial Internet domain name, in the TLD com, was registered on 15 March 1985 in the name symbolics.com by Symbolics Inc., a computer systems firm in Cambridge, Massachusetts.
By 1992, fewer than 15,000 com domains had been registered.
In the first quarter of 2015, 294 million domain names had been registered. A large fraction of them are in the com TLD, which as of December 21, 2014, had 115.6 million domain names, including 11.9 million online business and e-commerce sites, 4.3 million entertainment sites, 3.1 million finance related sites, and 1.8 million sports sites. As of July 2012 the com TLD had more registrations than all of the ccTLDs combined.
Administration
The right to use a domain name is delegated by domain name registrars, which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization operating a registry. A registry is responsible for maintaining the database of names registered within the TLD it administers. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the WHOIS protocol.
Registries and registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often, this transaction is termed a sale or lease of the domain name, and the registrant may sometimes be called an "owner", but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as "registrants" or as "domain holders".
ICANN publishes the complete list of TLD registries and domain name registrars. Registrant information associated with domain names is maintained in an online database accessible with the WHOIS protocol. For most of the 250 country code top-level domains (ccTLDs), the domain registries maintain the WHOIS (Registrant, name servers, expiration dates, etc.) information.
Some domain name registries, often called network information centers (NIC), also function as registrars to end-users. The major generic top-level domain registries, such as for the com, net, org, info domains and others, use a registry-registrar model consisting of hundreds of domain name registrars (see lists at ICANN or VeriSign). In this method of management, the registry only manages the domain name database and the relationship with the registrars. The registrants (users of a domain name) are customers of the registrar, in some cases through additional layers of resellers.
A few alternative DNS root providers also try to compete with or complement ICANN's role in domain name administration. However, most of them have failed to receive wide recognition, so domain names offered by those alternative roots cannot be used universally on most other Internet-connected machines without additional, dedicated configuration.
Technical requirements and process
In the process of registering a domain name and maintaining authority over the new name space created, registrars use several key pieces of information connected with a domain:
Administrative contact. A registrant usually designates an administrative contact to manage the domain name. The administrative contact usually has the highest level of control over a domain. Management functions delegated to the administrative contact may include management of all business information, such as the name of record, postal address, and contact information of the official registrant of the domain, and the obligation to conform to the requirements of the domain registry in order to retain the right to use the domain name. Furthermore, the administrative contact designates the additional contacts for technical and billing functions.
Technical contact. The technical contact manages the name servers of a domain name. The functions of a technical contact include assuring conformance of the configurations of the domain name with the requirements of the domain registry, maintaining the domain zone records, and providing continuous functionality of the name servers (which keeps the domain name accessible).
Billing contact. The party responsible for receiving billing invoices from the domain name registrar and paying applicable fees.
Name servers. Most registrars provide two or more name servers as part of the registration service. However, a registrant may specify its own authoritative name servers to host a domain's resource records. The registrar's policies govern the number of servers and the type of server information required. Some providers require a hostname and the corresponding IP address or just the hostname, which must either be resolvable in the new domain or exist elsewhere. Based on traditional requirements (RFC 1034), typically a minimum of two servers is required.
A domain name consists of one or more labels, each of which is formed from the set of ASCII letters, digits, and hyphens (a-z, A-Z, 0–9, -), but not starting or ending with a hyphen. The labels are case-insensitive; for example, 'label' is equivalent to 'Label' or 'LABEL'. In the textual representation of a domain name, the labels are separated by a full stop (period).
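These label rules can be checked with a short regular expression. The sketch below is illustrative rather than a complete validator; the 63-character cap per label comes from the DNS specifications (RFC 1035), not from the rules stated above:

```python
import re

# One LDH ("letters, digits, hyphen") label: starts and ends with an
# alphanumeric character, hyphens only in the interior, 1-63 characters.
LABEL = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_valid_label(label: str) -> bool:
    return bool(LABEL.match(label))

print(is_valid_label("example"))   # True
print(is_valid_label("-bad"))      # False (leading hyphen)
print(is_valid_label("Label"))     # True (labels are case-insensitive)
```

A full domain name would be validated by splitting on the full stop and checking each label in turn.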
Business models
Domain names are often compared to real estate: they are foundations on which a website can be built, and the highest-quality domain names, like sought-after real estate, tend to carry significant value, usually due to their online brand-building potential, use in advertising, search engine optimization, and many other criteria.
A few companies have offered low-cost, below-cost or even free domain registration with a variety of models adopted to recoup the costs to the provider. These usually require that domains be hosted on their website within a framework or portal that includes advertising wrapped around the domain holder's content, revenue from which allows the provider to recoup the costs. Domain registrations were free of charge when the DNS was new. A domain holder may provide an infinite number of subdomains in their domain. For example, the owner of example.org could provide subdomains such as foo.example.org and foo.bar.example.org to interested parties.
Many desirable domain names are already assigned and users must search for other acceptable names, using Web-based search features, or WHOIS and dig operating system tools. Many registrars have implemented domain name suggestion tools which search domain name databases and suggest available alternative domain names related to keywords provided by the user.
Resale of domain names
The business of resale of registered domain names is known as the domain aftermarket. Various factors influence the perceived value or market value of a domain name. Most high-priced domain sales are carried out privately.
Domain name confusion
Intercapping is often used to emphasize the meaning of a domain name, because DNS names are not case-sensitive. Some names may be misinterpreted in certain uses of capitalization. For example: Who Represents, a database of artists and agents, chose whorepresents.com, which can be misread. In such situations, the proper meaning may be clarified by placement of hyphens when registering a domain name. For instance, Experts Exchange, a programmers' discussion site, used expertsexchange.com, but changed its domain name to experts-exchange.com.
Use in web site hosting
The domain name is a component of a uniform resource locator (URL) used to access web sites, for example:
URL: http://www.example.net/index.html
Top-level domain: net
Second-level domain: example
Hostname: www
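The same decomposition can be reproduced with Python's standard urllib.parse module; splitting the hostname into its levels is done by hand here:

```python
from urllib.parse import urlsplit

url = "http://www.example.net/index.html"
host = urlsplit(url).hostname            # 'www.example.net'
labels = host.split(".")

print("Top-level domain:", labels[-1])     # net
print("Second-level domain:", labels[-2])  # example
print("Hostname label:", labels[0])        # www
```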
A domain name may point to multiple IP addresses to provide server redundancy for the services offered, a feature that is used to manage the traffic of large, popular web sites.
Web hosting services, on the other hand, run servers that are typically assigned only one or a few addresses while serving websites for many domains, a technique referred to as virtual web hosting. Such IP address overloading requires that each request identifies the domain name being referenced, for instance by using the HTTP request header field Host:, or Server Name Indication.
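A minimal sketch of the mechanism: in name-based virtual hosting, every request sent to the shared address carries a Host header naming the desired site. The helper below builds such a request by hand (the domain names reuse the article's examples):

```python
def build_request(host: str, path: str = "/") -> bytes:
    """Build a minimal HTTP/1.1 GET request; a virtual-hosting server
    uses the Host header to select which site's content to serve."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    ).encode("ascii")

# Two requests to the same IP address, distinguished only by Host:
print(build_request("www.example.net").decode())
print(build_request("www.example.com").decode())
```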
Abuse and regulation
Critics often claim abuse of administrative power over domain names. Particularly noteworthy was the VeriSign Site Finder system which redirected all unregistered .com and .net domains to a VeriSign webpage. For example, at a public meeting with VeriSign to air technical concerns about SiteFinder, numerous people, active in the IETF and other technical bodies, explained how they were surprised by VeriSign's changing the fundamental behavior of a major component of Internet infrastructure, not having obtained the customary consensus. SiteFinder, at first, assumed every Internet query was for a website, and it monetized queries for incorrect domain names, taking the user to VeriSign's search site. Unfortunately, other applications, such as many implementations of email, treat a lack of response to a domain name query as an indication that the domain does not exist, and that the message can be treated as undeliverable. The original VeriSign implementation broke this assumption for mail, because it would always resolve an erroneous domain name to that of SiteFinder. While VeriSign later changed SiteFinder's behaviour with regard to email, there was still widespread protest about VeriSign's action being more in its financial interest than in the interest of the Internet infrastructure component for which VeriSign was the steward.
Despite widespread criticism, VeriSign only reluctantly removed Site Finder after the Internet Corporation for Assigned Names and Numbers (ICANN) threatened to revoke its contract to administer the root name servers. ICANN published the extensive set of letters exchanged, committee reports, and ICANN decisions.
There is also significant disquiet regarding the United States Government's political influence over ICANN. This was a significant issue in the attempt to create a .xxx top-level domain and sparked greater interest in alternative DNS roots that would be beyond the control of any single country.
Additionally, there are numerous accusations of domain name front running, whereby registrars, when given whois queries, automatically register the domain name for themselves. Network Solutions has been accused of this.
Truth in Domain Names Act
In the United States, the Truth in Domain Names Act of 2003, in combination with the PROTECT Act of 2003, forbids the use of a misleading domain name with the intention of attracting Internet users into visiting Internet pornography sites.
The Truth in Domain Names Act follows the more general Anticybersquatting Consumer Protection Act passed in 1999 aimed at preventing typosquatting and deceptive use of names and trademarks in domain names.
Seizures
In the early 21st century, the US Department of Justice (DOJ) pursued the seizure of domain names, based on the legal theory that domain names constitute property used to engage in criminal activity, and thus are subject to forfeiture. For example, in the seizure of the domain name of a gambling website, the DOJ referenced the applicable federal forfeiture statutes. In 2013 the US government seized Liberty Reserve under the same legal theory.
The U.S. Congress passed the Combating Online Infringement and Counterfeits Act in 2010. Consumer Electronics Association vice president Michael Petricone was worried that seizure was a blunt instrument that could harm legitimate businesses. After a joint operation on February 15, 2011, the DOJ and the Department of Homeland Security claimed to have seized ten domains of websites involved in advertising and distributing child pornography, but also mistakenly seized the domain name of a large DNS provider, temporarily replacing 84,000 websites with seizure notices.
In the United Kingdom, the Police Intellectual Property Crime Unit has been attempting to seize domain names from registrars without court orders.
Suspensions
PIPCU and other UK law enforcement organisations make domain suspension requests to Nominet, which processes them on the basis of breach of its terms and conditions. Around 16,000 domains are suspended annually, and about 80% of the requests originate from PIPCU.
Property rights
Because of the economic value it represents, the European Court of Human Rights has ruled that the exclusive right to a domain name is protected as property under article 1 of Protocol 1 to the European Convention on Human Rights.
IDN variants
The ICANN Business Constituency (BC) has spent decades trying to make IDN variants work at the second level, and in the last several years at the top level. Domain name variants are domain names recognized in different character encodings, like a single domain presented in traditional Chinese and simplified Chinese. This is an internationalization and localization problem. Under domain name variants, the different encodings of the domain name (in simplified and traditional Chinese) would resolve to the same host.
According to John Levine, an expert on Internet related topics, "Unfortunately, variants don't work. The problem isn't putting them in the DNS, it's that once they're in the DNS, they don't work anywhere else."
Fictitious domain name
A fictitious domain name is a domain name used in a work of fiction or popular culture to refer to a domain that does not actually exist, often with invalid or unofficial top-level domains such as ".web", a usage exactly analogous to the dummy 555 telephone number prefix used in film and other media. The canonical fictitious domain name is "example.com", specifically set aside by IANA in RFC 2606 for such use, along with the .example TLD.
Domain names used in works of fiction have often been registered in the DNS, either by their creators or by cybersquatters attempting to profit from them. This phenomenon prompted NBC to purchase the domain name Hornymanatee.com after talk-show host Conan O'Brien spoke the name while ad-libbing on his show. O'Brien subsequently created a website based on the concept and used it as a running gag on the show.
Domain name spoofing
The term domain name spoofing (or simply, though less accurately, domain spoofing) is used generically to describe one or more of a class of phishing attacks that depend on falsifying or misrepresenting an Internet domain name. These are designed to persuade unsuspecting users into visiting a web site other than the one intended, or into opening an email that is not in reality from the address shown (or apparently shown). Although website and email spoofing attacks are more widely known, any service that relies on domain name resolution may be compromised.
Types
There are a number of better-known types of domain spoofing:
Typosquatting, also called "URL hijacking", a "sting site", or a "fake URL", is a form of cybersquatting, and possibly brandjacking, which relies on mistakes such as typos made by Internet users when inputting a website address into a web browser or composing an email address. Should a user accidentally enter an incorrect domain name, they may be led to any URL (including an alternative website owned by a cybersquatter).
The typosquatter's URL will usually be one of five kinds, all similar to the victim site address:
A common misspelling, or foreign language spelling, of the intended site
A misspelling based on a typographical error
A plural of a singular domain name
A different top-level domain (e.g., .com instead of .org)
An abuse of the Country Code Top-Level Domain (ccTLD) (.cm, .co, or .om instead of .com)
Internationalised domain name homograph attack. This type of attack depends on registering a domain name that is similar to the 'target' domain, differing from it only because its spelling includes one or more characters that come from a different alphabet but look the same to the naked eye. For example, the Cyrillic, Latin, and Greek alphabets each contain letters that look identical but have their own distinct binary code points. Turkish has a dotless letter i (ı) that may not be perceived as different from the dotted ASCII letter i. Most web browsers warn of 'mixed alphabet' domain names; other services, such as email applications, may not provide the same protection. Reputable top-level domain and country code domain registrars will not accept applications to register a deceptive name, but this policy cannot be presumed to be infallible.
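The distinct code points behind look-alike characters can be inspected with Python's standard unicodedata module; the Latin/Cyrillic pair below is one commonly cited homograph, chosen here for illustration:

```python
import unicodedata

# Visually near-identical glyphs, but different binary code points:
for ch in ("a", "\u0430"):  # Latin 'a' vs Cyrillic 'а'
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+0061  LATIN SMALL LETTER A
# U+0430  CYRILLIC SMALL LETTER A
```

A domain label mixing the two would look like an ordinary name while resolving somewhere entirely different.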
Risk mitigation
DMARC ("Domain-based Message Authentication, Reporting and Conformance")
(SSL certificate)
Legitimate technologies that may be subverted
See also
Domain hack
Domain hijacking
Domain name registrar
Domain name speculation
Domain name warehousing
Domain registration
Domain tasting
Geodomain
List of Internet top-level domains
Reverse domain hijacking
Reverse domain name notation
References
External links
Domain bias in web search, a research study by Microsoft
Top Level Domain Bias in Search Engine Indexing and Rankings
Icann New gTLD Program Factsheet - October 2009 (PDF)
IANA Two letter Country Code TLD
ICANN - Internet Corporation for Assigned Names and Numbers
Internic.net, public information regarding Internet domain name registration services
Internet Domain Names: Background and Policy Issues Congressional Research Service
RFC 1034, Domain Names — Concepts and Facilities, an Internet Protocol Standard
RFC 1035, Domain Names — Implementation and Specification, an Internet Protocol Standard
UDRP, Uniform Domain-Name Dispute-Resolution Policy
Special use domain names
Identifiers |
17847261 | https://en.wikipedia.org/wiki/EasyBCD | EasyBCD | EasyBCD is a program developed by NeoSmart Technologies to configure and tweak the Boot Configuration Data (BCD), a boot database first introduced in Windows Vista and used in all subsequent Windows releases. EasyBCD can be used to set up multi-boot environments for computers on which some versions of Windows, Linux, BSD and Mac OS X can be simultaneously installed; EasyBCD can also be used for adding entries to bootable tools and utilities, as well as modifying and controlling the behavior of the Windows boot menu. EasyBCD 2.3 introduced additional support for creating and managing UEFI-based Windows entries in the boot menu. As of June 20, 2011, with the release of EasyBCD 2.1, it is no longer free for use in commercial environments, which require the purchase of a paid license; however, it remains free for home and non-profit use without limitations.
Supported operating systems
EasyBCD runs on Windows and modifies the Windows Boot Configuration Data (BCD) to add support for other operating systems. Windows NT, Windows 2000, and Windows XP are supported by handing off the control of boot to either NTLDR or the EasyBCD-specific EasyLDR, which bypasses NTLDR and boots directly into the OS. MS-DOS, Windows 3.x and Windows 9x can be chainloaded via modified versions of IO.sys and the Windows 9x boot sector. Linux and BSD are loaded either by handing off control of the boot process to GRUB or LILO or by using EasyBCD's own NeoGrub module (which is based on GRUB4DOS). Mac OS X is loaded via the Darwin bootloader. Other operating systems are also supported by means of chainloading their specific loader environments.
Features
Bootloader Configuration
EasyBCD has a number of bootloader-related features that can be used to repair and configure the bootloader.
From the "Manage Bootloader" section of EasyBCD, it is possible to switch between the bootmgr bootloader (used since Windows Vista) and the NTLDR bootloader (used by legacy versions of Windows, from Windows NT to Windows XP) in the MBR from within Windows by simply clicking a button. EasyBCD also offers a feature to back up and restore the BCD (boot configuration data) configuration files for recovery and testing purposes.
In the "Diagnostics Center," it is possible to reset a corrupt BCD storage and automatically create the necessary entries for the current operating system, as well as search for and replace missing/corrupt boot files. The latter feature can be used to install the Windows Vista BCD bootloader.
EasyBCD can be used to change the boot drive, rename or change the order of any entries in the bootloader, and modify existing entries to point to a different drive.
Newer versions of EasyBCD also support creating bootable USB disks, by deploying BOOTMGR and the BCD onto a removable disk and performing the necessary actions to make the drive bootable, after which it can be loaded into EasyBCD to add and remove the various supported entry types in order to create bootable repair USB sticks.
EasyBCD also supports changing the boot partition/drive that PC boots from, changing the default boot entry, re-ordering menu entries, and modifying the timeout behavior of the boot menu.
Windows
EasyBCD supports a number of different Windows entries, and can be used to install and configure the following:
MS-DOS 6.x
Windows 95-ME
Windows 2000, Windows XP and Windows Server 2003
Windows Vista and Windows Server 2008
Windows 7
Windows 8 and Windows Server 2012
Windows 10
Depending on the version of Windows being added in EasyBCD, certain other options may be available. These include enabling support for unsigned drivers on 64-bit Windows installations, booting into the various flavors of safe mode, limiting Windows to a certain amount of memory or number of CPU cores, verbose boot logging, and enabling/disabling of both PAE and DEP/NoExecute.
As of version 2.0, EasyBCD uses a new method for booting into Windows NT/2000/XP that does not use NTLDR in order to avoid a two-level boot menu (the BCD boot menu followed by the NTLDR/BOOT.INI boot menu for cases where multiple legacy NT operating systems are installed). Instead, EasyBCD uses a boot-time helper developed by NeoSmart Technologies called EasyLDR, which replaces NTLDR and bypasses boot.ini entirely, directly loading the operating system in question without showing the user a second selection menu.
Windows PE
Windows PE 2.0 through 5.1 are supported under a separate module in EasyBCD. EasyBCD can boot into two different Windows PE systems:
Compressed Windows PE WIM images
Windows PE partitions
EasyBCD supports booting into WinPE 2.0+ WIM images stored on any local partition by providing the path to the WIM file. It automatically re-configures the BCD to add support for the WIM format. It can also boot into a Windows PE filesystem extracted to the root of a mounted drive letter.
Linux
EasyBCD can boot into Linux by one of two means:
Chainloading GRUB/GRUB2/LILO/etc.
NeoGrub
The traditional chainloading method creates an image of the GRUB/LILO bootsector on the local disk and loads this image during boot-time in order to chainload the second bootloader which should already be configured to boot into Linux or BSD. EasyBCD has profiles for and officially supports the chainloading of GRUB (Legacy), GRUB2, LILO, eLILO, and Wubi (for Ubuntu).
EasyBCD also ships with NeoGrub, a customized build of GRUB4DOS, which can be configured by editing C:\NST\menu.lst with standard Legacy GRUB syntax for directly booting into the needed Linux or BSD partitions, or for chainloading another bootloader to load the OS in question.
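A minimal, hypothetical C:\NST\menu.lst sketch showing the Legacy GRUB syntax NeoGrub accepts (the disk/partition numbers and kernel paths here are assumptions for illustration, not defaults, and must be adjusted to the actual layout):

```
# C:\NST\menu.lst -- NeoGrub configuration, Legacy GRUB syntax
timeout 5
default 0

# Boot a Linux kernel directly (assumed: first partition of the second disk)
title Linux (direct boot)
root (hd1,0)
kernel /boot/vmlinuz root=/dev/sdb1 ro
initrd /boot/initrd.img

# Or hand off to another bootloader installed in a partition boot sector
title Chainload second-stage bootloader
root (hd1,0)
chainloader +1
```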
BSD
As of version 2.1.1, EasyBCD contains a module specifically tailored for booting into BSD-based operating systems which was developed in cooperation with the PC-BSD team. This module works in tandem with the BTX bootloader to support booting into BSD systems in both BIOS (MBR) and UEFI (GPT) environments, and the PC-BSD setup wizard has been developed with this capability and module of EasyBCD in mind.
Mac OS X
EasyBCD can chainload the Mac OS X Darwin bootloader in order to boot into OS X on another partition or physical disk. It doesn't require that Darwin be installed on the bootsector of the OS X partition. This facilitates multi-boot installation in OSX86 setups, and can currently be used with either MBR or EFI configurations.
Removable devices
In conjunction with EasyBCD's ability to create bootable USB drives, it also has the option of creating portable entries that can be used on the normal PC bootloader or, more practically, on bootable external media.
EasyBCD can create entries that boot into hard disk images (both VHD and raw disk image formats), ISO images, WinPE 2.0+ WIM files, floppy disk images, and BIOS extenders.
See also
Multi boot
Windows Boot Manager
References
External links
2006 software
Windows administration
Windows-only freeware |
56951669 | https://en.wikipedia.org/wiki/Moshe%20Dunie | Moshe Dunie | Moshe Dunie is an Israeli-born American executive and investor in the Greater Seattle Area best known for his executive roles at Microsoft in 1988–99, culminating as Vice President of the Windows Division. Moshe Dunie is serving on the Board of Governors of the Technion, Israel Institute of Technology. Dunie served as the president of the American Jewish Committee (AJC) Seattle in 2006-2009 and served on the American Jewish Committee National Board. In his role Dunie interacted with political and business leaders around the world, promoting human rights. At Microsoft, Moshe Dunie was responsible for the releases of Windows NT 3.1, NT 3.5, NT 3.51 and NT 4.0, partnering with Dave Cutler. Then, as VP of the Windows Operating System Division, Moshe Dunie led the teams that delivered Windows 98 and took Windows 2000 to its final beta release. Moshe Dunie was in charge of over 3,000 full-time division engineers and 1,500 contractors. He collaborated with partners such as Intel, PC OEMs, application developers and enterprise customers, initiating a Rapid Deployment Program and extensive beta testing. Microsoft Israel R&D center and engineering groups in Europe and Asia reported to Dunie as an international extension of the Windows division.
Early career
Moshe Dunie was born on November 14, 1949 in Israel. His mother was an Auschwitz survivor. He studied Electrical Engineering at the Technion – Israel Institute of Technology, focusing on both hardware and software engineering. Dunie graduated with a BSc degree in 1971. He received his MBA from Golden Gate University, graduating with a 4.0 GPA from their special Silicon Valley executive MBA program.
Israel Air Force Test Range - After the Technion Dunie served at the Israel Air Force Test Range for five years. As an officer he led technical teams focused on development and testing of advanced avionics. This included the development of Israel’s initial Remotely Piloted Vehicle (RPV), working with RPV pioneer Al Ellis.
Eljim & Astronautics - In 1977 Dunie developed submarine real time software at Eljim Ltd. After Elbit purchased Eljim, Dunie joined Astronautics Ltd. As software team leader he was responsible for the development of the real-time software for the Israeli Kfir's airborne computer and for an advanced tactical computer for the General Dynamics F-16 fighter aircraft using bit-slice technology. Dunie wrote the assembly code for the mapping and networking software.
Landis & Gyr Systems - In 1981 Dunie joined Landis & Gyr Systems in San Jose, Silicon Valley, first as a software developer and after a year as the software manager responsible for the development of a microprocessor-based Supervisory Control and Data Acquisition (SCADA) master station managing the electric grid for electric utilities. Dunie personally developed the real-time operating system, the implementation of the DDCMP communications protocol, and parts of the UI.
Microsoft
Dunie joined Microsoft Corporation in August 1988 as Project Manager for OS/2. He partnered with Dave Cutler building and leading the teams that delivered the first release of Windows NT, NT 3.1, on July 26, 1993. Cutler focused on NT architecture and Dunie on release management and quality. Both reported to Executive VP Paul Maritz. Next Dunie led the releases of Windows NT 3.5 (09/08/94), NT 3.51 (05/30/95), and NT 4.0 (07/29/96), partnering again with Cutler and reporting to Jim Allchin. Dunie was promoted to Vice President on July 26, 1996. On September 16, 1996, Dunie was given complete responsibility for all Windows NT and Windows 98 development teams. Dunie was the first Microsoft executive to be responsible for both the Windows 9x consumer-focused development team and the Windows NT enterprise-focused team. After the Windows 98 release, Dunie integrated the Win98 team into his Windows NT 5 team. Dunie led the Windows 2000 (NT 5) development team from the early design stage to code completion and final beta release in December 1998. He initiated the NetPC with Intel. Additionally, Dunie was in charge of the Microsoft Israel Development Center, delivering Microsoft Proxy Server, Windows NT Embedded, Windows Message Queue and Windows Terminal Server. Dunie left the Windows development team at the end of 1998, taking a sabbatical after delivering ten major products. In his email announcing Dunie's departure, Jim Allchin wrote "Moshe has done as much as anyone at Microsoft to build the amazing Windows asset that we have today. There are few jobs that are as challenging and complicated, in any industry, as those that Moshe has had to face and master. Above all, Moshe has led by example through his incredible dedication - to his team, to Windows, to Microsoft, to our customers".
After his sabbatical, Dunie reported to Paul Maritz, working with Windows 2000 strategic customers. Dunie retired from Microsoft on October 8, 1999, wanting to spend time with his children, Doron (born 1983) and Orlene (born 1986), before they left for university.
American Jewish Committee
Dunie joined AJC Seattle in 2002, serving for 4 years as co-chair of the Seattle Jewish Film Festival. Then Dunie was elected as the president of the American Jewish Committee (AJC) Seattle in 2006-2009 and served on the American Jewish Committee National Board. In his role Dunie collaborated with world leaders, ambassadors, community leaders and business leaders to enhance the well-being of the Jewish people and to advance human rights for all.
Ongoing - Technion & Investments
Moshe Dunie is serving on the Board of Governors of the Technion, Israel Institute of Technology. He is Chairman Emeritus of American Technion Society in Seattle.
Dunie has been an active investor in startups and established companies, primarily in high-technology. He has been a consultant for startup CEOs and executives.
References
1949 births
Living people
OS/2 people
Microsoft Windows people |
11507993 | https://en.wikipedia.org/wiki/Palm%20Foleo | Palm Foleo | The Palm Foleo was a planned subnotebook computer that was announced by mobile device manufacturer Palm Inc. on May 30, 2007, and canceled three months later. It was intended to serve as a companion for smartphones, including Palm's own Treo line. The device ran on the Linux operating system and featured 256 MB of flash memory and an immediate boot-up feature.
The Foleo featured wireless access via Bluetooth and Wi-Fi. Integrated software included an e-mail client which was to be capable of syncing with the Treo E-Mail client, the Opera web browser and the Documents To Go office suite. The client did not send and retrieve mail over the Wi-Fi connection, instead transmitting via synchronization with the companion smartphone.
The device was slated to launch in the U.S. in the third quarter of 2007 for a price expected by Palm to be $499 after an introductory $100 rebate. Palm canceled Foleo development on September 4, 2007, with Palm CEO Ed Colligan announcing that the company would return its focus to its core product of smartphones and handheld computers. Soon after the device was canceled, a branch of subnotebooks called netbooks, similar to the Foleo in size and functionality, reached the market. Had it been released, the Foleo would have been the founding device in the category. At the time, Palm was performing poorly in face of heavy competition in the smartphone market. The company's sales did not recover, and it was purchased by information technology giant Hewlett-Packard in April 2010.
Software
The Foleo was initially reported to run a modified Linux kernel. The kernel was reported as being version 2.6.14-rmk1-pxa1-intc2 ("rmk1" indicates this is the ARM architectural version, "pxa1" indicates it is of the PXA family of Intel/Marvell Technology Group XScale processors, "intc2" is possibly an IRQ handler). On August 7, 2007, Palm announced that it had chosen Wind River Systems to help it customize the standard Linux kernel to make it more suitable for this device.
The device used a custom-built widget framework called HxUI, which is based on the LiTE toolbox over the DirectFB graphics subsystem. HxUI uses XML to describe its interfaces. Bundled applications included the Opera web browser (which supported Flash and Ajax, but not Flash video), an e-mail application, a PDF viewer, and Documents To Go to handle Microsoft Word, Excel, and PowerPoint documents.
A number of companies had announced plans to release applications for this product. For example, LogMeIn planned to provide remote PC access capabilities to the Foleo, Avenuu planned to provide remote file access, and Bluefire planned to provide VPN software. On July 26, 2007, Normsoft was the first company to announce an MP3 player for the Foleo. Some executives at Palm had suggested that the fan-less CPU would probably not be able to play back video, while others had disagreed. Other companies had announced plans for games, a photo editor, and blogging tools.
Criticism
Initial reaction to the Foleo in the trade press was mixed, with reviewers such as Tim Bajarin, Gizmodo, and Slashgear giving the device positive reviews, while other analysts noted that subnotebooks had never found a large market. Leslie Fiering, a vice president at research group Gartner, stated that Palm had "created a device that's not quite pocketable, but it's not quite full function, either". Users on forums and news sites mocked the name (with a few calling it the Palm Fooleo) and criticized the apparent inability to run Palm OS applications, the lack of multimedia features, and the price. TechRadar said "If you've got a mobile that can handle email, why on earth would you want the Foleo?"
Gartner analyst Todd Kort called the Foleo "the most disappointing product I've seen in several years". He added: "To think that anyone would carry something with a 10-inch display at 2.5 pounds as an adjunct to a phone just doesn't make any sense to me." (Sydney Morning Herald, May 31, 2007).
Palm continued to tout the device as an alternative to carrying a standard laptop when traveling, as it was cheaper, smaller, lighter, and sturdier, with a longer battery life and the ability to access the internet through a smartphone when not in range of a Wi-Fi network. Despite its lack of computational power for such tasks as video playback or 3D games, a few reviewers were very positive about this possibility.
References
External links
Official site
Brighthand First Thoughts
PalmInfocenter hands on Foleo Preview
Palm, Inc.
Computer-related introductions in 2007
Linux-based devices
Mobile computers
Netbooks
Subnotebooks
Quake III Arena

Quake III Arena is a 1999 multiplayer-focused first-person shooter developed by id Software. The third installment of the Quake series, Arena differs from previous games by excluding a story-based single-player mode and focusing primarily on multiplayer gameplay. The single-player mode is played against computer-controlled bots. It features music composed by Sonic Mayhem and Front Line Assembly founder Bill Leeb.
Notable features of Quake III Arena include the minimalist design, lacking rarely used items and features; the extensive customizability of player settings such as field of view, texture detail and enemy model; and advanced movement features such as strafe-jumping and rocket-jumping.
The game was praised by reviewers who, for the most part, described the gameplay as fun and engaging. Many liked the crisp graphics and focus on multiplayer. Quake III Arena has also been used extensively in professional electronic sports tournaments such as QuakeCon, Cyberathlete Professional League, DreamHack, and the Electronic Sports World Cup.
Gameplay
Unlike its predecessors, Quake III Arena does not have a plot-based single-player campaign. Instead, it simulates the multiplayer experience with computer-controlled players known as bots. The game's story is brief: "the greatest warriors of all time fight for the amusement of a race called the Vadrigar in the Arena Eternal." The introduction video shows the abduction of such a warrior, Sarge, while making a last stand. Continuity with prior games in the Quake series and even Doom is maintained by the inclusion of player models and biographical information. A familiar mixture of gothic and technological map architecture as well as specific equipment is included, such as the Quad Damage power-up, the rocket launcher, and the BFG.
In Quake III Arena, the player progresses through tiers of maps, combating different bot characters that increase in difficulty, from Crash (at Tier 0) to Xaero (at Tier 7). As the game progresses, the fights take place in more complex arenas and against tougher opponents. While deathmatch maps are designed for up to 16 players, tournament maps are designed for duels between 2 players and in the single-player game could be considered 'boss battles'.
The weapons are balanced by role, with each weapon having advantages in certain situations, such as the railgun at long range and the lightning gun at close quarters. The BFG super-weapon is an exception to this; compared to other similarly named weapons in the Doom/Quake series, Quake III Arena's incarnation of this weapon is basically a fast-firing rocket launcher, and it is found in hard-to-reach locations. Weapons appear as level items, spawning at regular intervals in set locations on the map. If a player dies, all of their weapons are lost and they receive the spawn weapons for the current map, usually the gauntlet and machine gun. Players also drop the weapon they were using when killed, which other players can then pick up.
Quake III Arena comes with several gameplay modes: Free for All (FFA), a classic deathmatch in which each player competes against the rest for the highest score; Team Deathmatch (TDM), in which usually two teams of four compete for the highest team frag (kill) total; Tournament (1v1), a deathmatch between two players, usually ending after a set time; and Capture the Flag, played on symmetrical maps where teams have to recover the enemy flag from the opponents' base while retaining their own.
Quake III Arena was specifically designed for multiplayer. The game allows players whose computers are connected by a network or to the internet to play against each other in real time, and incorporates a handicap system. It employs a client–server model, requiring all players' clients to connect to a server. Quake III Arena's focus on multiplayer gameplay spawned a lively community, similar to QuakeWorld, that is still active as of 2021.
Characters

Quake III Arena features several characters from previous entries in the Quake series, including "Bitterman" from Quake II and the "Ranger" character from Quake, as well as Doomguy from id Software's sister franchise Doom.
Development
During early March 1999, ATI leaked the internal hardware vendor (IHV) copy of the game, which had been unveiled to the public by Steve Jobs (then CEO of Apple Inc.) at the Macworld Conference & Expo at Moscone Center in January and at Makuhari Messe in February. This was a functional version of the engine with a textured level and working guns. The IHV contained most of the weapons (excepting the Gauntlet) that would make it into the final game, although most were not fully modeled; a chainsaw and grappling hook were also in the IHV but did not make it into the final release. Many of the sounds that would make it into the final release were also included.
After the IHV leak, id Software released a beta of the game called Quake III Arena Test on April 24, 1999, initially only for Mac OS before expanding to Windows at a later date. The Q3Test started with version 1.05 and included three levels that would be included in the final release: dm7, dm17, and q3tourney2. id Software continued to update Q3Test up until version 1.09.
id co-founder and former technical director John Carmack has stated that Quake III Arena is his favorite game he has worked on.

Quake III Arena was shipped to retailers on December 2, 1999; the official street date for the game was December 5, although id Software chief executive officer Todd Hollenshead expected the game to be available as early as December 3 from retailers like Babbage's and EB Games. The game supported Aureal Semiconductor's A3D 2.0 HRTF technology out of the box.
Game engine
The id Tech 3 engine is the name given to the engine that was developed for Quake III Arena. Unlike most other games released at the time, Quake III Arena requires an OpenGL-compliant graphics accelerator to run; the game does not include a software or Direct3D renderer.
The graphic technology of the game is based tightly around a "shader" system where the appearance of many surfaces can be defined in text files referred to as "shader scripts". Quake 3 also introduced spline-based curved surfaces in addition to planar volumes, which are responsible for many of the surfaces present within the game. Quake 3 also provided support for models animated using vertex animation with attachment tags (known as the .md3 format), allowing models to maintain separate torso and leg animations and hold weapons. Quake 3 is one of the first games where the third-person model is able to look up and down and around as the head, torso and legs are separate. Other visual features include volumetric fog, mirrors, portals, decals, and wave-form vertex distortion.
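A minimal example of such a shader script, in the game's own two-stage syntax, might look like the following (the texture path and the specific stages here are a hypothetical illustration, not a shader shipped with the game):

```
textures/example/glowing_wall
{
    // first stage: the base surface texture
    {
        map textures/example/glowing_wall.tga
        rgbGen identity
    }
    // second stage: modulate by the precomputed lightmap
    {
        map $lightmap
        blendFunc GL_DST_COLOR GL_ZERO
    }
}
```

Each inner block is one rendering pass over the surface, which is what lets artists layer effects such as glows and animated textures in plain text without touching engine code.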
For networking, id Tech 3 uses a "snapshot" system to relay information about game "frames" to the client over UDP. The server attempts to omit as much information as possible about each frame, relaying only differences from the last frame the client confirmed as received (delta encoding). id Tech 3 uses a virtual machine to control object behavior on the server, effects and prediction on the client, and the user interface. This presents many advantages, as mod authors do not need to worry about crashing the entire game with bad code, clients could show more advanced effects and game menus than was possible in Quake II, and the user interface for mods was entirely customizable. Unless operations which require a specific endianness are used, a QVM file will run the same on any platform supported by Quake III Arena. The engine also contains bytecode compilers for the x86 and PowerPC architectures, executing QVM instructions via an interpreter.

Quake III Arena features an advanced AI with five difficulty levels which can accommodate both a beginner and an advanced player, though they usually do not pose a challenge to high-tier or competitive players. Each bot has its own, often humorous, 'personality', expressed as scripted lines that are triggered to simulate real player chat. If the player types certain phrases, the bots may respond: for example, typing "You bore me" might cause a bot to reply "You should have been here 3 hours ago!". Each bot has a number of alternative lines to reduce the repetition of bot chatter. The Gladiator bots from Quake II were ported to Quake III Arena and incorporated into the game by their creator, Jan Paul van Waveren, aka Mr. Elusive. Bot chat lines were written by R. A. Salvatore, Seven Swords, and Steve Winter. Xaero, the hardest opponent in the game, was based on the Gladiator bot Zero. The bot Hunter appears on magazine covers in the later id game Doom 3.
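The delta-encoding idea behind the snapshot system described earlier can be sketched in C as follows. This is a simplified illustration of the concept only, not the engine's actual wire format:

```c
#include <stddef.h>

/* Simplified sketch of snapshot delta encoding: emit (offset, value)
   pairs only for the bytes that differ from the last frame the client
   acknowledged. Offsets are stored as single bytes, so n must be at
   most 256 in this toy version. */
size_t delta_encode(const unsigned char *prev, const unsigned char *cur,
                    size_t n, unsigned char *out)
{
    size_t written = 0;
    for (size_t i = 0; i < n; i++) {
        if (cur[i] != prev[i]) {
            out[written++] = (unsigned char)i; /* changed offset */
            out[written++] = cur[i];           /* new value */
        }
    }
    return written; /* 0 means the frame is unchanged */
}
```

An unchanged frame thus costs almost nothing on the wire, which is why the server keys each delta to the last frame the client confirmed as received.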
On August 19, 2005, id Software released the complete source code for Quake III Arena under the GNU General Public License v2.0 or later, as they have for most of their prior engines. As before, the engine, but not the content such as textures and models, was released, so that anyone who wishes to build the game from source will still need an original copy of the game to play it as intended.
Mods
Like its predecessors, Quake and Quake II, Quake III Arena can be heavily modified, allowing the engine to be used for many different games. Mods range from small gameplay adjustments like Rocket Arena 3 and Orange Smoothie Productions to total conversions such as Smokin' Guns, DeFRaG, and Loki's Revenge. The source code's release has allowed total conversion mods such as Tremulous, World of Padman, OpenArena, and Urban Terror to evolve into free standalone games. Other mods like Weapons Factory Arena have moved to more modern commercial engines. Challenge ProMode Arena became the primary competitive mod for Quake III Arena since the Cyberathlete Professional League announced CPMA as its basis for competition. CPMA includes alternative gameplays, including air-control, rebalanced weapons, instant weapon switching, and additional jumping techniques. Another mod that underwent several open beta versions and was very popular in 1999-2001 was Quake 3 Fortress (Q3F). The initial version of this game was an indirect port of the Quakeworld Team Fortress mod with many clans and leagues competing in both games simultaneously. Q3F was eventually ported to another Quake 3 mod Enemy Territory Fortress which had limited success. The developers of Q3F eventually abandoned the mod but used it to create the standalone 2003 game Wolfenstein: Enemy Territory, which uses the Quake 3 engine and is still popular with approximately 9,400 active players in 2018.
Fast inverse square root
Fast inverse square root, sometimes referred to as Fast InvSqrt() or by the hexadecimal constant 0x5F3759DF, is an algorithm that estimates 1/√x, the reciprocal (or multiplicative inverse) of the square root of a 32-bit floating-point number x in IEEE 754 floating-point format. The algorithm is best known for its implementation in the source code of Quake III Arena. At the time, it was generally computationally expensive to compute the reciprocal of a floating-point number, especially on a large scale. However, the fast inverse square root bypassed this step.
Around 2002, initial speculation pointed to John Carmack as the probable author of the code, but he demurred and suggested it was written by Terje Mathisen, an accomplished assembly programmer who had previously helped id Software with Quake optimization. Mathisen had written an implementation of a similar bit of code in the late 1990s, but the original authors proved to be much further back in the history of 3D computer graphics with Gary Tarolli's implementation for the SGI Indigo as a possible earliest known use.
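The algorithm is short enough to quote. The following is a close adaptation of Q_rsqrt from the released Quake III Arena source (q_math.c), with the original `long` replaced by a fixed-width int32_t and the pointer type-punning replaced by memcpy for portability on modern compilers:

```c
#include <stdint.h>
#include <string.h>

/* Adapted from Q_rsqrt in the released Quake III Arena source. */
float Q_rsqrt(float number)
{
    const float threehalfs = 1.5F;
    float x2 = number * 0.5F;
    float y  = number;
    int32_t i;

    memcpy(&i, &y, sizeof i);            /* reinterpret float bits as integer */
    i = 0x5f3759df - (i >> 1);           /* magic constant gives initial guess */
    memcpy(&y, &i, sizeof y);
    y = y * (threehalfs - (x2 * y * y)); /* one Newton–Raphson iteration */
    return y;
}
```

A single Newton–Raphson iteration brings the estimate to within roughly 0.2% relative error; the original source included a second, commented-out iteration for higher accuracy.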
Expansion
An expansion pack titled Quake III: Team Arena was released on December 18, 2000, in North America, January 15, 2001, in Japan, and January 26, 2001, in Europe. It was developed by id Software and published by Activision. The expansion focused on team-based gameplay through new game modes, as well as the addition of three new weapons (the Chaingun, Nailgun, and Prox Launcher), and new items and player models. Quake III: Team Arena was criticized, as its additions were long overdue and had already been implemented by fan modifications. Quake III: Gold was later released on September 26, 2001, in North America, March 29, 2002, in Japan, and August 9, 2002, in Europe, including the original Quake III Arena and the Quake III: Team Arena expansion pack bundled together. Canadian electro-industrial band Front Line Assembly made the soundtrack for the expansion, the counterpart to Sonic Mayhem's Quake III Arena: Noize.
Ports
Official
As a result of the disappointing sales of Blue Stinger, Activision was discouraged from publishing further titles for the Dreamcast and relinquished the distribution of the Dreamcast version of Quake III Arena (ported by Raster Productions) to Sega. First announced on January 29, 2000, and released on October 23, 2000, the Dreamcast version of Quake III featured four-player cross-platform play between Dreamcast and PC players. It is often considered one of the best PC-to-console ports of its time due to its smooth frame rate and online play. There are still communities that play this version online on the remaining dedicated servers running patch version 1.16n and the required map pack. The Dreamcast version of Quake III also included VMU mini-games (similar to Cardcaptors: Tomoyo no Video Daisakusen, the Japanese version of Soulcalibur, and Tech Romancer).

Quake III Revolution (ported by Bullfrog Productions, published by Electronic Arts in North America and Electronic Arts Square in Japan) was released for the PlayStation 2 in March 2001, featuring several elements adopted from Team Arena, along with a more mission-based single-player mode. It features split-screen multiplayer for up to four players with the PS2 Multitap (SCPH-10090/SCPH-70120), as well as id Software's new animated logo, "The Laboratory", produced by Blur Studio, which later worked on films such as The Girl with the Dragon Tattoo and Avengers: Age of Ultron. As an early PS2 title, it lacked online play, since Sony did not launch its network functionality in North America until August 2002. GameRankings rated the release at 83%.
Quake III Revolution was widely criticized for its long loading times (which typically averaged over a minute, compared to the Dreamcast and PC versions), poor game balance, and lack of out-of-the-box USB mouse and keyboard support (which the PlayStation 2 version of Unreal Tournament did have).

Quake III Arena Arcade for the Xbox 360 was officially announced by id at QuakeCon 2007. The title, jointly developed by id and Pi Studios, was released on Xbox Live Arcade on December 15, 2010. The retail price of the game was set at 1200 Microsoft Points, or $15 USD. Quake Arena DS for the Nintendo DS was announced at QuakeCon on August 4, 2007. John Carmack said that touch screen controls would not be implemented as much as in Metroid Prime Hunters, for example, also stating that he would like all shooting in the game to be controlled with the D-pad instead of the touch screen. This version was silently cancelled. Quake Zero was announced at QuakeCon on August 3, 2007, and was an updated version of Quake 3 Arena, distributed by free download, run in a browser window, and supported by built-in advertising content. Quake Zero was launched as Quake Live, released in 2010.
On November 15, 2021, Microsoft made Quake III Arena Arcade backward compatible with the x86-64-based Xbox One and Xbox Series X/S consoles, as part of a "final addition" of 76 titles published for the 20th anniversary of the launch of the original Xbox console.
Source ports

Quake III Arena has been unofficially ported to several consoles, including the PlayStation Portable handheld and the Xbox console. These versions require a modified console or handheld, along with the game's assets to go with the source port.
Carmack has said that the Quake Trilogy (including Arena) would be ported to the iPhone/iPod Touch/iPad. An unofficial version for iOS was released through Cydia for jailbroken iOS devices in April 2008; it is a demo version similar to the original, except that it integrates the iPhone and iPod Touch's accelerometer and touch controls to make gameplay possible. A high-definition version for iPad was released in November 2010, featuring re-created controls, sharper graphics, better gameplay, and better framerate; this improved version was also integrated into the iPhone and iPod Touch version of the port.
A Moorestown prototype version was demonstrated on a reference design, achieving performance of up to 90 frames per second. An unofficial port of Quake III for Symbian mobile devices was made; it requires PAK files from the original game to run. An unofficial port of the game to Android was created based on the released source code ("Quake 3 ported to Android, shows off Droid's graphical prowess", Engadget, February 25, 2010). This means the game can be run on several Android-powered devices, most notably the Motorola Milestone, Motorola Droid, and the Nexus One, as well as other high-specification handsets.
In August 2011, the ARM-based Raspberry Pi credit card-sized computer was shown running a specially compiled ARM version of Quake III on Debian.
Reception
Sales

Quake III's sales surpassed 50,000 copies during its first three days of release, by which time 1 million copies had been printed. It debuted at #5 on PC Data's weekly computer game sales chart for the December 5–11 period. The game rose to fourth place in the weekly top 10 the following week. Domestically, it sold 222,840 copies and earned revenues of $10.1 million by early 2000.
In North America, Quake III sold 168,309 copies and earned $7.65 million from January through October 2000, according to PC Data. Its overall sales in the region, including its launch in 1999, totaled 319,970 units by November 2000. Its sales for 2000 alone ultimately reached 190,950 units and $8.4 million by the end of the year. The game later received a "Silver" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), indicating sales of at least 100,000 copies in the United Kingdom.
Critical reception
Reviews for the game were very positive, with many describing the game as fast and addictive. Curved surfaces were a welcome addition to the series. Most reviewers felt the game was best when played with others online. A Diehard GameFan review by Robert Howarth described the game as the best "pure deathmatch" experience around, but criticised its performance, which suffered on low-end systems and required either a RIVA TNT2 or GeForce 256 GPU for an acceptable frame rate. GameSpot reviewer Jeff Gerstmann described the game as outstanding. He noted the fun level designs, great-looking textures, impressive special effects and weapons sounds. Gerstmann, however, criticised the narrator's voice and thought that some levels could become too crowded when playing multiplayer. An IGN review felt the game lacked originality but enjoyed the detailed wall textures and outer space jump levels. The high number of character skins and the artificial intelligence of opponent bots were praised, but the weapons were said to be "bland and predictable". A Eurogamer review described the game as "polished" and "stunning" and thought that it "was extremely well balanced and plays very well". The reviewer was especially pleased with the customisable 3D engine and looked forward to new maps and mods.
Blake Fischer reviewed the PC version of the game for Next Generation, rating it five stars out of five, describing it as "the best deathmatch yet. Period. End of story. If you want single-player or a storyline, buy Half-Life. If you want great DM and near-infinite expandability, Quake III is the best in the business".
Frank O'Connor reviewed the Dreamcast version of the game for Next Generation, rating it four stars out of five, and stated that it was "a brilliant, if flawed, conversion of arguably the best online game ever made – it's sure a hell of a lot more interesting use of the Dreamcast modem than Chu Chu Rocket". The Dreamcast version won GameSpot's annual "Best Multiplayer Game" award among console games, and was a runner-up in the "Best Shooting Game" category, which went to Perfect Dark.
Garrett Kenyon reviewed the PlayStation 2 version of the game for Next Generation, rating it four stars out of five, and stated that "all in all, this is a fast and beautiful game – easily the best shooter available for PS2". The PlayStation 2 version was a nominee for The Electric Playground's 2001 Blister Awards for "Best Console Shooter Game", but lost to Halo: Combat Evolved for Xbox.

Quake III Arena won PC Gamer US's 1999 "Special Achievement in Graphics" award; the magazine wrote that it "set a new high-water mark in 3D graphics this year". The game was a finalist for the Academy of Interactive Arts & Sciences' 1999 "Action Game of the Year" award, which ultimately went to Half-Life: Opposing Force.
In January 2016, Red Bull labeled Q3DM17 (The Longest Yard) one of the 10 greatest FPS multiplayer levels of all time.
Competitive play

Quake III Arena's multiplayer-focused development led to it developing a large community of competitive players, and like its predecessors it was used extensively in professional electronic sports tournaments. In competitive Quake III Arena there are two distinct gameplays, often referred to as 'rulesets': the out-of-the-box Quake III Arena game, also known as vanilla Quake 3 (VQ3), and the CPM ruleset of the Challenge Pro Mode Arena mod. On July 26, 2006, Challenge Pro Mode Arena with VQ3 gameplay was chosen by the Cyberathlete Professional League as the mod of choice for their tournament, making it the standard competitive mod for Quake III Arena. Previously, Orange Smoothie Productions was the most widely used tournament mod.
The following competitions have held Quake III events:
Cyberathlete Amateur League
Cyberathlete Professional League
Electronic Sports World Cup
QuakeCon
World Cyber Games
Dreamhack
These competitions have since moved on to more recent games or have transitioned to the game's variant successor, Quake Live.
See also
1999 in video games
OpenArena – a video game clone of Quake III Arena
Unreal Tournament
Notes
References
External links
(archived copy)
1999 video games
Arena shooters
Activision games
Cancelled Nintendo DS games
Commercial video games with freely available source code
AROS software
Dreamcast games
Square (video game company) games
Esports games
First-person shooters
Id Software games
Electronic Arts games
Bethesda Softworks games
Linux games
Classic Mac OS games
MorphOS games
Quake (series)
PlayStation 2 games
Split-screen multiplayer games
Video game sequels
Video games about death games
Video games developed in the United States
Video games scored by Aubrey Hodges
Video games scored by Sascha Dikiciyan
Video games with cross-platform play
Video games with expansion packs
Windows games
Xbox 360 games
Xbox 360 Live Arcade games
Loki Entertainment games
Id Tech games
Conservation and restoration of new media art

The conservation and restoration of new media art is the study and practice of techniques for sustaining new media art created from materials such as digital, biological, performative, and other variable media.
New media art runs a unique risk when it comes to longevity that has resulted in the development of new and different preservation and restoration strategies and tools.
To preserve and restore these pieces of new media art, there are a variety of strategies including storage, migration, emulation, and reinterpretation. There are even more tools used to implement these strategies including Archivematica, BitCurator, Conifer, Media Info, PRONOM, QC Tools, and the Variable Media Questionnaire. The common metadata schema used for new media art is Media Art Notation System (MANS). Despite the name "new media art," there is a diverse history of preservation and restoration efforts including both individual efforts and consortium efforts.
Preservation strategies
Storage
The acquisition and storage of the physical media and equipment, such as DVD players or computers, used in multimedia or digital artworks has proven a short-term tactic at best, as hardware can quickly become obsolete or can go 'stale' in storage. Storage is also notoriously bad at capturing the contextual and live aspects of works such as Internet art, performance art, and live electronic music.
Storage involves keeping documents in their original formats whenever possible to maintain authenticity; keeping metadata updated to aid in finding and understanding the preservation strategies taken so far; keeping documents on reliable, non-proprietary software that users would be the most likely to already have or easily get access to; storing multiple copies of bitstreams; replacing the carriers when new, more widely used ones become available.
Migration
To migrate a work of art is to upgrade its format from an aged medium to a more current one, such as from VHS to DVD, accepting that some changes in quality may occur while still maintaining the integrity of the original. This strategy assumes that preserving the content or information of an artwork, despite its change in media, trumps concerns over fidelity to the original look and feel.
Migration must take place regularly or the original piece may become obsolete with no way to update it to a newer format for accessibility. Migration is especially important when the file is saved on proprietary software like Microsoft Word, Prezi, Archives Space, etc. In the process of migration, a document can be stored in its original form and also migrated to a non-proprietary form in order to maintain authenticity while also providing long-term access.
Emulation
The process of simulating an older operating system (or by extension, other supporting infrastructure) on a newer software or hardware platform is called emulation. The idea behind emulation is to maintain the original format and feel of the piece of new media art. Emulation software allows users and researchers to view complex pieces of art like video games, virtual reality, etc. in a way that it was intended to be viewed. Emulation is especially important for art created on proprietary software or software that many users and researchers might not have access to. The emulation software allows them to view the document even without the original software.
Seeing Double: an emulation testbed
In 2004, the Guggenheim Museum, in conjunction with the Daniel Langlois Foundation, held an exhibition entitled Seeing Double: Emulation in Theory and Practice as a trial of emulation. In the exhibition, artworks operating on their original physical media were displayed alongside versions emulated on newer physical media. The exhibition was organized with the participation of computer researcher and emulation specialist, Jeff Rothenberg. In 1998, Rothenberg had published "Avoiding Technological Quicksand: Finding a Viable Technical Foundation for Digital Preservation".
Reinterpretation
Reinterpretation is the last-resort strategy and is only considered when all other approaches are unavailable. Reinterpretation involves changing the essence of the art, with or without the artist's approval, for preservation purposes. This could involve re-coding for access, recasting a piece in a more modern, durable medium, and more. This technique does not maintain authenticity the way the other strategies do, but it can be the most effective. Therefore, it is considered best practice to only use reinterpretation when all other strategies are deemed inappropriate.
Preservation tools
Because the conservation and restoration of new media art is a craft, not a science, not every preservation strategy will work for every piece of new media art. Repositories have to make decisions based on the complexities of each individual piece. They will each have their own unique needs, interests, and priorities. Repositories and individual conservators keep up with new tools and technologies available to aid in preservation.
Archivematica
Archivematica is an "integrated suite of open source software tools." It allows repositories to store their documents there for the long-term while also keeping up to date with current industry standards such as Dublin Core, AIPs, etc. Repositories started using Archivematica to address the gap between storage and actual preservation. It helps them along with every step in archival processing.
Bit Curator
Bit Curator can be used as a way to examine a collection without going through each individual piece of art. Conservators can upload bulk files and Bit Curator will examine the trends and patterns. From there, repositories can decide what to focus on and which pieces need attention.
Conifer
Conifer creates an archive of any page visited while browsing. It is useful for conservators because they do not have to collect the webpage materials themselves; everything viewed is archived. Unlike other web archives such as the Wayback Machine, Conifer captures images, video, and other material on pages visible only to the logged-in user, including password-protected material. From there, conservators can go through the collection themselves to sort, arrange, describe, add metadata, and so on.
Media Info
Media Info is primarily used for audio and visual files. It accepts only certain formats, so more unconventional formats must be converted. The software verifies technical metadata and makes sure everything is working properly and up to date.
PRONOM
PRONOM is a resource for information on "file formats, software products, and other technical components." It helps to ensure the conservation of, and long-term access to, a variety of documents. The resource is aimed at anyone interested in learning more; it is not exclusive to archivists and conservators.
QC Tools
QC Tools filters video files to help repositories analyze the contents of the video.
Variable Media Questionnaire
The Variable Media Questionnaire is a free web service that allows new media curators and repositories to share the most effective strategies of preservation for different forms of new media art. It focuses particularly on creating guidelines for preserving the art once the original medium or software is no longer available. It draws on the four main preservation strategies while recommending the specific media and software that work for different types of art.
Involving artists in preservation
The future of new media conservation and restoration involves more collaboration between artists and curators. When preservation efforts are taken earlier in the creation of the work, future preservation becomes easier and more effective. The artist doesn't necessarily know the steps that must be taken to accurately preserve new media art, and the curator doesn't necessarily know the artistic intentions of the creator. When these two work together throughout the creation and transfer to a repository, the conservation of the piece will last longer and the intentions of the artist will be honored. Without these efforts, many new media art pieces will not be properly preserved and will never be moved to a repository. The earlier the intervention, the easier it is for the curator to ensure long-term preservation. Steps can be taken to make sure the new media art is around for future use. Those steps can involve the preservation strategies and tools described above, but the piece can only be preserved if it exists in a state where curators can access and modify it. For example, if it exists on software that is already obsolete, it cannot be migrated.
Metadata standards
Media Art Notation System (MANS)
The Media Art Notation System is a formal notation system introduced by Richard Rinehart, Digital Media Director and Adjunct Curator, Berkeley Art Museum/Pacific Film Archive, in 2007. It was developed in response to a need for a "new approach to conceptualizing digital and media art forms." Rinehart compares MANS to a musical score. An ensemble can change out the instruments, but it will still be the same piece of music as long as they follow the score. In the same way, digital media can be separated from its software and still produce the same computational result. When digital media is presented using a different hardware or software, it may appear slightly different, but it will still be the same piece of media art.
MANS uses XML to present the metadata specifically because it allows the coder to define the framework of the digital media while allowing for variations in how it presents itself. This is particularly useful for conservation because it allows future users to examine the document in a system that works for multiple different pieces of art. If the software or materials for one piece of art becomes obsolete, future researchers will be able to examine the new media art via XML and map it onto a newer schema.
MANS has three levels of implementation. The first level is Score which is mostly metadata with minimal XML. The second level is the machine-processable Score. It includes sub-component description, more XML, and even images and other media. The third level is the machine-processable Score that serves as a working model of the original. This level contains technical metadata, bitstreams, very granular description, and structural markup.
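The score idea can be illustrated with a small sketch using Python's standard `xml.etree.ElementTree` module. The element names below (`work`, `score`, `invariant`, `variable`) and the sample work are purely hypothetical, invented for illustration — they are not the actual MANS schema, which is defined in Rinehart's paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names -- illustrative only, not the real MANS schema.
work = ET.Element("work", title="Untitled Net Piece", artist="Anonymous")
score = ET.SubElement(work, "score", level="1")
# The "score" separates what must stay fixed from what may vary,
# much like a musical score fixes the notes but not the instruments.
ET.SubElement(score, "invariant").text = "renders a grid of user-submitted images"
ET.SubElement(score, "variable", component="display").text = "any web browser"
ET.SubElement(score, "variable", component="storage").text = "any SQL database"

xml_text = ET.tostring(work, encoding="unicode")
print(xml_text)
```

A future conservator could read such a record and re-realize the work on new hardware and software, so long as the invariant behavior is reproduced and the variable components are swapped for available equivalents.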
Exhibiting New Media Art
New media art is unique from other types of art in that the tools and strategies used to create the art are often the same tools and strategies used to display or exhibit the art. Because of this, exhibiting new media art becomes a part of conserving new media art. Often, a curator or specialist will be on site at the exhibit to ensure the art is being displayed and used correctly by audiences. It would be easy to assume a computer is for administrative use when really the coding on the computer is part of the exhibit. This ever-changing medium is difficult to conserve, restore, and exhibit.
Because of the innovative nature of new media art, it is very common for exhibits to include audience interaction. Artists will create work that is only fully complete during audience interaction such as movement, tactile pieces, or even changes made by audience members. This creates a unique challenge where only the initial artist-created portion of the piece can be conserved, and the audience-interaction portion of the piece will change over time and depending on the actions of the audience members.
It is considered best practice when conserving or restoring new media art to consider the relationship with the audience. Often the aspect that sets new media art apart from other types of art is the "liveliness" that is represented by the relationship between the piece and the audience. In order to exhibit this type of art, curators and repositories must first accept this relationship as a type of art, and thus, worth exhibiting. Then, they will attempt to conserve the relationship built between the art and the audience.
Relationship to other preservation efforts
The catchall term sometimes applied to such genres, variable media, suggests that it is possible to recapture the experience of these works independently of the specific physical material and equipment used to display them in a given exhibition or performance. As the nature of multi-media artworks calls for the development of new standards, techniques, and metadata within preservation strategies, the idea that certain artworks incorporating an array of media elements could be variable opens up the possibility for experimental standards of preservation and reinterpretation.
Nevertheless, many new media preservationists work to integrate new preservation strategies with existing documentation techniques and metadata standards. This effort is made in order to remain compatible with previous frameworks and models on how to archive, store and maintain variable media objects in a standardized repository utilizing a systematized vocabulary, such as the Open Archival Information System model.
While some of this research parallels and exploits progress made in the practice of Digital preservation and Web archiving, the preservation of new media art offers special challenges and opportunities. Whereas scientific data and legal records may be easily migrated from one platform to another without losing their essential function, artworks are often sensitive to the look and feel of the media in which they are embedded. On the other hand, artists who are invited to help imagine a long-term plan for their work often respond with creative solutions.
History of new media art preservation
Individual efforts
Numerous contemporary art conservators have contributed individual efforts toward new media art preservation:
Carol Stringari of the Solomon R. Guggenheim Museum in New York
As a deputy director and chief conservator, Stringari led laser research of a monochromatic painting by Ad Reinhardt and project on conservation of the works of László Moholy-Nagy. She later won the CAA/Heritage Preservation Award for Distinction for Scholarship and Conservation for her work on Ad Reinhardt's technique.
Pip Laurenson of the Tate Gallery in London
Head of time-based media conservation at the Tate, Laurenson is currently pursuing her PhD in the care and management of time-based media works of art at University College London.
Jill Sterret of the San Francisco Museum of Modern Art.
Director of Collections & Conservation at SFMOMA, Sterret is an avid collector and preserver of artworks made by contemporary artists. She is committed to the vital collaborations between artists, curators, technical experts, registrars, and conservators that support contemporary art conservation practice.
Consortium efforts
The variable media concept was developed in 1998, first as a creative strategy Ippolito brought to the adversarial collaborations produced with artists Janet Cohen and Keith Frank, and later as a preservation strategy called the Variable Media Initiative that he applied to endangered artworks in the Solomon R. Guggenheim Museum's collection. In 2002 the Guggenheim partnered with the Daniel Langlois Foundation for Art, Science and Technology in Montreal to form the Variable Media Network, a concerted effort to develop a museum-standard, best practice for the collection and preservation of new media art. Apart from Stringari and Ippolito, other key members of the Variable Media Network included Alain Depocas, Director of the Centre for Research and Documentation, Daniel Langlois Foundation; and Caitlin Jones, former Daniel Langlois Variable Media Preservation Fellow at the Guggenheim Museum.
Around this time similar investigations into the preservation of digital/media art were being led on the West Coast by Richard Rinehart, who published an article on the subject, "The Straw that Broke the Museum's Back? Collecting and Preserving Digital/Media Art for the Next Century", in 2000. Rinehart had also established Conceptual & Intermedia Arts Online (CIAO) with Franklin Furnace, the New York-based performance art-grants giving organization and archive/advocate of performance, 'ephemeral' or non-traditional art under the directorship of Martha Wilson.
Members of the Variable Media Network and CIAO subsequently joined forces with other organizations, including Rhizome.org, an affiliate of New York's New Museum of Contemporary Art, for collective preservation endeavors such as Archiving the Avant Garde. This broader coalition, operating under the rubric Forging the Future, is managed by the Still Water lab at the University of Maine and offers free, open-source tools for new media preservation, including the 3rd-generation Variable Media Questionnaire.
In 2002, Timothy Murray founded the Rose Goldsen Archive of New Media Art, named after the late Professor Rose Goldsen of Cornell University, a pioneering critic of the commercialization of mass media. The Archive hosts international art work produced on CD-Rom, DVD-Rom, video, digital interfaces, and the internet. Its collection of supporting materials includes unpublished manuscripts and designs, catalogues, monographs, and resource guides to new media art. The curatorial vision emphasizes digital interfaces and artistic experimentation by international, independent artists. Designed as an experimental center of research and creativity, the Goldsen Archive includes materials by individual artists and collaborates on conceptual experimentation and archival strategies with international curatorial and fellowship projects.
Other important initiatives include DOCAM, an international research alliance on the documentation and the conservation of the media arts heritage organized by the Daniel Langlois Foundation, and the International Network for the Conservation of Contemporary Art (INCCA), organized by the Netherlands Institute for Cultural Heritage (ICN).
See also
Art conservation
Digital preservation
Digital art
Internet art
National Digital Library Program (NDLP)
National Digital Information Infrastructure and Preservation Program (NDIIPP)
New media art
Virtual art
References
Alain Depocas, Jon Ippolito, and Caitlin Jones, eds., Permanence Through Change: The Variable Media Approach, co-published by the Guggenheim Museum and The Daniel Langlois Foundation for Art, Science & Technology, 2003.
Jon Ippolito, "Death by Wall Label", 2007.
Jeff Rothenberg, "Avoiding Technological Quicksand: Finding a Viable Technical Foundation for Digital Preservation", 1998.
Variable Media Network
Richard Rinehart, The Media Art Notation System: Documenting and Preserving Digital / Media Art, 2007.
Further reading
Chin, Daryl. "Transmissible Evidence". PAJ: A Journal of Performance & Art Jan 2002, Vol. 24 Issue 70, p44-51, 8p, 4bw.
Steve Dietz. Collecting New Media Art: Just Like Anything Else, Only Different
Oliver Grau. "For an Expanded Concept of Documentation: The Database of Virtual Art", ICHIM, École du Louvre, Paris 2003, Proceedings, pp. 2–15. Expanded Concept of Documentation
Jones, Caitlin. "Does Hardware Dictate Meaning? Three Variable Media Conservation Case Studies" Horizon article
Jones, Caitlin. "Seeing Double: Emulation in Theory and Practice, The Erl King Case Study" Case Study
Jones, Caitlin. "Understanding Medium: preserving content and context in variable media art" Article from Keep Moving Images
Christiane Paul. Challenges for a Ubiquitous Museum: Presenting and Preserving New Media
Quaranta, Domenico. Interview with Jon Ippolito published in "Noemalab" Leaping into the abyss and resurfacing with a pearl
External links
erpanet The Preservation of Digital-Born Art
Preserving the Immaterial – A conference on variable media
DOCAM – Documentation and Conservation of the Media Arts Heritage / Documentation et Conservation du Patrimoine des Arts Médiatiques
Media Art and Museums: Guidelines and Case Studies
Variable Media Network – A resource from CHIN (Canadian Heritage Information Network)
Conservation and restoration of cultural heritage
Computer art
Digital art
Digital preservation
Internet culture
Multimedia
New media
Preservation (library and archival science) |
20334400 | https://en.wikipedia.org/wiki/Software%20industry%20in%20China | Software industry in China | The software industry in China is the business of developing and publishing software and related services in China. The size of the industry including software and information services in 2013 was worth 3060 billion RMB (about $493 billion) according to the Ministry of Industry and Information Technology.
Companies
Leaders in the enterprise software market are UFIDA Software, Kingdee, and SAP.
See also
Software companies of China
China Software Industry Association
Institute of Software, Chinese Academy of Sciences
Dalian Software Park
Business process outsourcing in China
References
Industry in China
China |
7160942 | https://en.wikipedia.org/wiki/Filesystem-level%20encryption | Filesystem-level encryption | Filesystem-level encryption, often called file-based encryption, FBE, or file/folder encryption, is a form of disk encryption where individual files or directories are encrypted by the file system itself.
This is in contrast to the full disk encryption where the entire partition or disk, in which the file system resides, is encrypted.
Types of filesystem-level encryption include:
the use of a 'stackable' cryptographic filesystem layered on top of the main file system
a single general-purpose file system with encryption
The advantages of filesystem-level encryption include:
flexible file-based key management, so that each file can be and usually is encrypted with a separate encryption key
individual management of encrypted files e.g. incremental backups of the individual changed files even in encrypted form, rather than backup of the entire encrypted volume
access control can be enforced through the use of public-key cryptography, and
the fact that cryptographic keys are only held in memory while the file that is decrypted by them is held open.
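The per-file key management listed above can be sketched in a few lines: each file is encrypted under its own random key, and that file key is in turn "wrapped" by a master key, so individual files can be backed up or re-keyed independently. This is a toy illustration only — the XOR keystream derived from SHA-256 stands in for a real authenticated cipher and is NOT secure; production schemes use constructions such as AES.

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from SHA-256 -- a stand-in for a real cipher, NOT secure."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

master_key = os.urandom(32)

def encrypt_file(plaintext: bytes) -> tuple[bytes, bytes]:
    # Each file gets its own random key, as in file-based encryption schemes.
    file_key = os.urandom(32)
    wrapped = xor(file_key, master_key)   # per-file key wrapped by the master key
    return wrapped, xor(plaintext, file_key)

def decrypt_file(wrapped: bytes, ciphertext: bytes) -> bytes:
    file_key = xor(wrapped, master_key)   # unwrap, then decrypt the content
    return xor(ciphertext, file_key)

w, c = encrypt_file(b"secret report")
assert decrypt_file(w, c) == b"secret report"
```

Because each file has its own wrapped key, revoking or rotating one file's key does not require re-encrypting the rest of the file system, which is what makes incremental backups of individual encrypted files practical.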
General-purpose file systems with encryption
Unlike cryptographic file systems or full disk encryption, general-purpose file systems that include filesystem-level encryption do not typically encrypt file system metadata, such as the directory structure, file names, sizes or modification timestamps. This can be problematic if the metadata itself needs to be kept confidential. In other words, if files are stored with identifying file names, anyone who has access to the physical disk can know which documents are stored on the disk, although not the contents of the documents.
One exception to this is the encryption support being added to the ZFS filesystem. Filesystem metadata such as filenames, ownership, ACLs, and extended attributes is all stored encrypted on disk. The ZFS metadata relating to the storage pool is stored in plaintext, so it is possible to determine how many filesystems (datasets) are available in the pool, including which ones are encrypted. The content of the stored files and directories remains encrypted.
Another exception is CryFS replacement for EncFS.
Cryptographic file systems
Cryptographic file systems are specialized (not general-purpose) file systems that are specifically designed with encryption and security in mind. They usually encrypt all the data they contain – including metadata. Instead of implementing an on-disk format and their own block allocation, these file systems are often layered on top of existing file systems e.g. residing in a directory on a host file system. Many such file systems also offer advanced features, such as deniable encryption, cryptographically secure read-only file system permissions and different views of the directory structure depending on the key or user.
One use for a cryptographic file system is when part of an existing file system is synchronized with 'cloud storage'. In such cases the cryptographic file system could be 'stacked' on top, to help protect data confidentiality.
See also
Steganographic file system
List of cryptographic file systems
Disk encryption
Disk encryption
Special-purpose file systems
Cryptographic software
Utility software types |
8667723 | https://en.wikipedia.org/wiki/Dcraw | Dcraw | dcraw is an open-source computer program which is able to read numerous raw image format files, typically produced by mid-range and high-end digital cameras. dcraw converts these images into the standard TIFF and PPM image formats. This conversion is sometimes referred to as developing a raw image (by analogy with the process of film development) since it renders raw image sensor data (a "digital negative") into a viewable form.
A number of other image processing programs use dcraw internally to enable them to read raw files.
Development of dcraw began on February 23, 1997. Version 1.0 was released in revision 1.18, on May 5, 2000. Versions up to 3.15 used the name Canon PowerShot Converter, starting with v3.40 the name was Raw Photo Decoder, switching to Raw Photo Decoder "dcraw" in v5.70. Version 8.86 supported 300 cameras.
The development has stalled, with only two releases since May 2015 and the last release dated June 2018, but parts of dcraw are included in LibRaw.
Motivation
While most camera manufacturers supply raw image decoding software for their cameras, this software is almost always proprietary, and often becomes unsupported when a camera model is discontinued. The file formats themselves are often undocumented, and several manufacturers have gone so far as to encrypt all or part of the data in their raw image format, in an attempt to prevent third-party software from accessing it.
Given this ever-expanding plethora of raw image formats, and uncertain and inconsistent support for them by the manufacturers, many photographers worry that their valuable raw images may become unreadable as the applications and operating systems required become obsolete.
In contrast to proprietary decoding software, dcraw strives for simplicity, portability, and consistency, as expressed by its author: "So here is my mission: Write and maintain an ANSI C program that decodes any raw image from any digital camera on any computer running any operating system."
Design
Because many raw image formats are specific to one make or model of camera, dcraw is frequently updated to support new models. For many proprietary raw image formats, dcraw's source code (based largely on reverse-engineering) is the best—or only—publicly available documentation. dcraw currently supports the raw formats of several hundred cameras.
dcraw is built around the Unix philosophy. The program is a command line tool which takes a list of raw image files to process, along with any image adjustment options desired. dcraw also serves as the basis for various high-level raw image-processing applications (such as viewers and converters), both free and open source software as well as proprietary software.
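A front-end following this design typically just assembles a dcraw command line and runs it as a subprocess. The sketch below builds such an invocation in Python; the flags `-w` (use camera white balance) and `-T` (write TIFF instead of PPM) are documented dcraw options as best recalled, and the file name is hypothetical.

```python
import shlex

def dcraw_command(raw_path: str, use_camera_wb: bool = True, tiff: bool = True) -> list[str]:
    """Assemble a dcraw invocation as an argv list (dcraw itself is not run here)."""
    cmd = ["dcraw"]
    if use_camera_wb:
        cmd.append("-w")   # -w: use the white balance recorded by the camera
    if tiff:
        cmd.append("-T")   # -T: write TIFF output instead of PPM
    cmd.append(raw_path)
    return cmd

cmd = dcraw_command("photo.cr2")   # hypothetical raw file name
print(shlex.join(cmd))             # prints: dcraw -w -T photo.cr2
# A front-end would then execute it, e.g. subprocess.run(cmd, check=True),
# and read the resulting TIFF for display.
```

This separation — dcraw as a batch decoder, the front-end as a thin wrapper over its options — is exactly the Unix-philosophy split described above.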
GUI front-ends
Several GUI front-ends for dcraw are available. These applications use dcraw as a back-end to do the actual processing of raw images, but present a graphical interface with which the image processing options can be adjusted.
AZImage – image converter (uses LibRaw rather than dcraw) for Windows
darktable – a standalone raw developer for Windows, Linux, and macOS
dcraw-assist – for Linux
dcRAW-X – for macOS
digiKam – for Linux
DNG Viewer by ideaMK – raw image viewer for Windows
EasyHDR – for Windows
gimp-dcraw – GIMP plug-in for Windows, Linux, and macOS
Helicon Filter – photo editor, can use dcraw for raw processing for Windows
Konvertor – for Windows
nUFRaw – a standalone raw developer, a new version of UFRaw for Linux
Phiewer – for macOS
RAWDrop – for Windows
Rawstudio – a standalone raw developer for Linux
RawTherapee – a standalone raw developer for Windows, Linux, and macOS
SilkRaw – exports embedded thumbnails and launch batch conversion for Amiga OS4
SNS-HDR – for Windows
UFRaw – a standalone raw developer and GIMP plug-in for Windows, Linux, and macOS
References
External links
dcraw compiled for Microsoft Windows by Axel Rietschin
dcraw compiled for Microsoft Windows by Bartłomiej Okonek
Digital photography
Free photo software
Free software programmed in C
Photo software for Linux
Raw image processing software |
39783577 | https://en.wikipedia.org/wiki/Faculty%20of%20Engineering%20and%20Technology%2C%20Jamia%20Millia%20Islamia | Faculty of Engineering and Technology, Jamia Millia Islamia | The Faculty of Engineering and Technology, Jamia Millia Islamia (JMIFET) was established in 1985 in New Delhi, India to provide engineering education. All courses of the faculty are approved by the All India Council for Technical Education. JMI FET is one of the top ranked engineering colleges in the country.
History
The BE course in civil engineering was started in 1979. The Faculty of Engineering and Technology officially came into being in 1985. Undergraduate programmes in civil, electrical and mechanical engineering are as old as the institution itself. Undergraduate programmes in electronics engineering began in 1996 and in computer engineering in 2000. The building was inaugurated in October 1995. The four-storey structure has pointed arches adorning the windows. The entrance itself is pyramid-shaped.
Campus
The faculty is located at Jamia Nagar, Okhla. The university cricket ground (Bhopal Ground) is considered the best by university standards in the country and has played host to many Ranji Trophy cricket matches, including the semi-finals and finals of the recently concluded university cricket championships. The Sports Complex also has tennis, basketball, volleyball, football and rugby courts (it was the official practice venue for rugby in the Commonwealth Games 2010), besides indoor facilities for table tennis and badminton and a synthetic athletics track. The complex also has a generously equipped gymnasium.
Admissions
Admissions to the Bachelor of Technology Programme are based on the performance in the JEE Main examination conducted by the National Testing Agency.
Admissions towards the M.Tech. programme is based on the performance in GATE examination.
Departments and academic programmes
The Faculty Of Engineering and Technology has the following departments
University Polytechnic
Started in 1957 as the Civil and Rural Institute and now named Jamia Polytechnic, it is one of the oldest departments of Jamia Millia Islamia, operating under the Faculty of Engineering and Technology of the university. It conducts three-year Diploma Engineering courses in the following branches:
Diploma in Civil Engineering
Diploma in Computer Engineering
Diploma in Electrical Engineering
Diploma in Electronics Engineering
Diploma in Mechanical Engineering
Department of Applied Sciences & Humanities
Established in 1996 as one of the departments of the Faculty of Engineering and Technology, this department participates in the B.Tech. and M.Tech. programs offered by the Faculty. It offers M.Tech. programmes in diverse fields of engineering and technology, including M.Tech. in Energy Science and M.Tech. in Computational Mathematics. In addition, the department offers a two-year M.Sc. Electronics program with an intake of 30 students per year. The M.Sc. Electronics program imparts instruction in the basics of electronics science and has earned a prestigious position among similar programs available in the country.
Department of Computer Engineering
The Department of Computer Engineering was started in the year 2000. Two undergraduate courses are running in this department, which are B.Tech. in computer engineering and B.E. in computer engineering. The department also runs a Ph.D. program, under which a number of research scholars are working in the fields of networking, data mining, and artificial intelligence, among others.
Department of Civil Engineering
The Department of Civil Engineering is one of the oldest and the largest department in the Faculty of Engineering & Technology. The department offers two undergraduate courses in civil engineering. The department also offers Master's programme with specializations in environmental engineering and earthquake engineering. More than 35 Ph.D. research scholars including many from foreign countries are currently working in the department on emerging research areas.
Department of Electrical Engineering
The Department of Electrical Engineering began in 1985. The department offers a regular four-year B.Tech. program in "Electrical Engineering" and two regular M.Tech. programs. The first M.Tech. programme, in "Electrical Power System Management (EPSM)", was started in 2003–2004. The second M.Tech. programme, in "Control and Instrumentation System (CIS)", was started in 2012. The department also runs a four-year B.E. (evening) program in "Electrical Engineering" for working professionals with a Diploma. The department also operates a Ph.D. programme in five major areas: (1) Power System, (2) Power Electronics and Drives, (3) Computer Technology, (4) Control and Instrumentation, and (5) Electronics and Communication.
Department of Electronics & Communication Engineering
The Department of Electronics and Communication Engineering came into existence at the Faculty of Engineering and Technology in 1996. The department runs two undergraduate courses: B.Tech. in Electronics and Communication Engineering and a B.E. (evening) program in Electronics and Communication Engineering. A proposal to start an M.Tech. course has also been submitted, and the course may begin soon.
Department of Mechanical Engineering
The department offers eight-semester Bachelor of Technology (B.Tech.) course with an annual intake of seventy, four-semester Master of Technology (M.Tech.) course with annual intake of 18 and Doctor of Philosophy (Ph.D.) in Mechanical Engineering. M.Tech. Program is offered in three broad areas of Mechanical Engineering namely Production-Industrial Engineering, Machine Design and Thermal Engineering. The department has a student population of about 560 at UG, 40 at PG and about 44 at doctoral research levels. It also offers a four-year Bachelor of Engineering (B.E) (Regular-Evening) program in the evening with an annual intake of 70 for working diploma holders to up-grade their knowledge and skill.
Rankings
The faculty was ranked 28th among engineering colleges by the National Institutional Ranking Framework (NIRF) in 2020.
In the London-based Times Higher Education subject rankings 2020, the Faculty of Engineering at JMI improved its position in Engineering and Technology to 401–500 worldwide; within India, it ranked 11th among all higher education institutions and 2nd among universities. JMI also made the list of top institutions worldwide for computer science, placed at 301–400, ranking 16th among Indian institutions and 7th among Indian universities.
Student activities
Festivals
Xtacy is the Techno-Cultural Festival organised by CSI, IEEE, SAE, ASCE hosted by FET every year.
IEEE Student Branch of JMI organizes ENCOMIUM, their annual techno-cultural festival each year. So far there have been 10 editions of ENCOMIUM.
Algorhythm is the Techno-Cultural Festival organized by the Computer Society of India, Jamia Millia Islamia Student Branch (CSI-JMI) and the Department of Computer Engineering, Jamia Millia Islamia.
Tangelo Town is the Techno-Cultural cum sports festival hosted by the Faculty of Engineering and Technology.
Student organisations
Student chapters of prominent technical societies like Computer Society of India, IEEE, Indian Society for Technical Education (ISTE), American Society of Mechanical Engineers (ASME), Society of Automotive Engineers (SAE), American Society of Civil Engineers (ASCE), Robot Society of India (TRS) are there in the faculty. Apart from Technical organisations the college also has cultural and other non-technical clubs including the likes of Literary Club, Drama Club, Debating Club, Entrepreneurship Club, Enactus Club, Social Services Club, Music Club etc.
Notable alumni
Sanjeev Aggarwal, Founder of energy company Amplus Energy Solution
Vikas Jain, Co-founder Micromax
Sumit Kumar, Co-founder Micromax
Adil Zaidi, Executive Director Ernst & Young
Amit Mathur, Senior Vice President Axis Bank
Mohit Khattar, CEO Baskin-Robbins India
Anil Gaur, Vice President, Head of Engineering NetApp
Sunil Chopra, Co-founder Calence Software
Arvind Sampath, vice-president, Information Division, Goldman Sachs, Bangalore
Sukesh Jain, Senior Vice President Samsung Electronics
Vaibhav Chadha, Managing Director Cantor Fitzgerald
Faizi Mohsini, Vice President Services Boeing Defense, Space & Security India
References
Engineering colleges in Delhi
Jamia Millia Islamia |
2248013 | https://en.wikipedia.org/wiki/Roger%20Wilco%20%28software%29 | Roger Wilco (software) | Roger Wilco is one of the first voice-over-IP client programs designed primarily for use with online multiplayer video games. Roger Wilco enabled online gamers to talk to one another through a computer headset or other audio input device instead of typing messages to each other. Within a year of the software's introduction, over 2 million online video gamers were using the application.
Roger and Wilco are procedure words which, in radiophone communication, mean "I understood your message and I will comply".
Development and release
Roger Wilco was developed by a US startup company called Resounding Technology. Three of the company's four founders were roommates when they were undergraduate students at Princeton University: Adam Frankl, Tony Lovell, and Henri de Marcellus. David Lewis, who led marketing and business development, was based in Silicon Valley and was responsible for growing the Roger Wilco community and managed partnerships with video game publishers who bundled Roger Wilco with their games. Plantronics, PNY, and other game peripheral manufacturers also bundled Roger Wilco with their products.
The company began publishing pre-release versions of the software in the autumn of 1998; the first general availability release, Roger Wilco Mark I, followed in May 1999. The company distributed both the client and server as freeware. The server software, Roger Wilco Base Station, was developed for Linux, FreeBSD, Windows 9x, and Windows NT. Development of a client for Mac OS never progressed beyond the alpha phase.
David Lewis demonstrated the product's server-less voice capabilities to Mpath Interactive, a startup company in Silicon Valley, which went on to acquire Resounding Technology on the strength of its proprietary peer-to-peer voice technology. Mpath went public soon afterward and renamed itself HearMe, Inc.
In December 2000, GameSpy bought the Roger Wilco intellectual property. In early 2001, they integrated an updated version of the client software into their game server browser, GameSpy Arcade. Players could use the Roger Wilco software if they bought a subscription to GameSpy's Game Tools suite. David Lewis licensed SDK versions of the voice technology to virtually every major game publisher, including Activision, EA, Microsoft, Ubisoft, and others. The licensing arrangement with Microsoft enabled the use of the Voice SDK for Microsoft's Xbox and required that all multiplayer Xbox game developers include in-game voice chat capabilities. Today, virtually every leading online multiplayer game includes voice chat, building on the pioneering efforts of the Resounding team.
GameSpy published the final version of the Roger Wilco client for Windows on July 8, 2003. That year, a vice president of consumer products at GameSpy Industries told The Boston Globe that Roger Wilco had about 5 million users.
See also
Comparison of VoIP software
Mumble
TeamSpeak
Ventrilo
References
Further reading
External links
1999 software
GameSpy
VoIP software
Windows Internet software |
17325661 | https://en.wikipedia.org/wiki/Marathon%20Technologies | Marathon Technologies | Marathon Technologies Corp. was founded by senior executives and engineers responsible for developing Digital Equipment Corporation's VAXft fault-tolerant systems. The team used this experience to create the first software and networking technology that allowed multiple Windows/Intel servers to operate as a single fault-tolerant system.
Marathon Technologies migrated its technology in 2004 to a software-only product named everRun that works with standard off-the-shelf x86 Intel and AMD servers with Windows Server 2003 and unmodified Windows applications.
In 2007, Marathon Technologies announced its v-Available product initiative, designed to fill the gap in the market for effective high availability software for server virtualization. In the spring of 2008 the company released everRun VM for Citrix XenServer, the first in the series of v-Available products from Marathon Technologies, providing fault-tolerant high availability and disaster recovery protection.
In late 2010, Marathon released everRun MX, the industry's first software-based fault tolerant solution for symmetric multiprocessing (SMP) and multi-core servers and applications.
Marathon Technologies is headquartered in Littleton, Massachusetts, United States, with additional offices in the United States, Europe and Asia. Marathon Technologies has taken venture funding from Atlas Venture, Longworth Venture Partners and venture capital firm Sierra Ventures.
Marathon Technologies was acquired by Stratus in September 2012.
References
External links
Official Website
Official Blog
24/7 Uptime - UK elite partner
Stratus acquisition
Companies established in 1993
Software companies based in Massachusetts
Software companies of the United States |
27500501 | https://en.wikipedia.org/wiki/OS/VS2%20%28SVS%29 | OS/VS2 (SVS) | Single Virtual Storage (SVS) refers to Release 1 of Operating System/Virtual Storage 2 (OS/VS2); it is the successor system to the MVT option of Operating System/360. OS/VS2 (SVS) was a stopgap measure pending the availability of MVS, although IBM provided support and enhancements to SVS long after shipping MVS.
SVS provides a single 16MiB address space which is shared by all tasks in the system, regardless of the size of physical memory.
Differences from MVT
OS/360 used the Interval Timer feature for providing time of day and for triggering time-dependent events. The support for S/370 made limited use of new timing facilities, but retained a dependency on the Interval Timer. SVS uses the TOD Clock, Clock Comparator and CPU Timer exclusively.
In the wake of the Applied Data Research lawsuit, IBM decided to develop chargeable versions of several applications, mostly language processors, although it is not clear whether the lawsuit was actually the deciding factor. As a result, SVS does not include a sort/merge program or any language processor other than the new Assembler (XF) (replacing Assembler (F)), which is required for the system generation process.
Authorized Program Facility (APF) is a new facility that limits the use of certain dangerous services to authorized programs, that is, programs link-edited with AC(1) and loaded from the link list, LPA, or SYS1.SVCLIB. In MVS, IBM enhanced the facility to allow an installation to designate additional data sets as authorized.
Because the Reader/Interpreter in SVS runs in pageable storage, there is much less benefit to the Automatic SYSIN Batching (ASB) Reader, and SVS does not include it. OS/360 has a facility called Direct SYSOUT (DSO) whereby specific output classes can be diverted to data sets on tape instead of normal SPOOL datasets. As DASD prices dropped, the facility dropped from use, and SVS does not provide it.
OS/360 provides limited interactive facilities in Conversational Remote Job Entry (CRJE), Graphic Job Processing (GJP), Interactive Terminal Facility (ITF) and Satellite Graphic Job Processing (SGJP) prior to the Time Sharing Option (TSO), but IBM did not carry those forward to SVS. TSO continues to provide equivalent facilities, except that it does not support use of a 2250 as a terminal. Use of 2250 from a batch job using Graphics Access Method (GAM) and Graphics Subroutine Package (GSP) remains supported. OS/360 includes a batch debugging facility named TESTRAN; it is clumsier than the equivalent facility in IBSYS/IBJOB, and was not used much. With the advent of TSO TESTRAN became even less relevant, and SVS does not include it.
Dynamic Support System (DSS) was a new OS/VS debugging facility for system software. It remained available until Selectable Unit 64 and MVS/System Extensions Release 2.
The storage key facility of System/360 and System/370 keeps track of when a page frame has been modified. The Machine Check Handler (MCH) in SVS can correct a parity or ECC error in an unmodified page by unassigning the damaged page frame and marking the page table entry to cause a page-in operation into a newly assigned page frame. This replaces the special handling of refreshable transient SVC routines in OS/360.
SVS expands the size of the Error Recovery Procedure (ERP) transient area.
None of the processors on which SVS runs have an equivalent to the 2361 Large Core Storage (LCS), and thus there is no need for Hierarchy support, which SVS does not provide. SVS also dropped support for some obsolete I/O equipment.
In OS/360 load modules can be permanently loaded at Initial Program Load (IPL) time into an area of real storage known as the Link Pack Area (LPA). In SVS the LPA was split into three areas, each of which is searched in turn.
The installation can specify a list of modules to be loaded into the Fixed Link Pack Area (FLPA). These are loaded into V=R storage at IPL time.
The installation can specify a list of load modules to be loaded into the Modified Link Pack Area (MLPA) at IPL time. These modules are subject to normal paging.
SVS uses a dedicated paging data set to back up the Permanent Link Pack Area (PLPA). In a normal IPL, SVS will simply allow modules in the existing PLPA paging data set to be paged in at need, but the operator can specify the CLPA option to load all of the load modules from SYS1.LPALIB into the PLPA and write the new PLPA into the PLPA paging data set.
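The three-area lookup described above can be sketched as a small model (illustrative Python only; the function and the dictionary representation are hypothetical, and the sketch assumes the areas are searched in the order listed above):

```python
def find_module(name, flpa, mlpa, plpa):
    """Illustrative model of the Link Pack Area search, not actual SVS code.

    Each area is represented as a dict mapping module name -> loaded module,
    and the areas are checked in turn: FLPA, then MLPA, then PLPA.
    """
    for area in (flpa, mlpa, plpa):
        if name in area:
            return area[name]
    return None  # not found in any LPA; a real system would search libraries next
```

In this sketch, a module found in an earlier area shadows a copy of the same name in a later one, mirroring the "searched in turn" behavior.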
OS/360 has support for a multiprocessor version of the 360/65. SVS provides no equivalent support; customers wanting to run a multiprocessor System/370 have to use MVS.
OS/360 introduced Telecommunications Access Method (TCAM) as the successor to Queued Telecommunications Access Method (QTAM). SVS does not include QTAM.
SVS does not include Remote Job Entry (RJE). However, ASP and HASP provide comparable facilities.
Because of the larger (16 MiB) address space that SVS provides, there is less external fragmentation than in MVT, and Rollin/Rollout would provide less of a benefit. SVS does not include it.
In OS/360, transient SVC routines are loaded into 1 KiB areas known as SVC Transient Areas, and a considerable amount of code is required to manage them. In SVS, all SVC routines are preloaded into virtual storage and there are no SVC Transient Areas.
While SVS retains the SPOOL support of OS/360, most shops used ASP or HASP, the precursors of JES3 and JES2, respectively.
Storage Management
Storage management in SVS is similar to that in MVT, with a few notable differences. The description below is somewhat simplified; it glosses over some special cases.
SVS has 16MiB of addressable storage in a single address space, regardless of the size of physical memory. The nucleus and the FLPA are Virtual=Real (V=R), meaning that each virtual address in that area is mapped to the corresponding physical address.
A job step in SVS can request V=R storage; all assigned pages in a V=R region are mapped to the corresponding real page frames.
When a program check occurs with an interrupt code of 16 or 17, SVS checks whether a page has been assigned to the virtual address. If it has, SVS will assign a page frame and read the contents of the page into it. If no page has been assigned, SVS causes an Abnormal End (ABEND) with the same ABEND code (0C4) that MVT would have used for a protection violation.
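The decision described above can be modeled in a few lines (illustrative Python, not actual SVS code; the page-table and backing-store structures are hypothetical):

```python
ABEND_0C4 = "0C4"  # same code MVT used for a protection violation

class PageFaultError(Exception):
    """Raised when a reference touches a virtual address with no assigned page."""
    def __init__(self):
        super().__init__(f"ABEND {ABEND_0C4}: translation/protection exception")

def handle_translation_fault(page_table, free_frames, page_number, backing_store):
    """Illustrative model of SVS handling of program-check codes 16/17."""
    entry = page_table.get(page_number)
    if entry is None or not entry["assigned"]:
        # No page assigned at this virtual address: abnormal end with 0C4.
        raise PageFaultError()
    # Page assigned but not in real storage: assign a frame and read
    # ("page in") the page's contents into it.
    frame = free_frames.pop()
    entry["frame"] = frame
    entry["in_storage"] = True
    backing_store.read_into(page_number, frame)
    return frame
```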
SVS provides services for page fixing and unfixing. When a page is fixed, its page frame is not subject to page stealing. The primary purpose of page fixing is I/O.
I/O
I/O channels on S/370 (and successors) do not have the ability to do address translation. However, as part of the support for virtual storage operating systems, IBM provided the Indirect Data Address (IDA) feature. A Channel Command Word (CCW) with the IDA bit set points to an IDA list (IDAL) rather than directly to the I/O buffer.
SVS provides a CCW translation service as part of the Execute Channel Program (EXCP) SVC. EXCP will do any necessary page fixing, allocate storage for IDA lists, translate virtual addresses to real, put the translated addresses into the appropriate IDA words and put the real addresses of the IDA lists into the translated CCW's. When an I/O completes, EXCP reverses the process, freeing storage and translating status back into virtual.
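As a rough model of that translation step (illustrative Python only; the CCW and IDAL representations are hypothetical, and the sketch assumes IDA entries describe 2 KiB blocks):

```python
PAGE = 2048  # S/370 IDA works in 2 KiB blocks

def translate_ccw(ccw, virt_to_real, fix_page):
    """Illustrative model of EXCP's CCW translation, not actual SVS code.

    `ccw` is a dict with the buffer's virtual address and byte count;
    `virt_to_real` maps a virtual address to a real one; `fix_page`
    fixes a page so its frame cannot be stolen during the I/O.
    """
    start, count = ccw["addr"], ccw["count"]
    idal = []
    addr = start
    while addr < start + count:
        fix_page(addr // PAGE)                 # keep the frame resident
        idal.append(virt_to_real(addr))        # real address of this block
        addr = (addr // PAGE + 1) * PAGE       # advance to next block boundary
    translated = dict(ccw)                     # leave the original CCW intact
    translated["ida"] = True                   # set the IDA flag
    translated["idal"] = idal                  # CCW now points at the IDA list
    return translated
```

When the I/O completes, the reverse of this process (unfixing pages, freeing the IDAL, translating status back to virtual) would be performed, as the text describes.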
In addition, SVS provides the Execute Channel Program in Real Storage (EXCPVR) SVC for privileged applications that do their own page fixing and build their own IDA lists.
Independent Component Releases (ICRs)
IBM provided several enhancements to SVS that were not shipped with SVS initially. These included:
Telecommunications Access Method (TCAM) Release 10
Virtual Sequential Access Method (VSAM)
Virtual Telecommunications Access Method (VTAM) Release 2
References
Notes
IBM mainframe operating systems
1972 software |
10641780 | https://en.wikipedia.org/wiki/Expense%20management | Expense management | Expense management refers to the systems deployed by a business to process, pay, and audit employee-initiated expenses. These costs include, but are not limited to, expenses incurred for travel and entertainment. Expense management includes the policies and procedures that govern such spending, as well as the technologies and services utilized to process and analyze the data associated with it.
Software to manage the expense claim, authorization, audit and repayment processes can be obtained from organizations that provide licensed software, implementation, and support services, or alternatively from software as a service (SaaS) providers. SaaS providers offer on-demand web-based applications managed by a third party to improve the productivity of expense management.
Steps
Expense management automation has two aspects: the process an employee follows in order to complete an expense claim (for example, logging a hotel receipt or submitting mobile phone records) and the activity accounts or finance staff undertake to process the claim within the finance system.
Typically, a manual process will involve an employee completing a paper, spreadsheet, or graphical user interface-based expense report that they then forward, along with the relevant tax invoices (receipts), to a manager or other controller for approval. Once the manager has approved the claim, they forward it on to the accounts department for processing. The accounts staff then key each expense item into the company's finance system before filing the claim and receipts away. In a Software as a Service implementation, these processes are largely automated and the submission and approvals processes are transacted electronically.
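The submit, approve, and pay flow described above can be sketched as a minimal workflow model (illustrative Python; the class and function names are hypothetical, not any particular product's API):

```python
from dataclasses import dataclass, field

@dataclass
class ExpenseClaim:
    """A claim as submitted by an employee: line items plus an audit trail."""
    employee: str
    items: list                       # (description, amount) pairs
    status: str = "submitted"
    history: list = field(default_factory=list)

def approve(claim, manager):
    # Manager review: the first control point in the workflow.
    claim.status = "approved"
    claim.history.append(("approved", manager))

def process_payment(claim):
    # Accounts processing: only approved claims reach the finance system.
    if claim.status != "approved":
        raise ValueError("claim must be approved before payment")
    total = sum(amount for _, amount in claim.items)
    claim.status = "paid"
    claim.history.append(("paid", total))
    return total
```

An automated system essentially moves claims through these states electronically, with the audit trail replacing filed paper receipts.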
Expense Management automation is the means by which an organization can significantly reduce transaction costs and improve management control when logging, calculating and processing corporate expenses. Independent research evaluating the use of automated expense management systems has confirmed that the cost of processing an expense claim is reduced as the level of automation increases.
Organizations may automate their expense management processes for reasons such as compliance, cost reduction, control, and employee productivity.
Types
Business strategies are tailored to various types of expense management:
Spreadsheets: Spreadsheets can be an easy, cheap way to keep track of expenses, but they still have paper receipts that go along with them that can be lost or damaged. This can also be a labor-intensive method and it can be confusing if employees are not good at using spreadsheets.
Paper forms: Paper forms work well with paper receipts. This is also an inexpensive way to manage expense reports. However, it can amount to a lot of manual work logging and tracking these reports for employees, approvers, and the accounting staff who pay the bills.
Software: Software reduces the workload, but it also can cost more in the beginning to implement. According to the Aberdeen Group's report, "Best-In-Class T&E Expense Management: How They Do It," software can solve the major problems of compliance, manual labor, approval time, and the cost of expense reporting overall.
Telecom expense management
Telecom Expense Management (TEM) is the process of managing a large enterprise's communications costs, including fixed voice and data, mobile devices, Unified Communications and Collaboration (UCC), VoIP, and other IT-related services. The management of wireless and wireline service and asset expenses is labeled as telecom expense management. Historically, telecom expenses were managed as general services. In recent years, more and more organizations associate telecom expense with IT services. This change in management is due to a shift in global business strategies towards evolving technologies.
Travel expense management
Travel expense management (often referred to as "T+E") is defined as the means to organize and manage travel arrangements and costs for traveling employees.
Technology expense management
Technology expense management or IT expense management is the management of technology costs such as software licenses, computer equipment, applications, etc. Technology expense management also includes the management of services related to technology (SaaS, PaaS, etc.). Technology expense management activities are often performed through the use of various technology tools (bill of IT, management software, workflows, etc.).
See also
Corporate travel management
References
Human resource management
Expense
Travel management |
57598421 | https://en.wikipedia.org/wiki/United%20States%20Army%20Futures%20Command | United States Army Futures Command | United States Army Futures Command (AFC) is a United States Army command aimed at modernizing the Army.
As of 2018 it was focused on six priorities: long-range precision fires, Next Generation Combat Vehicle, future vertical lift platforms, a mobile & expeditionary Army network, air & missile defense capabilities, and soldier lethality.
AFC's cross-functional teams (CFTs) are Futures Command's vehicle for sustainable reform of the acquisition process for the future.
Futures Command (AFC) was established in 2018 as a peer of Forces Command, Training and Doctrine Command, and Army Materiel Command (AMC), the Army commands that provide forces, training and doctrine, and materiel, respectively. The other Army commands focus on their readiness to "fight tonight" when called upon by the nation. In contrast, AFC is focused on future readiness for competition with near-peers, who have updated their capabilities. The command is supported by the United States Army Reserve Innovation Command (also known as the 75th Innovation Command).
By October 2021, the Chief of Staff of the Army was able to project that 24 of the top 35 priority programs for modernization would be fielded in Fiscal year 2023 (FY2023).
Overmatch of the capability of a competitor or adversary is one of the goals of the Futures Command. More specifically, the imposition of multiple simultaneous dilemmas upon a competitor or adversary is a goal of the US Army: to get into a position of relative advantage. By 2021, Army leadership recognized that new Army formations (the multi-domain task force) had the ability to simultaneously compete with, and also threaten an adversary, with its new capability, across domains (space, cyber, disinformation) of the conflict continuum. By 2022 or 2023, a new concept for command and control (JADC2) will have been largely prototyped.
History
U.S. Army Futures Command was activated in the summer of 2018. The Decker-Wagner report on the 2010 Army Acquisition Review (Jan 2011) listed numerous changes to the acquisition process; the recommendation to disestablish RDECOM was not followed. Instead a unitary Futures Command, to unify development over the life cycle was moved forward by an Acting Secretary of the Army, and Vice Chief of Staff of the Army, who established a task force for modernization in 2016-2017 using cross-functional teams of subject matter experts to drive initial actions.
AFC declared its full operational capability in July 2019, after an initial one-year period. The FY2020 military budget allocated $30 billion for the top six modernization priorities over the next five years. The $30 billion came from $8 billion in cost avoidance and $22 billion in terminations. Over 30 projects are envisioned to become the materiel basis needed for overmatching any potential competitors in the 'continuum of conflict' over the next ten years, in multi-domain operations (MDO). By 2018 a fundamental strategy was formulated, involving simultaneous integrated operations across domains. This strategy involves pushing adversaries to standoff, by presenting them with multiple simultaneous dilemmas. By 2028, the ability to project rapid, responsive power across domains will have become apparent to potential adversaries.
From an initial 12 people at its headquarters in 2018, the command grew to 24,000 people across 25 states and 15 countries in 2019. The apparent rapid expansion came from research facilities and personnel (including ARCIC and RDECOM) migrating from other commands and parts of the Army, such as the United States Army Research Laboratory.
Transition to multi-domain operations (MDO)
According to then-Secretary Ryan McCarthy, the three elements in Futures Command are to be:
Futures and Concepts: assess gaps (needs versus opportunities, given a threat). Concepts for realizable future systems (with readily harvestable content) will flow into TRADOC doctrine, manuals, and training programs.
Combat Development: stabilized concepts. Balance the current state of technology against the cash-flow requirements of the defense contractors providing the technology, so that concepts become deliverable experiments, demonstrations, and prototypes in an iterative process of acquisition. (See Value stream)
Combat Systems: experiments, demonstrations, and prototypes. Transition to the acquisition, production, and sustainment programs of AMC.
Then-Secretary of the Army, Mark Esper emphasized that the 2018 administrative infrastructure for the Futures and Concepts Center (formerly ARCIC) and United States Army Combat Capabilities Development Command (CCDC, now called DEVCOM, (formerly RDECOM)) remains in place at their existing locations. What has changed or will change is the layers of command (operational control, or OPCON) needed to make a decision. He said "You've got to remain open to change, you've got to remain flexible, you've [got] to remain accessible. That is the purpose of this command."
Cross-Functional Teams (CFTs)
Under Secretary McCarthy characterized a Cross-Functional Team (CFT) as a team of teams, led by a requirements leader, program manager, sustainer and tester. Each CFT must strike a balance for itself amid constraints: the realms of requirements, acquisition, science and technology, test, resourcing, costing, and sustainment. A balance is needed for a CFT to produce a realizable concept before a competitor achieves it. The Army Requirements Oversight Council (AROC) itself serves as a kind of CFT, operating at a higher level in response to Congressional oversight, budgeting, funding, policy, and authorization for action.
CFTs for materiel and capabilities were first structured in a task force, in order to de-layer the Army Commands. Each CFT addresses a capability gap, which the Army must now match for its future: there can be a Capability Development Integration Directorate (CDID), for each CFT. Initially, the CFTs were placed as needed; eventually they might each co-locate at a Center of Excellence (CoE) listed below. For example, the Aviation CoE at Fort Rucker, in coordination with the Aviation Program Executive Officer (PEO), also contains the Vertical Lift CFT and the Aviation CDID.
Modernization reform is the priority for AFC, in order to achieve readiness for the future.
The CFTs will be involved in all three of AFC's elements: Futures and concepts, Combat development, and Combat systems. "We were never above probably a total of eight people" —BG Wally Rugen, Aviation CFT. Four of the eight CFT leads have now shifted from dual-hat jobs to full-time status. Each CFT lead is mentored by a 4-star general.
Although AFC and the CFTs are a top priority of the Department of the Army, as AFC and the CFTs are expected to unify control of the $30 billion modernization budget, "The new command will not tolerate a zero-defects mentality. 'But if you fail, we'd like you to fail early and fail cheap,' because progress and success often builds on failure." —Ryan McCarthy. Holland notes that prototyping applies to the conceptual realm ('harvestable content') as much as it applies to the hardware realm.
A 2019 Government Accountability Office (GAO) report cautions that lessons learned from the CFT pilot are yet to be applied; Holland notes that this organizational critique applies to prototyping hardware, a different realm than concept refinement ("scientific research is a fundamentally different activity than technology development"). Also in 2019 the GAO recommended that the government establish a process to ensure that CFTs implement their intended business reforms; however by 2021 the office of the Chief Management officer (CMO) had been disestablished.
Joint collaboration on modernization
The Secretaries of the Army, Air Force, and Navy meet regularly to take advantage of overlap in their programs:
Hypersonics
Hypersonics: The US Army (August 2018) has no tested countermeasure for intercepting maneuverable hypersonic weapons platforms, and in this case the problem is being addressed in a joint program of the entire Department of Defense. The Army is participating in a joint program with the Navy and Air Force to develop a hypersonic glide body, by mutual agreement between the respective secretaries. In order to rapidly develop this capability, a dedicated program office was established on behalf of the joint services. A division of responsibility was agreed upon, with researchers who demonstrated hypersonic capability in 2011 teaching industrial vendors in order to transfer the technology. Joint programs in hypersonics are informed by Army work; however, at the strategic level, the bulk of the hypersonics work remains at the Joint level. Long Range Precision Fires (LRPF) is an Army priority, and also a DoD joint effort. The Army and Navy's Common Hypersonic Glide Body (C-HGB) had a successful test of a prototype in March 2020. After the US realized that a catch-up effort was needed, billions of dollars were expended by 2020. A wind tunnel for testing hypersonic vehicles is being built at the Texas A&M University System's RELLIS Campus in Bryan, Texas (2019). The Army's Land-based Hypersonic Missile "is intended to have a range of 1,400 miles". By adding rocket propulsion to a shell or glide body, the joint effort shaved five years off the likely fielding time for hypersonic weapon systems. Countermeasures against hypersonics will require sensor data fusion: both radar and infrared sensor tracking data will be required to capture the signature of a hypersonic vehicle in the atmosphere.
In 2021 the GAO counted 70 separate hypersonics projects, in both offensive and defensive categories, overseen by DoD's Office of the Under Secretary of Defense for Research and Engineering, which oversees only research and development; the projects do not fall under DoD's Under Secretary of Defense for Acquisition and Sustainment until they are ready for the acquisition phase.
By 2021, the Missile Defense Agency (MDA) realized that it nearly had a countermeasure to hypersonic boost-glide weapons, using existing data on adversary hypersonic systems gathered from existing US satellite and ground-based sensors. MDA fed this data into its existing system models and concluded that the adversary hypersonic weapon's glide phase offered the best chance for MDA to intercept it. MDA next issued a request for information (RFI) to the defense community for building interceptors (denoted GPI, for glide phase interceptor) against the glide phase of such a weapon. GPIs would be guided by Hypersonic and Ballistic Tracking Space Sensors (HBTSS). These GPI interceptors could first be offered to the Navy for Aegis to intercept using the C2BMC, and later to the Army for THAAD to intercept using IBCS.
Multi-Domain Operations (MDO); Joint warfighting concept (JADC2)
Multi-Domain Operations (MDO): Joint planning and operations are also part of the impending DoD emphasis on multi-domain operations. Multi-domain battalions, first stood up in 2019, comprise a single unit for the air, land, space, and cyber domains. A hypersonics-based battery similar to a THAAD battery is under consideration for this type of battalion, denoted a strategic fires battalion. In 2019, as part of a series of globally integrated exercises, these capabilities were analyzed; using massive simulation, the need for a new kind of command and control (now denoted JADC2) to integrate this firepower was explored.
The ability to punch-through any standoff defense of a near-peer competitor is the goal which Futures Command is seeking. For example, the combination of F-35-based targeting coordinates, Long range precision fires, and Low-earth-orbit satellite capability overmatches the competition, according to Lt. Gen. Wesley. Critical decisions to meet this goal will be decided by data from the results of the Army's ongoing tests of the prototypes under development.
For example, in Long Range Precision Fires (LRPF), the director of the LRPF CFT envisions one application as an anti-access/area denial (A2AD) probe; this spares resources from the other services; by firing a munition with a thousand-mile range at an adversary, LRPF would force an adversary to respond, which exposes the locations of its countermeasures, and might even expose the location of an adversary force's headquarters. In that situation an adversary's headquarters would not survive for long, and the adversary's forces would be subject to defeat in detail. But LRPF is only one part of the strategy of overmatch by a Combatant commander.
In August–September 2020. at Yuma Proving Ground, the US Army engaged in a five-week exercise to rapidly merge capabilities in multiple-domains. The exercise prototyped a ground tactical Network, pushing it to its limits of robustness (as of 2020, 36 miles on the ground, and demonstrated 1500-mile capability above the ground, with kill chains measured in seconds) in the effort to penetrate anti-access/area denial (A2AD) with long-range fires. Longer-range fires are under development, ranging from hundreds of miles to over 1000 miles, with yearly iterations of Project Convergence being planned.
MDO (multi-domain operations) and JADC2 (joint all-domain command and control) thus entails:
Penetrate phase: satellites detect enemy shooters
Dis-integrate phase: airborne assets remove enemy long range fires
Kinetic effect phase: Army shooters, using targeting data from aircraft and other sensors, fire on enemy targets.
Army Chief of Staff Gen. James C. McConville will discuss the combination of MDO and JADC2 with Air Force Chief of Staff Gen. Charles Q. Brown. In October 2020 the Chiefs agreed that Futures Command, and the Air Force's A5 office will lead a two-year collaboration 'at the most "basic levels" by defining mutual standards for data sharing and service interfacing' in the development of Combined Joint All-Domain Command and Control (CJADC2).
The ability of the joint services to send data from machine to machine was exercised in front of several of the Joint Chiefs of Staff in April 2021; this is a prerequisite capability for Convergence of MDO and JADC2.
Partners
AFC is actively seeking partners outside the military, including research funding to over 300 colleges and universities, but with one-year program cycles. "We will come to you. You don't have to come to us. —General Mike Murray, 24 August 2018"
Multiple incubator tech hubs are available in Austin, especially Capital Factory, with offices of the Defense Innovation Unit (DIUx) and AFWERX (the USAF tech hub). Gen. Murray will stand up an Army Applications Lab there to accelerate acquisition and deployment of materiel to the soldiers, using artificial intelligence (AI) as one acceleration technique; Murray will hire a chief technology officer for AFC. Gen. Murray, in seeking to globalize AFC, has embedded U.S. military allies into some of the CFTs.
AI; Software; Data; ISR; Allies and partners
As of 2021, disinformation at scale appears to be AI-generated.
Artificial Intelligence (AI) Modernization—The Secretary of the Army has directed the establishment of an Army AI Task Force (A-AI TF) to support the DoD Joint AI center. The execution order will be drafted and staffed by Futures Command:
The Army Applications Laboratory was established in 2018, along with the stand-up of the Army Futures Command, to act as a concierge service across the Army's Future Force Modernization Enterprise and the broader commercial marketplace of ideas.
Army AI task force (its relationship with the CFTs is cross-cutting, in the same sense as the Assured Position, Navigation, Timing (A-PNT) CFT and the Synthetic Training Environment (STE) CFT are also cross-cutting) will use the resources of the Army to establish scalable machine learning projects at Carnegie Mellon University
the Army CIO/G-6 will create an Identity, Credential, and Access Management system to efficiently issue and verify credentials to non-person entities (AI agents and machines)
DCS G-2 will coordinate with CG AFC, and director of A-AI TF, to provide intelligence for Long-Range Precision Fires
CG AMC will provide functional expertise and systems for maintenance of materiel with AI
AFC and A-AI TF will establish an AI test bed for experimentation, training, deployment, and testing of machine learning capabilities and workflows. Funding is assured for Fiscal Year 2019.
A global network to counter cyberattacks, much like Five Eyes, is the recommendation for multi-domain operations (MDO): a unified network presenting a synoptic view of any cyber operation to all the combatant commands simultaneously. 'Decision dominance' is a tenet of the 'Joint warfighting concept'.
Defense Advanced Research Projects Agency (DARPA) AlphaDogfight: trials of eight AI teams, which began learning how to fly in September 2019. In August 2020 the eight AI agents faced each other in a series of simulated fights. The simulations included the g-forces which limit a human pilot (accelerations greater than 9 g will cause most forward-facing human pilots to black out; AI agents are not subject to these constraints). On 20 August 2020 the champion AI agent met a human General Dynamics F-16 Fighting Falcon fighter pilot in simulated combat, consistently defeating the human pilot in a series of dogfights.
DoD's Joint AI Center (JAIC) is providing a Joint Common Foundation, a cloud-based AI toolkit for any DoD organization (viz., Futures Command) to use. JAIC is seeking to curate the flood of data at DoD to allow systematic, reliable datasets which are usable for machine learning.
Adaptive Distributed Allocation of Probabilistic Tasks (ADAPT) is a DARPA model for testing AI-to-human communication in a toy environment.
In 2021 DoD is requesting funding for 600 separate AI efforts for FY2022 ($874 million), up from 400 AI efforts in FY2021. The Army is using machine learning to extract targeting data from satellite sensors for its JADC2 effort.
Futures Command will stand up the Army Software Factory in Austin in August 2021, to immerse Soldiers and Army civilians of all ranks in modern software development. Similar in spirit to the Training with Industry program, participants are expected to take these practices back with them, to influence other Army people in their future assignments, and to build up the Army's capability in software development. The training program lasts three years, and will produce skill sets for trainees as product managers, user experience and user interface designers, software engineers, or platform engineers. The AI Work Force Development program and this Software Factory will complement the Artificial Intelligence Task Force.
Tapping in to its personnel system, the Army has identified soldiers who can already code at Ph.D.-level, but who are in unexpected MOSs.
AFC is seeking to design signature systems in a relevant time frame according to priorities of the Chief of Staff of the Army (CSA). AFC will partner with other organizations such as Defense Innovation Unit Experimental (DIUx) as needed.
If a team from industry presents a viable program idea to a CFT, that CFT connects to the Army's requirements developers, Secretary Esper said, and the program prototype is then put on a fast track.
The Secretary of the Army has approved an Intellectual Property Management Policy, to protect both the Army and the entrepreneur or innovator.
For example, the Network CFT and the Program Executive Office Command, Control, Communications—Tactical (PEO C3T) hosted a forum on 1 August 2018 for vendors to learn what might be testable and deployable in the near future. A few of the hundreds of white papers from the vendors, adjudged to be 'very mature ideas', were passed to the Army's acquisition community, while many others were passed to the United States Army Communications-Electronics Research, Development and Engineering Center (CERDEC) for continuation in the Army's effort to modernize the network for combat. Although some test requirements were inappropriately applied, the Command post computing environment (CPCE) has passed a hurdle.
While seeking information, the Army is especially interested in ideas that accelerate an acquisition program, as in the Future Vertical Lift Requests for Information (RFIs): "provide a detailed description of tailored, alternative or innovative approaches that streamlines the acquisition process to accelerate the program as much as possible". In January 2020 the Optionally manned fighting vehicle (OMFV) solicitation was cancelled when the OMFV's requirements added up to an unobtainable project; in February 2020 Futures Command began soliciting industry for achievable OMFV approaches.
The 2020 top ten semifinalists (who will each receive $120,000) are:
Bounce Imaging, for a tactical throwable camera (self orienting, pointable camera)
GeneCapture, for deployable medical tests
Inductive Ventures, for magnetic braking of helicopters
IoT/AI, for hardware IoT AI devices
LynQ Technologies, for a GPS beacon
KeriCure, for wound care
MEI Micro, for Micro Electronic-Mechanical System Inertial Measurement Unit (assured position, navigation, and timing—A-PNT)
Multiscale Systems, for meta-material
Novaa, for single-aperture antennas (multi-band rather than one dedicated antenna per application)
Vita Inclinata, for stabilized anti-spin hoisting to pull injured people on a stretcher into a hovering helicopter
The COVID-19 pandemic triggered the Army to run a Ventilator Challenge; entrants could submit their ideas online for immediate consideration, with a $100,000 prize and possible Army contract to encourage participation. In 1964 Henrik H. Straub of Harry Diamond Labs, a predecessor to CCDC Army Research Laboratory, invented the Army Emergency Respirator (now termed a ventilator). This ventilator is one application of the fluidic amplifier (a 1957 Harry Diamond Labs invention), which allows the labored breathing of the patient to control the flow from an externally purified air stream, augmenting the air flow into a patient's lungs.
TRX Systems won an award for technology which allows navigation in a GPS-denied environment, an A-PNT priority. The award was delayed by the COVID-19 pandemic, which allowed the company more time for business development.
AFC events
By 13 October 2021 the Chief of Staff of the Army could announce that the majority of the Army's Futures Command's 31 signature systems, and the four rapid capability projects of the Rapid Capabilities and Critical Technologies Office would be fielded by fiscal year 2023 (FY2023).
Acquisition
Futures Command partners with the ASA(ALT), who, in the role of the Army Acquisition Executive (AAE), has milestone decision authority (MDA) at multiple points in a Materiel development decision (MDD). From the perspective of AFC, which seeks to modernize, the approach is to consolidate the relevant expertise into the relevant CFT. The CFT balances the constraints needed to realize a prototype (beginning with realizable requirements, science and technology, test, etc.) before entering the acquisition process; typically the Army prototypes on its own and currently (2019) initiates acquisition at Milestone B, so that the Acquisition Executive, with the concurrence of the Army Chief of Staff, can decide on production as a program of record at Milestone C. The prototype is then refined to address the factors needed to pass Milestone decisions A, B, and C, each of which requires milestone decision authority. This consolidation of expertise reduces the risks in an MDD, for the Army to admit a prototype into a program of record. The existing processes (as of April 2018) for an MDD have been updated to clarify their place in the life cycle of a program of record: over 1,200 programs and projects were reviewed, and by October 2019 over 600 programs of record had been moved from the acquisition (development for modernization) phase to the sustainment phase (for mature projects, to continue their manufacture and fielding to the brigades). An additional life-cycle management action is underway to re-examine which of these projects and programs should be divested. (Surplus materiel might well go to the Security Assistance Command, perhaps to Foreign Military Sales.)
The emphasis remains with Futures Command, which selects programs to develop. In order to achieve its mission of overmatch, each Futures Command CFT partners with the acquisition community. This community, the Army acquisition workforce (AAW), includes an entire Army branch (the Acquisition Corps), the U.S. Army Acquisition Support Center (USAASC), Army Contracting Command, and others (this list is incomplete). The Principal Military Deputy to the ASA(ALT) is also deputy commanding general for Combat Systems, Army Futures Command, and leads the Program Executive Officers (PEOs); he has directed each PEO which does not have a CFT to coordinate with to form one immediately, at least informally.
The PEOs work closely with their respective CFTs. The list of CFTs and PEOs below is incomplete.
Operationally, the CFTs offer "de-layering" (fewer degrees of separation between the echelons of the Army—Rugen estimates two degrees of separation), and provide a point of contact (POC) for Army reformers interested in adding value in the midst of constraints to be balanced while modernizing. "... and if we're really good, we'll continue to adapt. Year over year over year." —Secretary Esper (See Value stream.)
Prototyping and experimentation
"Our new approach is really to prototype as much as we can to help us identify requirements, so our reach doesn’t exceed our grasp. ... A good example is Future Vertical Lift: The prototyping has been exceptional." —Secretary of the Army Mark Esper. The development process will be cyclic, consisting of prototype, demonstration/testing, and evaluation, in an iterative process designed to unearth unrealistic requirements early, before prematurely including that requirement in a program of record.
AFC activities include at least one Cross-functional team, its Capability development integration directorate (CDID), and the associated Battle Lab, for each (Army Center of Excellence (CoE)) respectively. Each CDID and associated Battle Lab work with their CFT to develop operational experiments and prototypes to test.
ASA(ALT), in coordination with AFC, has dotted-line relationships between its PEOs and the CFTs. In particular, the Rapid Capabilities and Critical Technologies Office of ASA(ALT) has a PEO who is charged with developing experimental prototype 'units of action' for rapid fielding to the Soldiers. The prototypes are currently for Long range hypersonic weapons, High energy laser defense, and Space, as of June 2019. Speed and range are the Army capabilities which are being augmented, with spending on these capabilities tripling between 2017 and 2019.
Tests are run by JMC and White Sands Missile Range, which hosts ATEC. As United States Army Test and Evaluation Command (ATEC) reports directly to the Army Chief of Staff, the test support level from ATEC is to be specified by the CFT, or PEO. Fort Bliss and WSMR together cover 3.06 million acres, large enough to test every non-nuclear weapon system in the Army inventory.
JMC runs live developmental experiments to test and assess MDO concepts or capabilities that support the Army's six modernization priorities; the results are then analyzed by The Research and Analysis Center (TRAC), based at Fort Leavenworth, or by AMSAA, now denoted the Data Analysis Center, at APG. CCDC, now called DEVCOM (formerly RDECOM, at APG), includes the several Army Research Laboratory locations (ARLs), as well as the research, development and engineering centers (RDECs) listed below:
In internal partnerships, CCDC, now called DEVCOM (formerly RDECOM) has taken Long range precision fires (LRPF) as its focus in aligning its organizations (the six research, development and engineering centers (RDECs), and the Army Research Laboratory (ARL)); as of September 2018, RDECOM's 'concept of operation' is first to support the LRPF CFT, with ARDEC. AMRDEC is looking to improve the energetics and efficiency of projectiles. TARDEC Ground Vehicle Center is working on high-voltage components for Extended range cannon artillery (ERCA) that save on size and weight. Two dedicated RDECOM people support the LRPF CFT, with reachback support from two dozen more at RDECOM. In January 2019 RDECOM was reflagged as CCDC; General Mike Murray noted that CCDC will have to support more Soldier feedback, and that prototyping and testing will have to begin before a project ever becomes a program of record.
Although the Army Research Laboratory has not changed its name, Secretary Esper notes that the CCDC objectives supersede the activities of the Laboratory; the Laboratory remains in its support role for the top-six priorities for modernizing combat capabilities.
Acquisition specialists are being encouraged to accept lateral transfers to the several research, development and engineering centers (RDECs), where their skills are needed: Ground vehicle systems center (formerly TARDEC, at Detroit Arsenal, Michigan), Aviation and missile center (formerly AMRDEC, at Redstone Arsenal), C5ISR center (formerly CERDEC, at Aberdeen Proving Ground), Soldier center (formerly NSRDEC, Natick, MA), and Armaments center (formerly ARDEC, at Picatinny Arsenal) listed below.
In 2021 candidate Robotic combat vehicles (RCVs), both medium and light RCVs, along with surrogate heavy RCVs (modified M-113s) and proxy manned control vehicles (MET-Ds), were to marshal at Camp Grayling, MI, to test a company-sized tele-operated/unmanned formation. The light RCVs had their autonomous driving software installed in November and December 2020. The robotic vehicle formation began a shakeout in April 2021. The RCVs (and the software, which is common to all 18 vehicles) enter ATEC (Army Test and Evaluation Command) safety testing through May 2022. Live-fire drills are scheduled to conclude in August 2022.
AFC branch locations
The following activities for Futures Command are at 23 locations. (A US Army center of excellence (CoE), or TRADOC Center of Excellence, can be co-located near a CFT, along with the associated Capability Development Integration Directorate (CDID) and "Battle Lab".) The interrelation between AFC and TRADOC can be seen in the role of a TRADOC Capability manager, who is responsible for DOTMLPF and reports to the TRADOC commander.
AFC HQ, Austin TX
AFSG Army Future Studies Group, 2530 Crystal Dr, Arlington, VA 22202
Futures and Concepts Center of AFC, formerly ARCIC Fort Eustis VA
JMC Joint Modernization Command, Fort Bliss, which is contiguous to WSMR
White Sands Missile Range NM, also houses ARL, TRAC, and Army Test and Evaluation Command.
FT LVN Operations research: Mission Command Battle Lab, Capability development integration directorate (CDID), The Research Analysis Center (TRAC), formerly TRADOC Analysis Center, Fort Leavenworth KS
CFT: Synthetic Training Environment (STE): The HQ for STE has opened in Orlando (28 January 2019).
CCOE Cyber CoE - (its CDID and Battle Lab), Fort Gordon GA
CFT: Mobile and Expeditionary Network
MCOE Maneuver CoE - (its CDID and Battle Lab), Fort Benning GA
CFT: Next Generation Combat Vehicle (NGCV)
CFT: Soldier Lethality
AVNCOE Aviation CoE - (its CDID), at Fort Rucker
CFT: Future Vertical Lift (FVL)
FCOE Fires CoE - (its CDID and Battle Lab), Doctrine updates to support strategic fires Fort Sill OK
CFT: Long Range Precision Fires (LRPF)
CFT: Air and Missile Defense
ICOE Intelligence CoE - (its CDID), Fort Huachuca AZ
MSCOE Maneuver Support CoE - (its CDID and Battle Lab), Fort Leonard Wood MO
SCOE Sustainment CoE - (its CDID), Fort Lee VA
APG Aberdeen Proving Ground, Aberdeen MD, also houses Combat Capabilities Development Command (CCDC, now called DEVCOM), formerly RDECOM, Army Materiel Systems Analysis Activity (AMSAA), and C5ISR center (the Command, Control, Communications, Computers, Cyber, Intelligence, Surveillance and Reconnaissance Center was formerly CERDEC)
CFT: Assured Positioning, Navigation and Timing (A-PNT)
CFT: Network CFT (N-CFT)
CFT: Long Range Precision Fires,
CCDC Armaments Center (formerly Armament research, development and engineering center—ARDEC), Picatinny Arsenal, PEO AMMO, and the Cross Functional Team for Long Range Precision Fires
CFT: Long Range Precision Fires
CCDC Ground Vehicle Systems Center (formerly Tank Automotive research, development and engineering center—TARDEC), Detroit Arsenal (Warren, Michigan)
CFT: Next Generation Combat Vehicle (NGCV)
Army Aviation and Missile Center (formerly Aviation and Missile research, development and engineering center—AMRDEC), Redstone Arsenal, Huntsville AL
CFT: Air and Missile Defense
CCDC Soldier Center (formerly Natick Soldier research, development and engineering center—NSRDEC), General Greene Ave, Natick, MA
Army Research Laboratory (ARL), Adelphi MD
ARL-Orlando Army Research Laboratory, Orlando FL
ARL West, Playa Vista CA
ARL-RTP Army Research Laboratory, Raleigh-Durham NC
AI task force at Carnegie-Mellon University
Need for modernization reform
Between 1995 and 2009, $32 billion was expended on programs such as the Future Combat System (2003-2009), with no harvestable content by the time of its cancellation. As of 2021, the Army had not fielded a new combat system in decades.
Secretary of the Army Mark Esper has remarked that AFC will provide the unity of command and purpose needed to reduce the requirements definition phase from 60 months to 12 months.
A simple statement of a problem (rather than a full-blown requirements definition) that the Army is trying to address may suffice for a surprising, usable solution. —General Mike Murray, paraphrasing Trae Stephens. (One task will be to quantify the lead time for identifying a requirement; the next task would then be to learn how to reduce that lead time, a gap analysis.) Process changes are expected. The ASA(ALT) Bruce Jette has cautioned the acquisition community to call out unrealistic processes which commit a program to a drawn-out failure, rather than failing early and seeking another solution.
Secretary Esper scrubbed through 800 modernization programs to reprioritize funding for the top six modernization priorities: eighteen systems which will consume 80% of the modernization funding. The Budget Control Act (BCA) will restrict funds by 2020; the BCA expires in 2022. Secretary McCarthy has cautioned that a stopgap 2019 Continuing resolution (CR) would halt development of some of the critical modernization projects. Realistically, budget considerations will restrict the fielding of new materiel to one Armor BCT per year; at that rate, updates would take decades. The "night court" budget review process realigned $2.4 billion for modernization away from programs which were not tied to modernization or to the 2018 National Defense Strategy. The total FY2021 budget request of $178 billion is $2 billion less than the enacted FY2020 budget of $180 billion.
The CIO/G6 has targeted Futures Command (Austin) in 2019 as the first pilot for "enterprise IT-as-a-service"-style service contracts; General Murray now (July 2019) has a sensitive compartmented information facility in his headquarters, as a result of this pilot. Two other locations are to be announced for 2019. Six to eight other pilots are envisioned for 2020. However, 288 other enterprise network locations remain to be migrated away from the previous "big bang" migration concept from several years ago, as they are vulnerable to near-peer cyber threats. The CIO/G6 emphasizes that this enterprise migration is not the tactical network espoused in the top six priorities (a 'mobile & expeditionary Army network').
After AFC, the following G6 service contracts are high priority:
The Combat Training Centers (Fort Irwin, Fort Polk, and Grafenwöhr)
TRADOC and its Centers of Excellence (CoEs)
The power projection bases from which deployments spring
By February 2020 the Vice Chief of Staff could assess that Army modernization was perceptibly speeding up.
Silos
Chief Milley noted that AFC would actively reach out into the community in order to learn, and that Senator John McCain's frank criticism of the acquisition process was instrumental for modernization reform at Futures Command. In fact, AFC soldiers would blend into Austin by not wearing their uniforms [to work side by side with civilians in the tech hubs], Milley noted at a 24 August 2018 press conference. Secretary Esper said he expected failures during the process of learning how to reform the acquisition and modernization process; the Network CFT and PEO have detected a process failure in the DOT&E requirements process: some test requirements were inappropriately applied.
In the Department of Defense, the materiel supply process was underwritten by the acquisition, logistics, and technology directorate of the Office of the Secretary of Defense (OSD), with a deputy secretary of defense (DSD) to oversee five areas, one of them being acquisition, logistics, and technology (ALT). ALT is overseen by an under secretary of defense (USD). (Each of the echelons at the level of DSD and USD serve at the pleasure of the president, as does the secretary of defense (SECDEF).) The Defense Acquisition University (DAU) trains acquisition professionals for the Army as well.
In 2016 when RDECOM reported to AMC (instead of to AFC, as it does as of 2018), AMC instituted Life cycle management command (LCMC) of three of RDECOM's centers for aviation and missiles, electronics, and tanks: AMRDEC, CERDEC, and TARDEC respectively, as well as the three contracting functions for the three centers.
This Life Cycle Management (formulated in 2004) was intended to exert the kind of operational control (OPCON) needed just for the sustainment function (AMC's need for Readiness today), rather than for its relevance to modernization for the future, which is the focus of AFC. AFC now serves as the deciding authority when moving a project in its Life Cycle, out of the Acquisition phase and into the Sustainment phase.
Due to the COVID-19 pandemic, the Acquisition Executive and the AFC commander created a COVID-19 task force to try to project supplier problems 30, 60, and 90 days out; they are tracking 800 programs and 35 priorities, respectively, on a daily basis.
Relevance for modernization
The CFTs, as prioritized 1 through 6 by the Chief of Staff of the Army (CSA), each have to consider constraints: a balance of requirements, acquisition, science and technology, test, resourcing, costing, and sustainment.
The Doctrine, Organization, Training, Materiel, Leadership and education, Personnel and Facilities (DOTMLPF) method of mission planning was instituted to quantify tradeoffs in joint planning. TRADOC's Mission Command CoE uses DOTMLPF.
DOTMLPF will be used for modernization of the Army beyond materiel alone, which (as of 2019) is the current focus of the CFTs.
The updated modernization strategy, to move from concept to doctrine as well, will be unveiled by summer 2019.
DOTMLPF (doctrine, organization, training, materiel, leadership and education, personnel, and facilities) itself is planned as a driver for modernization. The plan is to have an MDO-capable Army by 2028, and an MDO-ready Army by 2035.
TRADOC, ASA (ALT), and AFC are tied together in this process, according to Vice Chief McConville. AFC will have to be "a little bit disruptive [but not upsetting to the existing order]" in order to institute reforms within budget in a timely way.
The ASA(ALT), or Assistant Secretary of the Army for acquisition, logistics, and technology is currently (2018) Dr. Bruce Jette. The ASA (ALT) is the civilian executive overseeing both the acquisition and the sustainment processes of the Department of the Army. The ASA(ALT) will coordinate the acquisition portion of modernization reform with AFC.
Congress has given the Army Other Transaction Authority (OTA), which allows the PEOs to enter Full Rate Production more quickly by permitting the services, rather than DoD, to control their own programs of record. This strips out one layer of bureaucracy, as of 2018.
MTA (middle tier acquisition authority) is another tool available to Program Managers and Contracting Officers.
Besides the AFC CFTs, the Army Requirements Oversight Council (AROC) could also play a part in acquisition reform; as of September 2018 the Deputy Chief of Staff G-8 (DCS G-8), who leads AROC and JROC (Joint Requirements Oversight Council), has aligned with the priorities of AFC.
The DCS G-8 is principal military advisor to the ASA (FM&C).
In addition, the Program Executive Officers (PEOs) of ASA (ALT) are to maintain a dotted-line relationship (i.e., coordination) with Futures Command.
There is now a PEO for Rapid Capabilities, to get rapid turnaround. The Rapid Capabilities Office (RCO)'s PEO gets two program managers, one for rapid prototyping of a capability and one for its rapid acquisition. The RCO does not develop its own requirements; rather, it gets the requirements from the Cross-functional team (CFT). The RCO was headed by Tanya Skeen as PEO, but Skeen moved to DoD in late 2018. In 2019 the RCO became the Rapid Capabilities and Critical Technologies Office (RCCTO), at Redstone Arsenal, headed by LTG L. Neil Thurgood, lately of ASA(ALT)'s Army Hypersonics office.
Progress toward MDO
Then-CG of Army Futures Command (AFC) Gen. Murray announced full operational capability (FOC) 31 July 2019.
The Army G8 is monitoring just how producible (Milestone C) the upcoming materiel will be; for the moment, the G8 is funding the materiel. Follow-up on Modernization reviews is forthcoming, on a regular basis, according to the G8.
The progress in the top six priorities (long-range precision fires, Next Generation Combat Vehicle, Future Vertical Lift platforms, a mobile & expeditionary Army network, air & missile defense capabilities, and soldier lethality) being:
Long Range Precision Fires
According to AFC, the mission of the Long Range Precision Fires (LRPF) CFT is to "deliver cutting-edge surface-to-surface (SSM) fires systems that will significantly increase range and effects over currently fielded US and adversary systems."
AFC's five major programs for LRPF are:
The Extended Range Cannon Artillery (ERCA) program, which develops a system capable of firing accurately at targets beyond 70 km, as opposed to the M109A7's current 30 km range
The Precision Strike Missile (PrSM), a precision-strike guided SSM fired from the M270A1 MLRS and M142 HIMARS, doubling the present rate of fire with two missiles per launch pod
The Strategic Long-Range Cannon (SLRC) program develops a system that can fire a hypersonic projectile up to 1,000 miles against air defense, artillery, missile systems, and command and control targets
The Common-Hypersonic Glide Body (C-HGB) is a collaborative program between the Army, Navy, Air Force, and Missile Defense Agency (MDA) which is planned to become the base of the Long-Range Hypersonic Weapon (LRHW) program
A ground-launchable UGM-109 Tomahawk Land Attack Missile to fill the gap in the Army's mid-range missile capabilities.
Based on Futures Command's development between July 2018 and December 2020, by 2023 the earliest versions of these weapons will be fielded:
The kill chains will take less than 1 minute, from detection of the target, to execution of the fires command; these operations will have the capability to precisely strike "command centers, air defenses, missile batteries, and logistics centers" nearly simultaneously.
The speed of battle damage assessment will depend on the travel time of the munition. This capability depends on the ability of a specialized CFT, Assured precision navigation and timing (APNT) to provide detail.
Long Range Precision Fires (LRPF): Howitzer artillery ranges have doubled, with accuracy within 1 meter of the aimpoint, currently with sufficient accuracy to intercept cruise missiles as of September 2020, reaching the 43-mile range as of December 2020.
Precision Strike Missiles (PrSMs) can reach in excess of 150 miles, with current 2020 tests
Mid-range capability (MRC) fires can reach in excess of 500 to 1000 miles, using mature Navy missiles
Long-Range Hypersonic Weapons (LRHWs) are to have a range greater than 1725 miles. Strategic long range cannon (SLRC) ranges are to be announced.
The current M109A6 "Paladin" howitzer range is doubled in the M109A7 variant. An operational test of components of the Long range cannon was scheduled for 2020. The LRC is complementary to Extended range cannon artillery (ERCA), the M1299 Extended Range Cannon Artillery howitzer. Baseline ERCA is to enter service in 2023. Investigations for ERCA in 2025 include rocket-boosted artillery shells: tests of the Multiple launch rocket system (MLRS) XM30 rocket shell have demonstrated a near-doubling of the range of the munition, using the Tail controlled guided multiple launch rocket system, or TC-G. The TRADOC capability manager (TCM) Field Artillery Brigade - DIVARTY has been named a command position.
An autoloader for ERCA's 95-pound shells is under development at Picatinny Arsenal, to support a sustained firing rate of 10 rounds a minute. A robotic vehicle for carrying the shells is a separate effort at Futures Command's Army Applications Lab.
The Precision Strike Missile (PrSM) is intended to replace the Army Tactical Missile System (MGM-140 ATACMS) in 2023. PrSM flight testing is delayed beyond 2 August 2019, the anticipated date for the expiration of the Intermediate-range Nuclear Forces Treaty, which set 499 kilometer limits on intermediate-range missiles. (David Sanger and Edward Wong projected that the earliest test of a longer range missile could be a ground-launched version of a Tomahawk cruise missile, followed by a test of a mobile ground launched IRBM with a range of 1800–2500 miles before year-end 2019.) The 2020 National Defense Authorization Act (NDAA) was approved on 9 December 2019, which allowed the Pentagon to continue testing such missiles in FY2020. The Lockheed PrSM prototype had its first launch on 10 December 2019 at White Sands Missile Range, in a 150-mile test, and an overhead detonation; the Raytheon PrSM prototype was delayed from its planned November launch, and Raytheon has now withdrawn from the PrSM risk reduction phase. The PrSM's range and accuracy, the interfaces to HIMARS launcher, and test software, met expectations. PrSM passed Milestone B on 1 October 2021. Baseline PrSM is to enter service in 2023.
For targets beyond the PrSM's range, the Army's RCCTO will seek a mid-range missile prototype by 2023, with a reach from 1000 to 2000 miles. Loren Thompson points out that a spectrum of medium-range to long-range weapons will be available to the service by 2023; RCCTO's prototype Mid-Range Capability (MRC) battery will field mature Navy missiles, likely for the Indo-Pacific theater in FY2023. DARPA is developing OpFires, an intermediate-range hypersonic weapon which is shorter-range than the Army's LRHW. DARPA is seeking a role in the armory for OpFires' throttle-able rocket motor, post-2023. These weapons will likely require planning for new Army (or Joint) formations.
The Long-Range Hypersonic Weapon (LRHW) will use precision targeting data against anti-access/area denial (A2AD) radars and other critical infrastructure of near-peer competitors by 2023. LRHW depends on stable funding.
Advanced Field Artillery Tactical Data System (AFATDS) 7.0 is the vehicle for a Multi-Domain Task Force artillery battery organized much like a THAAD battery: beginning in 2020, these batteries will train with a hypersonic glide vehicle that is common to the Joint forces. The Long-Range Hypersonic Weapon (LRHW) glide vehicle is to be launched from transporter erector launchers. Tests of the Common Hypersonic Glide Body (C-HGB) to be used by the Army and Navy were meeting expectations in 2020.
In August 2020 the director of Assured precision navigation and timing (APNT) CFT announced tests which integrate the entire fires kill chain, from initial detection to final destruction. William B. Nelson announced the flow of satellite data from the European theater (Germany), and AI processing of AFATDS targeting data to the fires units.
In September 2020 an AI kill chain was formulated in seconds; a hypervelocity (speeds up to Mach 5) munition, launched from a descendant of the Paladin, intercepted a cruise missile surrogate.
Three flight tests of LRHW were scheduled in 2021; that plan was changed to one test in late 2021, followed by a multi-missile test in 2022.
The LRHW has been named 'Dark Eagle'. The first LRHW battery will start to receive its first operational rounds in early FY2023; all eight rounds for this battery will have been delivered by FY2023. By then, PEO Missiles and Space will have picked up the LRHW program, for batteries two and three in FY'25 and FY'27, respectively. Battery one will first train, and then participate in the LRHW flight test launches in FY'22 and FY'23.
Next Generation Combat Vehicle
Next Generation Combat Vehicle (NGCV) portfolio:
The use of modular protection is a move toward modular functionality for combat vehicles.
At Yuma Proving Ground (YPG), Firestorm (a Project Convergence AI node) sent targeting coordinates to Remote Weapons Stations, which were proxies for the Robotic Combat Vehicles and Optionally Manned Fighting Vehicles. A CROWS was slewed to the aimpoint, awaiting the human commander's order to fire. Firestorm both contributes to and draws on the Common operational picture (COP) shared with the AI hub at Joint Base Lewis-McChord. Satellite-based, F-35-based, and Army ground-based targeting data were shared in real time during Firestorm's operation with the AI hubs to produce effects at YPG.
Firestorm was made possible by a mesh network—improvising a medium earth orbit (MEO, at 1,200-mile altitude), and then a geosynchronous earth orbit (GEO, at 22,000-mile altitude) satellite link between Joint Base Lewis-McChord and Yuma Proving Ground.
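The altitudes quoted above dominate the propagation latency of such satellite links, which is one reason an MEO hop and a GEO hop behave very differently. A minimal sketch computing one-way propagation delay from the figures in the text (the straight-up geometry is a simplifying assumption; real links add slant range, ground-segment, and processing delays):

```python
# Illustrative one-way propagation delay for the satellite altitudes
# cited above (MEO ~1,200 mi, GEO ~22,000 mi). Straight-up geometry
# only; slant range and processing delays are ignored.
C_MI_PER_S = 186_282  # speed of light, miles per second

def one_way_delay_ms(altitude_miles: float) -> float:
    """Straight-up propagation delay to a satellite, in milliseconds."""
    return altitude_miles / C_MI_PER_S * 1000

meo = one_way_delay_ms(1_200)    # ~6.4 ms
geo = one_way_delay_ms(22_000)   # ~118 ms
print(f"MEO: {meo:.1f} ms, GEO: {geo:.1f} ms")
```

Even before routing overhead, a GEO hop costs roughly twenty times the propagation delay of an MEO hop, which matters for time-sensitive sensor-to-shooter traffic.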
Armored Multi-Purpose Vehicle (AMPV): in Limited User Tests; the General Purpose variant supports Blue Force Tracking.
An Advanced Powertrain Demonstrator, compact enough for AMPVs, Bradleys, OMFVs, or RCVs, can generate 1,000 horsepower from diesel. Alternatively, the demonstrator can generate electrical power: 160 kilowatts for SHORAD high-energy lasers, or for propulsion of a 50-ton vehicle in quiet mode for brief periods.
A ground mobility vehicle competition, with bids closing 26 October 2018.
The JLTV was approved for full rate production in June 2019. Joint Modernization Command (JMC) is supporting a TCM Stryker study on the optimum number of JLTVs for light infantry brigades.
Electrification microgrid standards
AFC's Futures and concepts center is proposing a strategy to guide the electrification of the GCVs, using the JLTV as an example for a step-by-step pathway and transition plan for electrification. Loren Thompson cautions that electrification per se could harm further fielding due to scope creep in specifications for the JLTV. The Army has not requested a hybrid electric JLTV.
The Maneuver CDID (MCDID) is undertaking the requirements development for electrification of Tactical and Combat Vehicles in September 2020; General Wesley had previously announced a plan in April 2020 for the modernization of Tactical and Combat Vehicles using the JLTV electrification plan as a prototype template of the electrification process.
After prototype JLTV electrification, the Army is seeking ideas for an electrified Light Reconnaissance Vehicle (LRV) by 2025. The LRVs would complement the Infantry Squad Vehicles (ISVs) and electrified versions of the already-fielded Stryker Infantry Carrier Vehicle – Dragoon. GM Defense has since converted one of its bid vehicles for the ISV to an all-electric version.
Mobile Protected Firepower (MPF): approved by the Joint Requirements Oversight Council. Two vendors were selected to build competing prototype light tanks (MPF), with contract award in 2022. A unit of the 82nd Airborne Division will begin assessment of prototype MPFs in March 2020.
Optionally Manned Fighting Vehicle (OMFV): soliciting input, in requirements definition stage; the 2018 requirement was that 2 OMFVs fit in a C-17. A request for proposal for a vehicle prototype was placed 29 March 2019. On 16 January 2020 the Optionally Manned Fighting Vehicle solicitation was cancelled, as a middle tier acquisition in its early stage; the requirements and schedule are being revisited. The FY2021 budget request has been adjusted accordingly.
An Army development team will not be an OMFV competitor as of 17 September 2020.
NGCV Optionally Manned Fighting Vehicle: OMFV is getting some industry silhouettes which may be incorporated in digital designs for 2023, with prototypes by 2025. A fifth OMFV bidder (a small business) is still a contender in a competition that otherwise includes large consortia. However, Mark Cancian points out that OMFV might not be suitable for a pivot to the Pacific theater.
A hybrid electrified Bradley Fighting Vehicle is slated for January 2022 by RCCTO.
Robotic Combat Vehicles (RCVs): General Murray envisions that by FY2023 critical decisions will be made on RCVs after years of experimentation. See Uran-9 (Уран-9)
Next Generation main battle tank: § Futures
Future Vertical Lift
Future Vertical Lift (FVL) is a plan for a family of military helicopters for the United States Armed Forces using common elements such as sensors, avionics, and engines. Five different sizes of aircraft are to be developed, to replace the wide range of rotorcraft in use. The project began in 2009. By 2014, the SB-1 Defiant and V-280 Valor had been chosen as demonstrators.
The FVL CFT has secured approval for the requirements in all four of its Lines of Effort:
Future Vertical Lift will use the DoD modular open systems approach (MOSA), an integrated business and technical strategy, in FARA and in FLRAA. Both FLRAA and FARA are to enter service by Fiscal Year 2030. By abstracting its requirements, the Army was able to request prototypes which used new technologies.
Joint Multi-Role Technology Demonstrator (JMR-TD) prototypes are to be built by two teams to replace Sikorsky UH-60 Black Hawks with the Future Long-Range Assault Aircraft (FLRAA). The tilt-rotor FLRAA demonstrator by Bell was flying unmanned as of October 2019; it had logged 100 hours of flight testing by April 2019. Both Bell and Sikorsky-Boeing received contract awards to compete in a risk reduction effort (CDRRE) for FLRAA in March 2020. The risk reduction effort will be a two-phase, two-year competition. The competition will transition technologies (powertrain, drivetrain, and control laws) from the previous demonstrators (JMR-TDs) of 2018–2019 to requirements, conceptual designs, and an acquisition approach for the weapon system. The Army wants flight testing of FLRAA prototypes to begin in 2025, with fielding to the first units in 2030.
The Future Attack Reconnaissance Aircraft (FARA) is smaller than FLRAA. The Army's requests for proposals (RFPs) for FARA were due in December 2018.
A long-range precision munition for the Army's aircraft will begin its program of design and development. In the interim, the Army is evaluating the Spike non-line-of-sight missile, with an 18-mile range, on its Boeing AH-64E Apache attack helicopters.
Mobile, Expeditionary Network
In Fiscal Year 2019, the network CFT will leverage Network Integration Evaluation 18.2 for experiments with brigade level scalability.
Integrated Tactical Network (ITN) "is not a new or separate network but rather a concept"—PEO C3T. Avoid overspecifying the requirements for Integrated Tactical Network Information Systems Initial Capabilities Document. Instead, meet operational needs, such as interoperability with other networks, and release ITN capabilities incrementally.
Up through 2028, every two years the Army will insert new capability sets for ITN (Capability Sets '21, '23, '25, etc.) and take feedback from Soldier-led experiments and evaluation. However, the Army's commitment to a 'campaign of learning' showed more paths:
There are plans for a Project Convergence 2021. The Army fielded a data fabric at Project Convergence 2020; this will eventually be part of JADC2.
Five Rapid Innovation Fund (RIF) awards were granted to five vendors via the Network CFT and PEO C3T's request for white papers. That request, for a roll-on/roll-off kit that integrates all functions of mission command on the Army Network, was posted at the National Spectrum Consortium and FedBizOpps, and yielded awards within eight months. Two more awards are forthcoming.
The Rapid Capabilities Office (RCO)'s Emerging Technologies Office structured a competition to find superior AI/Machine Learning algorithms for electronic warfare, from a field of 150 contestants, over a three-month period.
The Multi-Domain Operations Task Force (MDO TF) is standing up an experimental Electronic Warfare Platoon to prototype an estimated 1000 EW soldiers needed for the 31 BCTs of the active Army.
Capability Set '21 fields ITN to selected infantry brigades to prepare for the Integrated Visual Augmentation System (IVAS) goggles. Expeditionary signal brigades get enhanced satellite communications.
1/82nd Airborne, 173rd Airborne, 3/25th ID, and 3/82nd Airborne infantry brigades will all have fielded the Integrated Tactical Network Capability Set '21 by year-end 2021. 2nd Cavalry Regiment is getting Capability Set '21 on Strykers, which will test the CS'23 network design on Strykers early.
Integrated Tactical Network (ITN) Capability Set '23 is prototyping JADC2 communications and the data fabric, to LEO (Low earth orbit) and to MEO (Medium earth orbit) satellites, as continued in Project Convergence 2021 in Yuma Proving Ground.
Integrated Tactical Network (ITN) Capability Set '25 will implement JADC2, according to the acting head of the Network CFT. The command post footprint will be reduced.
G-6 John Morrison is seeking to unify the battlefield networks of ITN, and IEN (Enterprise Network), as of September 2021.
An Army leader dashboard from PEO Enterprise Information Systems is underway. The dashboard is renamed Vantage. Cloud-service-provider agnostic abstraction layers are in use, which allows merging the staff work in G-3/5/7 for cyber/EW (electronic warfare), mission command, and space. The "seamless, real-time flow of data" across multiple domains (land, sea, air, space, and cyberspace) is an objective for G-6, as well as the sensor-to-shooter work at Futures command.
Fort Irwin, Fort Hood, Joint Base San Antonio, and Joint Base Lewis McChord have 5G experiments on wireless connectivity between forward operating bases and tactical operations centers, as well as nonaircraft Augmented reality support of maintenance and training.
The Multi-domain task forces (MDTFs) will be used to expose any capability gaps in the Unified network plan.
Air, Missile Defense
Air, Missile Defense (AMD):
Integrated Air and Missile Defense Battle Command System
The US' Integrated Air and Missile Defense Battle Command System (IBCS) is intended to let any defensive sensor (such as a radar) feed its data to any available weapon system (colloquially, "connect any sensor to any shooter"). IBCS' second limited user test was scheduled to take place in the fourth quarter of FY20. On 1 May 2019 an Engagement Operations Center (EOC) for the Integrated Air and Missile Defense (IAMD) Battle Command System (IBCS) was delivered to the Army, at Huntsville, Alabama.
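The "connect any sensor to any shooter" idea amounts to decoupling sensors from weapons through a shared fire-control layer. A minimal sketch of that routing pattern, where every class and field name is a hypothetical illustration and nothing here reflects actual IBCS interfaces:

```python
# Minimal sketch of "any sensor, any shooter" routing: sensors publish
# tracks to a shared bus, and any subscribed weapon able to engage may
# take the track. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: str
    threat_type: str
    range_miles: float

@dataclass
class Shooter:
    name: str
    max_range_miles: float

    def can_engage(self, track: Track) -> bool:
        return track.range_miles <= self.max_range_miles

    def engage(self, track: Track) -> str:
        return f"{self.name} engages {track.track_id}"

@dataclass
class FireControlBus:
    """Shared layer decoupling sensors (publishers) from shooters (subscribers)."""
    shooters: list = field(default_factory=list)

    def subscribe(self, shooter: Shooter) -> None:
        self.shooters.append(shooter)

    def publish(self, track: Track):
        # Offer the track to every shooter; the first able to engage takes it.
        for shooter in self.shooters:
            if shooter.can_engage(track):
                return shooter.engage(track)
        return None  # no shooter in range

bus = FireControlBus()
bus.subscribe(Shooter("Patriot", 100))
bus.subscribe(Shooter("THAAD", 120))
# A track published by any radar is routed to whichever shooter can reach it.
print(bus.publish(Track("T-01", "MRBM", 110)))  # THAAD engages T-01
```

The point of the pattern is that adding a new radar or a new interceptor changes neither side's code: both only speak to the shared bus.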
IAMD is intended to integrate:
Lower Tier Air and Missile Defense Sensor (LTAMDS), which replaces the Patriot radar, fits on a C-17 Globemaster, and feeds data to IBCS. LTAMDS uses gallium nitride (GaN) RF elements. PEO RCO is accelerating LTAMDS experimentation by concentrating on two competitors, with award by 2023. The fielding aim for LTAMDS is 2022. LTAMDS is being renamed GhostEye MR.
Lockheed Martin F-35 Lightning II, Aegis, the Patriot missile system, LTAMDS, and Terminal High Altitude Area Defense (THAAD) radars will interoperate. On 30 August 2019 at Reagan Test Site on Kwajalein atoll, THAAD Battery E-62 successfully intercepted a medium range ballistic missile (MRBM), using a radar which was well-separated from the interceptors; the next step tested Patriot missiles as interceptors while using THAAD radars as sensors; a THAAD radar has a longer detection range than a Patriot radar. THAAD Battery E-62 engaged the MRBM without knowledge of just when the medium range ballistic missile had launched.
In July 2020 a Limited User Test (LUT) of IBCS was initiated at WSMR; the test ran until mid-September 2020. The LUT was originally scheduled for May but was delayed to accommodate COVID-19 safety protocols. The first of several LUTs of IBCS, by an ADA battalion, was successfully run in August 2020. IBCS successfully integrated data from two sensors (Sentinel and Patriot radars) and shot down two drones (cruise missile surrogates) with two Patriot missiles in the presence of jamming. In the week after, by 20 August 2020, two more disparate threats (a cruise missile and a ballistic missile) were launched and intercepted; the ADA battalion then ran hundreds of drills denoting hundreds of threats for the remainder of the IBCS tests (the increased effort occupied the entire unit); the real-world data serve as a sanity check for Monte Carlo simulations of an array of physical scenarios amounting to hundreds of thousands of cases. IBCS created a "single uninterrupted composite track of each threat" and handed off each threat for separate disposition by the air and missile defense's integrated fire control network (IFCN). The same battalion that ran the LUT, for both IBCS and the LTAMDS radar, is scheduled to run the Initial Operational Test & Evaluation (IOTE) beginning in 2021 and running well into 2022. The ranges of the IAMD defensive radars, when operated as a system, are thousands of miles. Cross-domain information from ground, air, and space sensors was passed to a fire control system at Project Convergence 2021 (PC21), via IBCS, during one of the use-case scenarios. At PC21 IBCS fused sensor data from an F-35 tracking the target and passed that data to AFATDS (Advanced Field Artillery Tactical Data System); the F-35 then served as a spotter for artillery fire on the ground target. IBCS is projected to reach its initial operating capability (IOC) in Fiscal Year 2022.
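Live-fire results of the kind described above serve as a sanity check for simulation campaigns spanning hundreds of thousands of randomized cases. A minimal sketch of such a Monte Carlo estimate, with an invented single-shot kill probability and scenario structure (nothing here reflects actual IBCS parameters):

```python
# Illustrative Monte Carlo estimate of engagement success across many
# randomized trials. The kill probability and shot count are invented
# for illustration only.
import random

def simulate(n_trials: int, p_kill_per_shot: float, shots: int,
             seed: int = 0) -> float:
    """Fraction of trials in which at least one of `shots` intercepts succeeds."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    successes = 0
    for _ in range(n_trials):
        if any(rng.random() < p_kill_per_shot for _ in range(shots)):
            successes += 1
    return successes / n_trials

# Two shots at 0.7 each: analytically 1 - 0.3**2 = 0.91, which the
# simulation should approach as trials grow.
estimate = simulate(100_000, p_kill_per_shot=0.7, shots=2)
print(f"estimated engagement success: {estimate:.3f}")
```

A handful of real intercepts cannot validate such a model statistically, but they can falsify it: if the live outcomes fall far outside the simulated distribution, the model's assumptions are wrong.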
In September 2020 a Joint exercise against cruise missiles demonstrated AI-based kill chains which can be formulated in seconds; one of the kills was by an "M109-based" tracked howitzer (a Paladin descendant).
Although on 21 August 2019 the Missile Defense Agency (MDA) cancelled the $5.8 billion contract for the Redesigned Kill Vehicle (RKV), the Army's 100th Missile Defense Brigade will continue to use the Exo-Atmospheric Kill Vehicle (EKV). The current Ground-based Midcourse Defense (GMD) programs continue per plan, with 64 ground-based interceptors (GBIs) planned in the missile fields for 2019. Command and Control Battle Management and Communications (C2BMC) was developed by the Missile Defense Agency (as a development organization) and is integrated with GMD, as demonstrated by FTG-11 on 25 March 2019. By March 2021, the decision to approve further development of the Next Generation Interceptor is on the agenda for the 35th Deputy Secretary of Defense Kathleen Hicks. Hicks has an extensive background in defense modernization; the 28th Secretary of Defense Lloyd Austin has recused himself from acquisition matters.
The TRADOC capability manager (TCM) for Strategic Missile Defense (SMD) has accepted the charter for DOTMLPF for the Space and Missile Defense Command (SMDC/ARSTRAT).
High Energy Laser Tactical Vehicle Demonstrator
A contract for the U.S. Army Space and Missile Defense Command/Army Forces Strategic Command's High Energy Laser Tactical Vehicle Demonstrator (HEL TVD) laser system, a 100 kilowatt laser demonstrator for use on the Family of Medium Tactical Vehicles, was awarded 15 May 2019 to Dynetics-Lockheed. A 300 kilowatt laser demonstrator (HEL-IFPC) effort supersedes the HEL TVD (after the critical design review). System test at White Sands Missile Range in 2023.
Indirect fire protection capability (IFPC) Multi-mission launcher (MML) fielding 50 kW lasers on Strykers in 2021 and 2022 to two battalions per year.
Maneuver short-range air defense (MSHORAD), with laser cannon prototypes in 2020. In July 2021 RCCTO conducted a combat shoot-off on how to control the pointing of these high-energy lasers. Raytheon is providing the high energy laser (Directed Energy Maneuver-Short Range Air Defense system—DE M-SHORAD) for the Strykers in 2022.
RCCTO has awarded a contract to build a 300 kW high-energy laser (HEL) for the Army in FY2022, capable of defending against airborne threats, by acquiring, tracking, and maintaining the HEL's aimpoint on the threat until it goes down.
Soldier lethality
Soldier Lethality:
Next-generation squad weapon: Expect 100,000 to be fielded to the Close Combat Force: Infantry, Armor, Cavalry, Special Forces, and Combat engineers. Tests at Fort Benning in 2019. —Chief of Staff Milley
Nine thousand systems, with two drones apiece, are being purchased over a three-year period for the 9-man infantry squads heading to Afghanistan.
Integrated Visual Augmentation System (IVAS) —an augmented reality display— allows soldiers to use multiple sensors to fight.
Enhanced night vision goggles (ENVG)-B, will be fielded to an Armor brigade combat team (ABCT) going to South Korea in October 2019
A CCDC program which instrumented a battalion with sleep monitors, Redibands, and smartwatches to detect exertion identified soldiers with elevated heart rates, indicating the beginnings of a streptococcus infection. The condition, caught by the medics before the battalion deployed to Afghanistan, would otherwise have impacted the unit.
Synthetic Training Environment (STE)—a CFT devoted to an augmented reality system to aid planning, using mapping techniques, even at squad level—will begin fielding by 2021. In October 2019 the STE prototype was being used by Special Operations for planning actual missions. Development of the STE is to be accelerated to meet MDO and JADC2 training demands.
On the battlefield of the future, where no headquarters is safe for long, the commander's task is:
"Avoid being detected and targeted."
"Work through and survive attacks."
"Rapidly recover from losses."
Thus the commander has to be continuously aware of the current status (that is: alive or not) of the deputy commander (and the staff) so that the mission can be completed.
Enterprise campaign planning
In 2019 DoD planners are exercising Doctrine, organization, training, materiel, leadership and education, personnel, and facilities (DOTMLPF) in planning, per the National Defense Strategy (NDS),
in the shift from counterinsurgency (COIN) to competition with near-peer powers. The evaluations from planners' scenarios will be determining materiel and organization by late 2020.
Futures Command is formulating multiyear Enterprise campaign plans, in 2019. The planning process includes Army Test and Evaluation Command (ATEC), AFC's cross-functional teams (CFTs), Futures and Concepts (FCC), Combat Capabilities Development Command (CCDC), and Army Reserve's Houston-based 75th Innovation Command. At this stage, one goal is to formulate the plans in simple, coherent language which nests within the national security strategic documents.
Futures
AFC faces multiple futures, both as threat and opportunity. The Army's warfighting directive, viz., "to impose the nation's political will on its enemy" —Chief of Staff Milley, is to be ready for multiple near-term futures.
Under Secretary McCarthy notes that Gen. Murray functions as the Army's Chief Investments Officer (more precisely, its "chief futures modernization investment officer"). Funding for the top six priorities could mean that existing programs might be curtailed.
In the top six priorities:
LRPF Long range precision fires
Hypersonic materiel development: the Strategic long range cannon (SLRC), for a hypersonic projectile, is meant to have a range up to 1,000 nautical miles. An early ballistic test took place at Naval Support Facility Dahlgren, as announced at AUSA in October 2019.
ERCA development at Picatinny Arsenal: evaluate several manufacturing technologies, tied to the XM1113 munition.
Targeting with thousand-mile missiles, "streamlining the sensor-shooter link at every echelon"—BG John Rafferty, in Integrated fire
NGCV Next generation combat vehicle
Much smaller and lighter ground combat vehicles, optionally unmanned (see Dedicated short-range communications for robotic vehicles).
If robotic combat vehicles (RCVs) do not need to be manned, neither would they need to be armored (see Uran-9); use of sensors and batteries could replace the armor. Soldiers have learned to remotely operate the weapons on such RCVs in several days; the CCDC RCV Center and CFT are placing RCV prototypes and the Soldier's vehicle prototypes in company-level scenarios in Europe, in 2020 and forward. Modified Bradley Fighting Vehicles and M113s at Fort Carson went through unit-level operations to gain experience with RCVs in July and August 2020. Future breaching operations will be affected in detail by the robotic breaching concept, according to the panel at the AUSA October 2020 meeting.
In October 2020 the Army's Chief of Staff reminded the force that "The time is now" to modernize for the future, including how the Army develops the systems themselves; if a soldier can now use IVAS to shoot around corners and hit the target, if soldiers and their units can use STE (synthetic training environment) to depict the mission's terrain and train for the mission before the conflict occurs, if deploying robotic reconnaissance vehicles at the time of the mission can smoke out defenses before committing manned combat vehicles against those defenses, then even light vehicles can transport soldiers in conflict, and precision fires can neutralize threats against those soldiers in a conflict. STE can depict these scenarios.
Robotic warfare, as a concept or capability at the Joint Corps echelon, was demonstrated at the operational level using Joint Warfighting assessment (JWA) 18.1 in April 2018.
JWA 19 (April–May 2019): I Corps, at Joint base Lewis-McChord, is getting modernization training on the robotic complex breaching concept (RCBC), and the command post computing environment (CPCE) from Joint modernization command (JMC) training staff.
Create decisive lethality: Robotic experiments
Jen Judson reports that Lt. Gen. Eric Wesley is proposing that the brigades begin to electrify their vehicles using hybrid, or all-electric propulsion, or perhaps other mobile power plants.
Modified M2 Bradleys (MET-Ds) and other RCVs operating at Fort Carson, and in Europe have used robotic software to operate the vehicles, for both logistics and also for combat maneuver. As of August 2020, the RCVs are able to perform limited waypoint navigation; multiple vehicles can be controlled by one human operator.
Smaller brigades and stronger division-level maneuver, with robotic aerial reconnaissance vehicles, robotic combat vehicles (RCVs), and long-range precision fires (LRPFs) are under consideration.
FVL "Our new approach is really to prototype as much as we can to help us identify requirements, so our reach doesn’t exceed our grasp. ... A good example is Future Vertical Lift: The prototyping has been exceptional." —Secretary of the Army Mark Esper
The Future Attack Reconnaissance Aircraft (FARA) scout helicopter prototypes are to be designed to fly along urban streets, to survive air defenses. Five design vendors were selected, with downselect to two for prototyping by February 2020.
These aircraft are envisioned as platforms for utilizing sensor networks to control and enable weapons delivery, as demonstrated in a 2019 experiment. In preparation for FVL platforms, the FVL CFT demonstrated a 2020 Spike non-line of sight missile launch from an Apache gunship at Yuma Proving Ground, for extended range capability; a forward air launch of an unmanned sensor aircraft (UAS) from a helicopter was demonstrated at YPG as well.
Mobile & Expeditionary Network / MDO Multi-domain operations
In the battlefield of the future, where nowhere is safe for long, "you will miss opportunities to get to positions of advantage if you don't synthesize the data very quickly"—LTG Wesley (AI for multi-domain command and control: MDC2)
ISR (intelligence, surveillance, and reconnaissance) needs to match the range of the upcoming LRPF (Long range precision fires) and thousand-nautical-mile missile standoff capability of the Army. Soldiers on the ground are now able to receive satellite ISR.
Cybersecurity: RAND simulations show Blue losses
Cyber warfare / urban warfare / underground warfare / multi-domain combined maneuver: robotic swarms are a tactic under consideration.
Assured Positioning, Navigation and Timing (A-PNT): A solar-powered drone successfully stayed aloft at Yuma Proving Ground for nearly 26 days in the summer of 2018, at times descending to 55,000 feet to avoid adverse weather conditions while remaining well above the altitudes flown by commercial aircraft, and landing per plan to meet other testing commitments.
An A-PNT event is scheduled at WSMR for August 2019
Prototype jam-resistant GPS kits are being fielded to 2nd Cavalry Regiment in US European Command (EUCOM) before year-end 2019. More than 300 Strykers of the 2nd Cavalry Regiment are being fitted with the Mounted Assured Precision Navigation & Timing System (MAPS), with thousands more planned for EUCOM.
A Modular Open Systems Approach (MOSA) to Positioning, Navigation and Timing (PNT) is under development.
Low Earth orbit satellites for Assured Positioning, Navigation and Timing—"When you look at the sheer number of satellites that go up and the reduced cost to do it, it gives us an array of opportunities on how to solve the problems" in A-PNT
CCDC Army Research Laboratory (ARL) researchers have proposed and demonstrated a way for small ground-based robots with mounted antennas to configure phased arrays, a technique which usually takes a static laboratory to develop. Instead the researchers used robots to covertly create and focus a highly directional parasitic array (see Yagi antenna).
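The directionality of any phased array, including the robot-formed arrays described above, comes from summing element contributions with progressive phase shifts. A textbook sketch of the array factor for a uniform linear array (element count and spacing are arbitrary illustrative values, not ARL's configuration):

```python
# Illustrative array-factor computation for an N-element uniform linear
# array, the textbook effect behind steering a phased array's beam.
import cmath
import math

def array_factor_db(n_elements: int, spacing_wl: float,
                    steer_deg: float, look_deg: float) -> float:
    """Normalized array factor (dB) at `look_deg`, steered to `steer_deg`.

    `spacing_wl` is the element spacing in wavelengths.
    """
    two_pi = 2 * math.pi
    steer = math.radians(steer_deg)
    look = math.radians(look_deg)
    # Sum element contributions with a progressive phase shift chosen so
    # they add coherently in the steering direction.
    total = sum(
        cmath.exp(1j * two_pi * spacing_wl * i
                  * (math.cos(look) - math.cos(steer)))
        for i in range(n_elements)
    )
    # Guard against exact pattern nulls before taking the logarithm.
    return 20 * math.log10(max(abs(total) / n_elements, 1e-12))

# An 8-element, half-wavelength-spaced array steered to 60 degrees:
print(array_factor_db(8, 0.5, 60, 60))   # 0.0 dB at the steering angle
print(array_factor_db(8, 0.5, 60, 90))   # well below 0 dB off-axis
```

Reconfiguring the robots' positions changes `spacing_wl` and the element geometry, which is what lets a mobile formation reshape its beam without a static laboratory setup.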
CCDC Army Research Laboratory (ARL): ARL's Army Research Office is funding researchers at University of Texas at Austin, and University of Lille who have built a new 5G component using hexagonal boron nitride which can switch at performant speeds, while remaining 50 times more energy-efficient than current materials—the "thinnest known insulator with a thickness of 0.33 nanometers".
CCDC Army Research Laboratory (ARL): ARL's Army Research Office (ARO) is seeking diamond colloids, microscopic spheres which can assemble bottom-up into promising structures for laser action.
Newly developed materials with nanoscale trusses could serve as armor or coatings.
A demonstration of proof of concept allows Soldiers to communicate their position using a wearable tracking unit. The technology allows soldiers (or robots) to prosecute a fight even indoors or underground, even if GPS were lost during a NavWar.
Air, Missile Defense is being reframed, as more integrated.
Integrated Air and Missile Defense Battle Command System (IBCS) award, including the next software build. $238 million also funds initial prototypes of the command and control system for fielding in FY22.
Hypersonic glide vehicle launch preparations, beginning in 2020, and continuing with launches every six months.
At Naval Air Weapons Station China Lake an FVL CFT-sponsored demonstration of interconnected sensors handed-off the control of a glide munition which had been launched from a Grey Eagle unmanned aircraft system (UAS). During the flight of that munition, another group of sensors picked up a higher-priority target; another operator at the Tactical Operations Center (TOC) redirected the glide munition to the higher-priority target and destroyed it.
Soldier lethality
Sensor-to-shooter prototype for multi-domain battle, 2019 operational assessment: Air Force RCO / Army RCO / Network CFT
Night vision goggles with a thermal polarimetric camera; Integrated Visual Augmentation System (IVAS). The Synthetic Training Environment (STE) is available to some of the troops outfitted with IVAS. Christine Wormuth, 25th Secretary of the Army, has identified the Army's work on a Common operating picture (COP) as foundational for the operation of the Joint services.
CCDC ARL researchers are developing a flexible, waterproof, lithium-ion battery of any size and shape, for soldiers to wear; the electrolyte is water itself. In 2020 the batteries were engineering prototypes; by 2021 soldiers will wear the battery themselves for the first time.
CCDC ARL and DoE's PNNL are examining the solid-electrolyte-interphase (SEI) as it first forms during the initial charging of a Lithium-ion battery. They have found an inner SEI (thin, dense, and inorganic—most likely lithium oxide) between the copper electrode, and an outer SEI which is organic and permeable—a finding which will be useful when building future batteries.
CCDC ARL and MIT researchers are formulating atomically thin materials to be layered upon soldiers' equipment and clothing for MDO information display and processing.
Integrated, wearable cabling for capabilities such as IVAS, NGSW, or Nett Warrior is under development; the potential exists to reduce 20 pounds of batteries to half that weight.
CCDC ARL is undertaking an Essential research program (ERP) in the processes underlying additive manufacturing (3D printing), which is applicable to munitions.
Natick Soldier RDEC has awarded an Other Transaction Authority (OTA) contract to prototype soldier exoskeletons which augment human leg strength under harsh conditions.
Plans for the Infantry Squad Vehicle (ISV) are underway. An ISV is meant to be airdropped for a squad of nine paratroopers. The GM design was selected; first unit is expected at 1/82nd AB division in February 2021.
Assured Positioning, Navigation and Timing (A-PNT) devices are being miniaturized, with increased redundant positioning sources. This aids wearability.
In September 2019 in the Maneuver CoE's Battle Lab at Fort Benning, OneSAF simulations of a platoon augmented by UAS drones, ground robots, and AI were able to dislodge a defending force 3 times larger, repeatedly. But by current doctrine, a near-battalion would have been required to accomplish that mission.
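A classical way to reason about how combat-power multipliers offset raw numbers is Lanchester's square law, under which numerical advantage enters as the square of force size. The sketch below is a toy model with invented effectiveness values, not the OneSAF simulation referenced above:

```python
# Illustrative Lanchester square-law simulation: a small force with a
# large per-soldier effectiveness multiplier (standing in for drones,
# robots, and AI augmentation) can defeat a force three times its size.
# Toy model only; all parameter values are invented.
def lanchester(blue: float, red: float, blue_eff: float, red_eff: float,
               dt: float = 0.01) -> str:
    """Integrate Lanchester's square law until one side is destroyed."""
    while blue > 0 and red > 0:
        blue_losses = red_eff * red * dt   # attrition inflicted by red
        red_losses = blue_eff * blue * dt  # attrition inflicted by blue
        blue -= blue_losses
        red -= red_losses
    return "blue wins" if blue > 0 else "red wins"

# 40 augmented soldiers vs 120 defenders: blue's 10x effectiveness edge
# overcomes red's square-law numbers advantage of (120/40)**2 = 9.
print(lanchester(blue=40, red=120, blue_eff=1.0, red_eff=0.1))  # blue wins
```

In the square law, blue prevails when blue_eff × blue² exceeds red_eff × red², which is why a roughly order-of-magnitude effectiveness multiplier is what a one-third-sized force needs to dislodge the defender in this toy setting.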
Headquarters (HQ)
AFC's headquarters is based in Austin, Texas, where it spreads across three locations totaling 75,000 square feet. One location is a University of Texas System building at 210 W. Seventh St. in downtown Austin, on the 15th and 19th floors; the UT Regents were not going to charge rent to AFC until December 2019. The command began initial operations on 1 July 2018.
Value stream
In a hearing before Congress' House Armed Services Committee, the AFC commander projected that materiel will result from the value stream below, within a two-year time frame, from concept to Soldier. The commanding general is assisted by three deputy commanders.
The Futures and Concepts Center is led by LTG Scott McKean. The first commander was AFC deputy commanding general Lt. Gen. (Ret.) Eric Wesley, who sought four value streams for reducing the time invested to define a relevant requirement:
Science and technology (S&T: discovery / collection of ideas with usable effects)
Experiments (Testing of a system to a known expectation of effects, or else observation of that system, in the absence of a specific expectation of effects)
Concepts development (Development of a relevant idea about that system)
Requirements development (Development of the terms and conditions for that system)
Combat Development element, Army Futures Command. Lt. Gen. James M. Richardson is the deputy commander. He assists the commander with efforts to assess and integrate the future operational environment, emerging threats, and technologies to develop and deliver concepts, requirements, and future force designs to posture the Army for the future.
The Capability development integration directorate (CDID) of each Center of Excellence (CoE), works with its CFT and its research, development and engineering center (RDEC) to develop operational experiments and prototypes to test.
The Battle Labs and The Research Analysis Center (TRAC) prototype and analyze the concepts to test.
JMC is capable of providing live developmental experiments to test those concepts or capabilities, "scalable from company level to corps, amid tough, realistic multi-domain operations".
RDECOM became the Combat Capabilities Development Command (CCDC), part of the Combat Development element, on 3 February 2019.
The Combat Systems Directorate was to be led by the ASA(ALT)'s Principal Military Deputy (PMILDEP), who would produce those developed solutions and seek feedback.
Gen. Robert Abrams has tasked III Corps with providing Soldier feedback for the Next Generation Combat Vehicle CFT; XVIII Corps with feedback for the Soldier Lethality, Network, and Synthetic Training Environment CFTs; and I Corps with feedback for the Long Range Precision Fires CFT.
Combat Systems refines, engineers, and produces the developed solutions from Combat Development.
An analysis by AMSAA can then assess that concept or capability, as a promising system for a materiel development decision.
Army Chief of Staff Milley expected AFC to attain full operational capability (FOC) by August 2019.
List of commanding generals
On 16 July 2018, Lieutenant General John M. Murray was nominated for promotion and appointment as Army Futures Command's first commanding general. His appointment was confirmed on 20 August 2018, and he assumed command during the official activation ceremony of AFC on 24 August 2018 in Austin, Texas.
Murray relinquished command of AFC on 3 December 2021.
See also
Military acquisition#In the United States
Military budget of the United States
Command systems in the United States Army
Air and Missile Defense
Combat Capabilities Development Command Soldier Center#Soldier Lethality
Manhattan Project
Notes
References
External links
AFC Events: Association of the United States Army (AUSA) Multi-Domain Operations event, 26–28 March 2019; previous event, 8–10 October 2018
2018 establishments in the United States
Buildings and structures in Austin, Texas
Commands of the United States Army
Military units and formations established in 2018
Futures
The COED Project

The COED Project, or the COmmunications and EDiting Project, was an innovative software project created by the Computer Division of NOAA, US Department of Commerce, in Boulder, Colorado in the 1970s. The project was designed, purchased, and implemented by the in-house computing staff rather than by any official organization.
Intent
The computer division had a history of frequently replacing its mainframe computers, starting with a CDC 1604, then a CDC 3600, a couple of CDC 3800s, and finally a CDC 6600. The department also had an XDS 940 timesharing system which supported up to 32 users on dial-up modems. Because requirements for computer resources changed rapidly, it was expected that new systems would be installed on a regular basis, and the resulting strain on users adapting to each new system was perceived to be excessive. The COED project was the result of a study group convened to solve this problem.
The project was implemented by the computer specialists who were also responsible for the purchase, installation, and maintenance of all the computers in the division. COED was designed and implemented over long hours of overtime. The data communications aspect of the system was fully implemented and resulted in greatly improved access to the XDS 940 and CDC 6600 systems. It was also used as the front end of the Free University of Amsterdam's SARA system for many years.
Design
A complete networked system was a pair of Modcomps: a Modcomp II handled up to 256 communication ports, and a Modcomp IV handled the disks and file editing. The system was designed to be fully redundant: if one pair failed, the other automatically took over. All computer systems in the network were kept time-synchronized so that all file dates and times would be accurate, synchronized to the National Bureau of Standards atomic clock housed in the same building. Another innovation was asynchronous dynamic speed recognition: after a terminal connected to a port, the user would type a Carriage Return character, and the software would detect the speed of the terminal (in the range of 110 to 9600 bit/s) and present a login message at the appropriate speed. Because of limitations of the operating systems that came with the Modcomps, new operating systems had to be created: CORTEX for the Modcomp IIs and BRAIN for the Modcomp IVs.
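The speed-recognition idea can be sketched in modern terms: when the user types a Carriage Return, the receiver measures the width of the start bit on the line and snaps the implied rate to the nearest supported speed. This is a minimal illustrative sketch, not COED's actual code; the function and constant names are hypothetical.

```python
# Illustrative autobaud sketch: infer a terminal's speed from the measured
# duration of the start bit of a typed Carriage Return. The rate table and
# function names are assumptions for illustration, not from the COED system.

STANDARD_RATES = [110, 300, 600, 1200, 2400, 4800, 9600]  # bit/s range COED handled

def detect_baud(start_bit_seconds):
    """Return the closest standard rate to 1 / (start-bit duration)."""
    if start_bit_seconds <= 0:
        raise ValueError("start bit duration must be positive")
    estimate = 1.0 / start_bit_seconds          # one bit time -> bits per second
    # Snap the noisy estimate to the nearest rate the hardware supports.
    return min(STANDARD_RATES, key=lambda r: abs(r - estimate))

# A start bit lasting 1/1200 s implies a 1200 bit/s terminal.
print(detect_baud(1 / 1200))   # 1200
# A slightly noisy measurement still snaps to the nearest rate.
print(detect_baud(0.00905))    # 110
```

Once the rate is known, the login banner can be transmitted at that speed, which matches the behavior described above.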
History
(Dates are approximate - from memory)
1970: First discussions of new communications system for XDS 940
1971: The COED Project was created
1972: The system was designed, funding was approved, a Request for Quote for the hardware was issued and executed
1973: The hardware components—2 Modcomp IV's and 2 Modcomp II's were delivered and installed and implementation began
1976: (April 8) First communication through COED to XDS 940 worked!
1979: project terminated
Staff
Those involved in the original design meetings were:
Ralph Slutz, George Sugar, Jim Winkelman and most of the COED implementors. Support was also provided by Tom Gray.
The COED implementors were:
W. Schyler (Sky) Stevenson, Project Manager and operating system implementer
Howard Bussey, Mark Emmer, David Lillie, and Vern Schryver. The 6600 interface to COED was implemented by Anthony Brittain, Dan Dechatelets and Kathy Browne.
External links
Author - David Lillie's homepage
Computer systems
Software projects
1970s establishments in Colorado |